The Big Brother Stalker

The problem is not that computers can recognise faces. It’s that they can’t forget them.
Arguments about surveillance and privacy are usually framed around Big Brother – the overweening state. But the widespread use of facial recognition in private hands suggests a more urgent danger: not just Big Brother but anyone in the family can watch, and profit from, our faces. Private landlords are already using facial recognition in their CCTV surveillance. It is not clear whether this is entirely legal, partly because the owners have been reluctant to disclose what they are actually doing.
This is a development that looks like the worst of all possible worlds. Visual recognition boosted by AI is cheap, widely available and easily programmed – one hobbyist has used it to train his catflap to open only when his cat was not trying to carry prey into the house – but it is also worryingly inaccurate. Recent trials by police forces in London and south Wales, among other places, have shown a high rate of false positives, and the rate of inaccuracy is much higher with black faces than with white. A technology that cannot in real life discriminate between individuals will only tend to increase the amount of discrimination in society as a whole. It will spread false confidence and real fear.
Many privacy pressure groups have therefore called for a moratorium on the use of these systems by the police. But this could only be a temporary solution. The technology has been steadily improving, and it is not unreasonable to expect that within five years it will be acceptably reliable as well as ubiquitous. This will pose a different set of problems from the obvious ones raised when it doesn’t work, and these will not be solved by an attempt to ban the technology altogether: even if governments are constrained, private actors will run ahead. It offers, for instance, huge advantages to retailers trying to curb shoplifting without much assistance from an overstretched police force. And, as the example of the central metropolitan development shows, significant parts of modern cities are now in fact private property, even when they appear to be public.
The dangers, then, will lie not only in the collection of data, which may be unavoidable, but also in its subsequent hoarding. The cameras themselves, like most of the so-called internet of things, will present endless vulnerabilities to determined hackers. So will the databases where their information is stored. Even when the data is not hacked, its recombination with material collected in other ways offers disquieting possibilities. It has been claimed that cameras can detect emotion from the way that people walk.
The possible uses of that technology for manipulation are obvious – but the real problem is that it is as bogus as a lie detector. Imagine a surveillance system that falsely identifies a black man from his face and then just as falsely attributes to him aggressive intent from his expression and the way he is standing. The dangers are obvious in countries where police routinely carry guns; but even if the system doesn’t summon armed force, it could still place a damning mark against the record of the victim it believes it has identified. The legal and institutional framework within which these dangers must be managed is still unclear; the urgency of the problem, though, is obvious.