Escaping the silent eye of facial recognition is all but impossible in the digital age, unless you live somewhere without technology or wear a religious or protective head covering. As surveillance cameras get cheaper and facial recognition grows more accurate, identifying people is becoming easier, and the technology is increasingly available for anyone to use.
The artificial intelligence used to identify faces is constantly being refined. A few years ago, humans recognized faces better than machines did; in the near future, machines will likely do it better and faster. A computer also has the added advantage of quickly matching images from multiple sources (e.g., social media apps).
Last week I read a New York Times article, “Unmasking a Company That Wants to Unmask Us All,” by Kashmir Hill. The article discusses Clearview AI, a company that has built a business offering facial recognition services to law enforcement agencies. Once signed up, an agent simply scans a picture of an unknown perpetrator and Clearview AI does its magic. Behind the scenes, an algorithm breaks each face down into a series of vectors, then matches the scanned image against a database containing millions of facial images.
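The vector-matching step described above can be sketched in a few lines. Clearview's actual pipeline is proprietary, so this is only an illustration of the general technique: a model converts each face image into a fixed-length embedding vector, and identification becomes a nearest-neighbor search by cosine similarity. The `embed_face` function here is a hypothetical stand-in; a real system would use a trained neural network.

```python
import zlib
import numpy as np

def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Map a face image to a unit-length 128-dimensional vector.

    Stand-in for a real embedding model (e.g., a deep CNN); here a
    deterministic pseudo-embedding is derived from the pixel bytes
    purely for illustration.
    """
    rng = np.random.default_rng(zlib.crc32(image_pixels.tobytes()))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)

def best_match(query: np.ndarray,
               database: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the database entry whose embedding is closest to the query.

    With unit-length vectors, cosine similarity is just the dot product;
    a score near 1.0 means a near-identical face vector.
    """
    scores = {name: float(query @ vec) for name, vec in database.items()}
    top = max(scores, key=scores.get)
    return top, scores[top]
```

A real service indexing millions of embeddings would use an approximate nearest-neighbor structure rather than a linear scan, but the matching principle is the same.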
And where does Clearview get all this data? By scraping publicly available images (e.g., from social media) that it maintains are legal to use.
Law enforcement agencies have apparently found the service useful for identifying suspects. They are also quick to point out that a facial recognition match is only one piece of evidence in a case.
I find many things about this troubling. First, my face is no longer something private that belongs only to me. I know there are labeled images of my face online, either ones I’ve posted myself (e.g., on LinkedIn) or ones somebody else put up.
Second, putting this service in a law enforcement context makes it seem less harmful than it could be in other hands. Of course everybody wants to catch criminals, but what happens when stalkers and marketers use this technology to prey on people, especially in vulnerable moments?
The technology is here and I’m not sure there’s any good way to control it anymore.