Surveillance

The age of AI surveillance is here

Dave Gershgorn
Quartz

For years we’ve been recorded in public on security cameras, police bodycams, livestreams, other people’s social media posts, and on and on. But even with a camera in our faces, there has always been a slight assurance that strangers couldn’t really do much with the footage. The time and effort it would take someone to trawl through months of security video to find a specific person, or to scour the internet on the off chance of spotting you, is simply unrealistic. But not for robots.

Long the stuff of Hollywood thrillers, the tools for identifying who someone is and what they’re doing across video and images are taking shape. Companies like Facebook and Baidu have been working on such artificial intelligence-powered technology for years. But the narrowing rate of error and widening availability of these systems foretell a near future when every video is analyzed to identify the people, objects, and actions inside.

Artificial intelligence researchers struggled for years to build algorithms that could look at an image and tell what it depicts. The complexity of images, each containing millions of pixels that form unique patterns, was just too complicated for hand-coded algorithms to reliably work.

Video, which uses similar techniques to still images but requires higher processing power, also allows AI to understand what’s happening over time. Baidu, the Chinese search giant, announced in late August 2017 that it had won the ActivityNet challenge, correctly labeling the actions of humans in 300,000 videos with 87.6% accuracy. These are actions like chopping wood, cleaning windows, and walking a dog.

Facebook has also demonstrated interest in this technology to understand who’s in livestreams on the site and what they’re doing. In an interview last year, director of applied machine learning Joaquin Quiñonero Candela said that, ideally, Facebook would understand what’s happening in every live video, in order to be able to curate a personalized video channel for users.

Facial recognition in still images and video is already seeping into the real world. Baidu is starting a program where facial recognition is used instead of tickets for events. The venue knows who you are, maybe from a picture you upload or your social media profile, sees your face when you show up, and knows if you’re allowed in. Paris tested a similar feature at its Charles de Gaulle airport for a three-month stint this year, following Japan’s pilot program in 2016, though neither has released results of its program.

US governments are already beginning to use the technology in a limited capacity. Last week the New York Department of Motor Vehicles announced that it had made more than 4,000 arrests using facial recognition technology. Instead of scanning police footage, the software is used to compare new drivers’ license application photos to images already in the database, making it tougher for fraudsters to steal someone’s identity. If state or federal governments expand into deploying facial recognition in public, they will already have a database of more than 50% of American adults from repositories like DMVs. And again, the bigger the dataset, the better the AI.
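The matching step behind a program like the DMV’s can be sketched abstractly: modern systems reduce each face to a numeric embedding vector, then compare a new photo’s embedding against every stored one by similarity. Here is a minimal illustrative sketch in Python; the tiny four-number “embeddings,” names, and threshold are invented for illustration, not drawn from any real system (real face embeddings run to hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the label of the closest database embedding, or None if no
    entry clears the similarity threshold (i.e., no confident match)."""
    best_label, best_score = None, threshold
    for label, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_label, best_score = label, score
    return best_label

# Toy 4-dimensional "embeddings" standing in for real face vectors.
database = {
    "alice": [0.9, 0.1, 0.0, 0.4],
    "bob":   [0.1, 0.8, 0.5, 0.1],
}

# A new photo's embedding is close to Alice's stored one, so it matches her.
print(best_match([0.88, 0.12, 0.05, 0.41], database))
```

The “bigger dataset, better AI” point maps directly onto this loop: more stored embeddings means more chances for any new photo to land near a known identity.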

And that might not be far off. Axon, a company once known as Taser and the largest distributor of police body cameras in the US, has recently ramped up ambitions to infuse artificial intelligence into its products, acquiring two AI companies earlier this year. Axon CEO Rick Smith told Quartz previously that the ideal use case for AI would be the objective generation of incident reports, giving police more time out from behind desks. Facial recognition, he noted, isn’t active now but could be in the future. Motorola, another major bodycam supplier, pitches its software on its ability to quickly learn faces, highlighting a scenario where an officer is looking for a lost child.

Security cameras are also getting a boost from AI. Intel announced in April that it had built hardware for security cameras capable of “crowd density monitoring, stereoscopic vision, facial recognition, people counting,” and “behavior analysis.” Another camera, called the DNNCam, is a deep learning camera that’s waterproof, self-sufficient, and claims to be virtually indestructible, meaning it can be set to work in remote environments away from internet connections or behind a cash register for “regular customer recognition,” according to the website.

So what’s a privacy-minded, law-abiding citizen to do when surveillance becomes the norm? Not much. Early research has identified ways to trick facial recognition software, whether with specially made glasses or with face paint that throws off the algorithms. But these tricks often require knowledge of how the facial recognition algorithm works. This is just a heads up. Maybe wear a big hat?
