Novel photo filter can deter AI photo recognition systems, protect privacy

TORONTO: Scientists, including those of Indian origin, have created a ‘privacy filter’ that can prevent artificial intelligence facial recognition systems from identifying our photos on social media.
Each time we upload a photo or video to a social media platform, its facial recognition systems learn a little more about us. These algorithms ingest data about who we are, our location and friends.
As concerns over privacy and data security on social networks grow, researchers led by Professor Parham Aarabi and graduate student Avishek Bose from the University of Toronto in Canada have created an algorithm to dynamically disrupt facial recognition systems.
“Personal privacy is a real issue as facial recognition becomes better and better. This is one way in which beneficial anti-facial-recognition systems can combat that ability,” said Aarabi.
The solution leverages a deep learning technique called adversarial training, which pits two artificial intelligence algorithms against each other.
Researchers designed a set of two neural networks: the first working to identify faces, and the second working to disrupt the facial recognition task of the first. The two are constantly battling and learning from each other, setting up an ongoing AI arms race.
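The alternating tug-of-war between the two networks can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the "detector" is a toy linear scorer rather than a deep network, and the "disruptor" is just a bounded additive perturbation, with each side updated in turn.

```python
import random

# Illustrative sketch of the two-network game described above:
# a "detector" raises its score on the image, while a "disruptor"
# learns a small perturbation that lowers it. The linear model and
# every name here are assumptions, not the authors' architecture.

random.seed(0)
DIM, EPS, LR = 64, 0.05, 0.01

w = [random.gauss(0, 1) for _ in range(DIM)]   # toy detector "network"
delta = [0.0] * DIM                            # disruptor's perturbation
x = [random.gauss(0, 1) for _ in range(DIM)]   # a sample "face" image

def clip(v, lo, hi):
    return max(lo, min(hi, v))

for step in range(200):
    # Detector step: move weights to increase the score on x + delta
    w = [wi + LR * (xi + di) for wi, xi, di in zip(w, x, delta)]
    # Disruptor step: move the perturbation to decrease that score,
    # clipped so no pixel changes by more than EPS (imperceptible)
    delta = [clip(di - LR * wi, -EPS, EPS) for di, wi in zip(delta, w)]
```

The clipping step is what keeps the filter "Instagram-like": the disruptor can fight the detector only within a tiny per-pixel budget.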
The result is an Instagram-like filter that can be applied to photos to protect privacy. Their algorithm alters very specific pixels in the image, making changes that are almost imperceptible to the human eye.
“The disruptive AI can ‘attack’ what the neural net for the face detection is looking for,” said Bose.
“If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system,” he said.
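The kind of subtle disturbance Bose describes can be sketched with a toy example. The linear "detector" below and the sign-of-gradient nudge (as in FGSM-style adversarial attacks) are illustrative assumptions about the general idea, not the authors' exact method.

```python
import random

# Toy linear "detector": its score (a dot product of weights and
# pixels) is high when it "sees" a face. Shifting each pixel a tiny
# amount against the score's gradient fools it while the image
# barely changes. All names and models here are illustrative.

random.seed(42)

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

w = [random.gauss(0, 1) for _ in range(100)]           # detector weights
x = [wi + 0.1 * random.gauss(0, 1) for wi in w]        # toy "face", scored highly

eps = 0.05                                             # per-pixel change budget
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]  # nudge against the gradient

print(dot(w, x) > dot(w, x_adv))                       # True: detection score drops
print(max(abs(a - b) for a, b in zip(x_adv, x)) <= eps)  # True: changes stay tiny
```

Each pixel moves by at most `eps`, which is why the edit is nearly invisible to a person, yet the accumulated shift across all pixels is enough to pull the detector's score down sharply.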
Researchers tested their system on the 300-W face dataset, an industry-standard pool of more than 600 faces that includes a wide range of ethnicities, lighting conditions and environments.
They showed that their system could reduce the proportion of faces that were originally detectable from nearly 100 per cent down to 0.5 per cent.
“The key here was to train the two neural networks against each other – with one creating an increasingly robust facial detection system, and the other creating an ever stronger tool to disable facial detection,” said Bose.
In addition to disabling facial recognition, the new technology also disrupts image-based search, feature identification, emotion and ethnicity estimation, and all other face-based attributes that could be extracted automatically. (AGENCIES)
