Facial Recognition Technology Can Prove a Curse

Divyanshu Dembi
Facial recognition technology, as a concept, is not new. Charles Darwin, in his book The Expression of the Emotions in Man and Animals, argued that human facial expressions are reflexive in nature and part of a broader set of physiological reactions to external stimuli. He argued that expressions captured in a photograph are an objective measure of the associated human condition, i.e. what one is feeling on the inside. It didn’t matter what the person was really feeling; the photograph was the ultimate truth. And this is how we started to look at facial expressions as absolute and reliable codes for peeking into the psyche of humans, to know their innermost secrets. In present-day Russia, a company by the name of Tech lab is set to produce ‘aggression detection’ technology that will try to pre-emptively flag to law enforcement agencies those individuals whom the algorithm thinks are ‘likely’ to commit an offence, based on an analysis of facial cues. The future is here, and it doesn’t look pleasing.
There is a certain cultural belief that all technologies are ‘objective’. We look at anything that is digital and driven by code as being inherently fair. The argument employed is that technologies or machines don’t have any vested interest in discriminating against specific individuals or groups of people, so why would they indulge in any biases? Yet in a study of the facial expressions of 400 NBA players from the 2016-17 season, the same facial recognition technology (FRT) consistently rated black men as being twice as angry and thrice as contemptuous as white men for largely the same facial expressions. This is not just a one-off study; almost all FRT bias assessments have found that the technology isn’t as accurate as advertised and has difficulty correctly identifying people of colour and other minorities. Recently, Amazon’s Rekognition falsely matched 28 members of the US Congress with mugshots of people who had been arrested for crimes. Nearly 40% of those false matches were of people of colour, a disproportionate share. There are two major reasons behind this.
First, such technologies employ machine learning algorithms that must learn to identify subjects and emotions from a base data set, which means that any discrepancy in that data set keeps amplifying as the algorithm builds on itself. It is also important to note that the research teams behind most such technologies lack diversity across the board and are predominantly comprised of white individuals, who (consciously or subconsciously) project their personal racial stereotypes onto the data sets used to train the algorithms. So, when these algorithms are taught the concept of ‘positive’ emotions such as smiling or happiness, the data set used is overwhelmingly made up of photos of white individuals. Conversely, for the training of ‘negative’ emotions such as contempt or anger, researchers, acting on their racial biases, mostly end up using the faces of black people and other people of colour. The end result is that the bias transmutes itself from the human into the algorithm and is reflected in the results it produces. There is a famous phrase that captures the essence of this process: garbage in, garbage out. A bias-ridden, limited data set is bound to produce wrong results, and when we place this in the context of FRT being rapidly adopted by law enforcement agencies around the world, such technologies are bound to have a disproportionate effect on the very populations the algorithms are biased against. In July this year, in a first-of-its-kind case, a black man was wrongfully arrested in the USA after a state police FRT misidentified him.
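To make the ‘garbage in, garbage out’ dynamic concrete, here is a minimal, deliberately simplified sketch in Python (the features, numbers and group labels are hypothetical, purely for illustration): a toy classifier trained on a data set in which one group appears mostly in ‘angry’ examples learns to treat group membership itself as a cue for anger, and then misreads calm faces from that group far more often.

```python
# A minimal sketch of "garbage in, garbage out": a toy classifier trained on a
# skewed data set learns to associate group membership with 'anger'.
# All features, group labels and numbers are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n, group, p_angry):
    """Simulate faces: 'expression' truly drives anger; 'appearance' only encodes group."""
    angry = rng.random(n) < p_angry                          # ground-truth emotion
    expression = angry * 1.0 + rng.normal(0, 0.5, n)         # facial cue tied to emotion
    appearance = np.full(n, group) + rng.normal(0, 0.1, n)   # proxy for group, not emotion
    X = np.column_stack([expression, appearance])
    return X, angry.astype(int)

# Biased training set: group 1 is sampled mostly from 'angry' examples,
# group 0 mostly from calm ones -- the data set, not the faces, is skewed.
X0, y0 = make_samples(2000, group=0, p_angry=0.2)
X1, y1 = make_samples(500,  group=1, p_angry=0.8)
model = LogisticRegression().fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

# Balanced test set: in reality both groups are equally (un)angry.
for g in (0, 1):
    Xt, yt = make_samples(2000, group=g, p_angry=0.2)
    calm = yt == 0
    false_anger = model.predict(Xt[calm]).mean()             # calm faces flagged as angry
    print(f"group {g}: {false_anger:.0%} of calm faces misread as angry")
```

On a typical run, the model flags a markedly larger share of calm faces from the group that was over-sampled as ‘angry’, even though the two test groups are statistically identical in everything except the appearance feature; the skew lives entirely in the training data.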
The second reason is that the algorithm used in FRT, after a point, builds on itself, continually adjusting its own parameters. Essentially, it becomes ‘opaque’ to external observers, which means that even the brightest engineers can’t tell for sure what is happening inside the model and how it is evolving. In other words, we lose any effective control over how the algorithm functions and develops. Therefore, even if we are aware of discrepancies or biases in the data set, there is very little rectification we can do. Thus, our ideas of FRT as being remotely objective, or even decently accurate, are wildly misplaced. The promise of a free and fair facial recognition technology has largely withered away around the globe, with multiple jurisdictions in the USA, such as San Francisco, Boston and the state of Illinois, either completely banning the use of FRT or heavily restricting access to it. However, various federal and state governments still use the technology in the absence of strict laws. In India, the Internet Freedom Foundation is working tirelessly to stop the National Crime Records Bureau (NCRB) from launching a nationwide FRT system in the absence of any data protection law or regulatory framework. However, the NCRB remains largely unfazed.
The real-world effects of FRT are wide-reaching. FRT can be used as a tool to achieve the perfect surveillance state, where the lingering eye of the state will never leave you. Imagine a world where you’re constantly caught in a double bind. If your expressions aren’t ‘normal’ enough, you risk being flagged to law enforcement agencies for no fault of your own, and if you want to avoid that, the only option left is to artificially ‘manufacture’ your facial expressions to game the system. Not only will that spark suspicion, it will also be a psychological nightmare. If you are the sum total of your behaviours, expressions and conduct, then the chilling effect of this lingering eye will divorce your real self from you. You’ll be an alien to yourself, constantly trying to model your facial expressions and conduct to fit the standards set by faulty algorithms, constantly diverging from your real self. And god forbid you are born in the wrong skin; then you’re doomed forever, caught in a never-ending cycle of persecution by technology. However, there is some relief. Recently, the Court of Appeal in England and Wales held that the use of FRT by the South Wales Police breached people’s privacy, on account of the technology not having cleared risk and bias assessment tests and disproportionately collecting people’s data, i.e. their photos. Other civil liberties organisations around the world are pushing back against the use of FRT by the state and private entities alike.
We’re in uncharted waters when it comes to technologies such as FRT, and we must move forward only once the requisite regulatory frameworks and laws preventing its abuse are in place. As we know by now, a ‘move fast, break things’ approach to adopting technology is detrimental to the very foundation of a democratic society. To build a better society, we must start with better regulation, and then move to better oversight, to ensure that such technologies are used for the benefit of people, and not against them.