Facial recognition technology is not new – airport travel, computer gaming, concerts, and football stadiums have all used it extensively to reduce queuing times, improve the user experience, and strengthen security. You may even be using it to unlock your smartphone.
Concerns about facial recognition technology, from privacy violations to the erosion of citizens' rights under surveillance, have raged for years.
This article focuses on evidence that facial recognition technology, if implemented without due diligence, will entrench racial profiling with devastating impact, and discusses potential resolutions to this problem.
We have already seen how unconcealed racial profiling using facial recognition technology can cause harm. One example comes from the Chinese government: last year China received international condemnation for identifying and tracking Uighurs, a mostly Muslim minority group. Reportedly, up to a million Uighurs were subsequently held in detention camps.
A forthcoming report by professors at Harrisburg University, Pennsylvania, has prompted a widespread backlash. Its authors claim to have created algorithms that can predict who is more likely to commit a crime based on their facial features. The proposition has drawn universal condemnation, including from global giants such as IBM and Google. However, the mere fact that such a report was produced by respected academics is a chilling reminder that technological advancement must be tempered and led by policy, guidelines, and common sense.
The profiling of citizens based on facial features and, in particular race, is not new.
Phrenology, popular in the 19th century, was the study of the cranial features of humans. Its simplified argument was that criminal behaviour, and the likelihood of criminality, could be identified from the shape of the skull, and that certain bumps and protrusions were indicative of criminal propensities.
“Scientific Illustration” from 19th century Phrenology
What was once considered scientific and “factual” is now rightly dismissed as pseudoscience. Yet I see no difference between phrenology and the claims of the Harrisburg report: both carry glaring, inherent racial biases that are anything but scientific.
The researchers claim that their software has “no racial bias,” but in reality, algorithms are only as scientific and unbiased as their creators.
“We need to look at who’s actually developing the technology and within what kind of incentive structure [and] what kind of ecosystem,”
“The fact is much of our technology is being developed and conceived of by a small sliver of humanity, and this sliver of humanity has projected onto everyone else its own vision of the good life.”
Ruha Benjamin – Author, Race after Technology
The facts speak for themselves: the error rates. NIST's 2019 evaluation of facial recognition algorithms found that, for some algorithms, false positive rates for Asian and African American faces were ten to one hundred times higher than for white faces.
We have already seen racial biases in action stemming from facial recognition software…
Robert Julian-Borchak Williams was arrested and jailed in January this year by the Detroit Police Department after facial recognition technology with supposedly “reliable algorithms” misidentified him as the perpetrator of a jewellery store theft. Robert was innocent. He was given an apology.
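Disparities like these can be surfaced with a simple audit. The sketch below, using entirely hypothetical data rather than figures from any real system, shows how per-group false match rates can be computed and compared; a system can look accurate overall while hiding a tenfold gap between demographic groups.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive (false match) rate per demographic group.

    Each record is (group, predicted_match, actual_match). A false positive
    is a comparison the system declared a match when no true match existed.
    """
    non_matches = defaultdict(int)  # comparisons with no true match, per group
    false_pos = defaultdict(int)    # of those, how many the system matched anyway
    for group, predicted, actual in records:
        if not actual:
            non_matches[group] += 1
            if predicted:
                false_pos[group] += 1
    return {g: false_pos[g] / non_matches[g] for g in non_matches}

# Hypothetical audit data: 1,000 non-matching comparisons per group.
audit = (
    [("A", True, False)] * 1 + [("A", False, False)] * 999    # group A: 1 false match
    + [("B", True, False)] * 10 + [("B", False, False)] * 990  # group B: 10 false matches
)
rates = false_positive_rates(audit)
print(rates)  # group B's false match rate is ten times group A's
```

An audit like this only works if ground truth and demographic labels are collected carefully, which is exactly the kind of due diligence the article argues for.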
How do we stop this from happening?
In many of my discussions with hiring managers, there seems to be a lack of understanding of why we need a push to hire professionals from more diverse backgrounds. Some recent conversations have been along the lines of: “Yes, we've done the female diversity thing, and we are still working on it.”
However, you cannot have a genuinely diverse workforce without looking at the racial, cultural, and socio-economic makeup of your workforce and creating plans to address underrepresentation.
A study published this year by the Department for Digital, Culture, Media and Sport, entitled Cyber Security Skills in the UK Labour Market 2020, found in its qualitative report a rather blasé approach to creating a more diverse workforce and a lack of ingenuity in identifying ways to attract individuals from more varied backgrounds. I would have liked the report to focus on diversity within leadership teams, which would likely highlight an even greater imbalance.
Facial recognition technology has its benefits and, if developed and implemented correctly, can be a tool for both convenience and security.
With the advancement of technology, there is an ever-growing effort to improve efficiency, motivated by increasing revenue, meeting regulatory demands, and limiting reputational risk.
Take, for example, the push to release more secure applications: we have seen the evolution from the waterfall methodology to agile. But it has not stopped there.
The need to release secure applications at an accelerated rate has led to DevSecOps: a process in which application development and infrastructure security work together, forming an integrated, shared, end-to-end responsibility.
This process creates value and embeds security from the beginning to the very end of the software cycle – continuously. We now have security specialists and coders working within the same team, increasing efficiency and collaboration from inception.
Perhaps this same principle should be applied to diversity: a proactive commitment from business leaders and decision-makers to ensure a diverse workforce is recruited at the very grass roots of the firm, embedded and reviewed within the selection process at each level of the organisation – creating diversity by design.
“Inclusion of diversity inspires innovation.” – Tim Cook, CEO, Apple
What measures have your firms taken to address the lack of diversity throughout the organisation, particularly with those developing tech?
If you would like to discuss how to attract a diverse talent pool for your organisation, please feel free to reach out and contact me.