The ethical claims of forward-thinking AI companies are being debunked as dystopian surveillance edges closer to breaking fundamental freedoms and human rights, if it hasn’t done so already. In this context, fighting the facial recognition wildfire with substitute technologies puts forward typing biometrics as a reliable alternative for authentication.
In a world profoundly transformed by tech, the fear of privacy infringements by enterprises and governments alike urges individuals to refrain from using pervasive technologies, or at least to become more skeptical about them.
Current political and technological trends are likely to push facial recognition into a gray area, making room for substitute technologies. Typing biometrics, a non-intrusive option used to verify identities, undoubtedly holds a strong position on the global cybersecurity road map alongside other behavioral biometrics.
Facial recognition – a highly charged topic
Facial recognition is a process that detects, captures, and matches images and videos of human faces. It analyzes metrics such as the space between the eyes, the bridge of the nose, and the contour of the lips. The creepy factor attributed to facial recognition comes from its inherently intrusive nature. Not only can facial recognition easily cross the line into human rights violations if used for the wrong purposes, but it also gives the ominous feeling of being watched.
According to a study published in June 2019, law enforcement accounts for the most substantial chunk of the facial recognition technology market. While North America plays a crucial role in the adoption and promotion of this technology, Asia-Pacific follows closely, seeing the fastest growth in the sector. However, facial recognition is under global scrutiny, and collective reluctance is taking its toll.
There have been contrasting trends around the world, with more abusive surveillance regulations in the east and an increased number of bans against face biometrics in the west.
The “China situation”
As the New York Times reports, a worst-case scenario allegedly already exists in China, where Uighurs, Kazakhs, and other mostly Muslim minority groups have been kept under close watch through “mass arbitrary detention, repression and high-technology surveillance.”
Recent reports tell a much darker side of the story, pointing to a quickly spreading fear of face identification in China, primarily due to new laws that forbid people from wearing face coverings. Despite brutal arrests already taking place in Hong Kong, young protesters continue to wear face masks and use lasers to block cameras in an effort to promote empowerment.
The “US approach”
With more people seeking ownership over their identity, technologies such as facial recognition have unquestionably become a sensitive issue (subject to bans in many places).
In this context, several cities in the USA have outlawed facial recognition technology. In response to the oppressive use of facial recognition in China, the US has blacklisted around two dozen facial biometrics organizations and companies found to have provided technology that was ultimately misused. In recent years, the military, police, and other governmental bodies have increasingly been using facial recognition technology as a means of crime detection and prevention.
However, the significant impact of facial recognition in identifying criminals is overshadowed by ethical concerns about its gender- and race-biased accuracy. This has led to malign consequences, such as increased mistrust in the justice system.
Even though such bans on technology are focused mainly on law enforcement, private organizations might also be affected in the future. One example is a bill introduced on Capitol Hill that would forbid individual users of facial recognition technology from sharing data without explicit consumer consent.
Global concerns are on the rise
The US approach may seem like a win for privacy and civil rights advocates who speak out against the use of facial recognition for mass surveillance. Such high-profile bans could reinforce the overall negative image of this particular type of biometrics.
Latest Update: As of June 2020, big companies like Microsoft, Amazon, and IBM have decided to put a stop to the research, use, and sale of facial recognition technology, at least until stronger laws are published to regulate “how it can be deployed safely and without infringing on human rights or civil liberties”. This sends an alarming global signal about the effects of the lack of regulation in the field of facial biometrics, but it also speaks to the amount of work that still needs to be done to perfect the technology.
The most intolerant wins
Nassim Taleb introduced the intransigent minority rule, whereby a small, intolerant group determines the status quo and the preferences of the majority. A good example is that most drinks produced nowadays are kosher, even though only a tiny percentage of the population requires it. The majority has no issue drinking kosher beverages, and therefore producers make most drinks kosher.
Based on Nassim Taleb’s notion of the winning intransigent minority, the future of facial recognition might not be as bright as forecasts predict.
A small number of outspoken advocates and human rights activists vehemently campaigning against facial biometrics technology may therefore sway the flexible majority to reject the use of this type of biometrics.
Less intrusive technologies are more compatible
There’s a lot of commotion around biometrics technology and its effects on privacy and human rights. In this context, what other technologies could be used to confirm a claimed identity? Could we remain secure online and have our identities verified based on our embedded behavior?
Typing biometrics sparks imagination
There’s a common ground where disruptive technologies meet, and that’s Artificial Intelligence (AI) and deep learning: systems that learn from large amounts of data and then adapt based on specific algorithms. AI improves existing technologies, including those used for authentication and identification purposes.
Typing biometrics, also known as keystroke dynamics, is a form of behavioral biometrics that uses AI to analyze and match patterns in the typing behavior of individuals. With the necessary samples (typically two or more), accuracy in one-to-one authentication scenarios can reach 99%.
Given that online, people are usually authenticated by typing a username and password, typing biometrics makes for a great friction-free extra layer of security. It’s a reliable substitute, offering security without compromising the user experience.
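To make the mechanism concrete, below is a minimal, illustrative Python sketch of how keystroke dynamics can work in principle: it derives dwell times (how long each key is held) and flight times (the gap between consecutive keys) from raw key events captured while a user types a familiar string such as a username, then compares a fresh sample against an enrolled template. The event format, function names, and tolerance threshold are assumptions made for illustration only; production systems rely on trained statistical or deep-learning models rather than a simple averaged template.

```python
from statistics import mean

def extract_features(events):
    """Turn raw keystroke events into a simple timing profile.

    `events` is a list of (key, press_ms, release_ms) tuples recorded
    while the user types a known string (e.g. a username). This event
    format is an assumption for illustration.
    """
    # Dwell time: how long each key is held down.
    dwell = [release - press for _key, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell + flight

def matches(enrolled_samples, new_sample, tolerance_ms=35.0):
    """Compare a fresh typing sample against the enrolled pattern.

    `enrolled_samples` is a list of feature vectors captured during
    enrollment (typically two or more, as noted above). The naive
    mean-absolute-difference check stands in for the trained AI
    models real systems use.
    """
    template = [mean(values) for values in zip(*enrolled_samples)]
    deviation = mean(abs(a - b) for a, b in zip(template, new_sample))
    return deviation <= tolerance_ms
```

The point the sketch illustrates is that the signal comes entirely from timing, not from what is typed, which is why this kind of check can sit on top of an ordinary username-and-password login without adding friction.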
Fingerprints are also less intrusive. It’s nearly impossible to place fingerprint scanners on every street corner, unlike facial recognition cameras, which are everywhere, collecting data en masse. Based on existing regulations around this biometric for law enforcement identification purposes, fingerprints can be legally collected only on the basis of criminal suspicion. The case for authentication using fingerprints is that it’s much more comfortable to have a finger scanned than your face.
Innovative privacy-friendly technology is key
Globally, people are becoming more and more aware of their privacy rights and are more concerned with how and for what purpose their data is processed.
The uncertain global regulatory environment and strained political relations could fundamentally deepen paranoid perceptions of more intrusive biometrics such as facial recognition.
With facial recognition technology under global scrutiny and facing public reluctance, less invasive options such as typing biometrics and fingerprint scans can be a reliable, sound choice for the future of identity validation.
Learn more about behavioral biometrics and their real-life applications here.