A.I., Biometrics, and the UN

On Friday, the UN released a statement warning the developed world about the proliferation of unrestricted A.I.-enhanced biometric technology being exploited “in sensitive contexts”. For the uninitiated, biometrics are unique physical characteristics, such as fingerprints or facial structure, that can be analyzed to automatically identify a person. We use biometric data to unlock our phones and log into our laptops, and some surveillance cameras use it to recognize certain faces, which may or may not trigger a response from the artificial intelligence inside the machinery.

“Urgent and strict regulatory red lines are needed for technologies that claim to perform emotion or gender recognition,” said UN experts on the issue, warning that human rights abuses stemming from the use of A.I. can be carried out “under the guise of national security and counter-terrorism measures”. Those who remember the power suddenly handed to the federal government in the aftermath of 9/11 and the Patriot Act may know this pattern all too well.

Notably, the UN also called out rash and immoral behavior from corporations and capitalist investors who push the development and unfettered use of A.I. “without adequate requirements for conducting human rights due diligence or consultation with affected individuals and communities.”


Potential Dangers

Unrestricted artificial intelligence (AI) in biometric systems poses several risks. Here are some of the main concerns:

  • Privacy invasion: Biometric systems powered by AI can collect and store highly sensitive personal data, such as fingerprints, facial images, or iris scans. If AI algorithms are not properly regulated or implemented, there is a risk of unauthorized access, misuse, or data breaches, which can result in severe privacy invasions.
  • Biometric data theft: Hackers or malicious actors may attempt to breach biometric systems to steal or manipulate biometric data. Unlike passwords or personal identification numbers (PINs), biometric data cannot be easily changed or reset once compromised, leaving individuals vulnerable to identity theft and fraud.
  • False positives and false negatives: AI algorithms used in biometric systems may not always be accurate, leading to false positives or false negatives in identification. False positives can result in innocent individuals being misidentified as potential threats, leading to unnecessary scrutiny or even legal consequences. False negatives, on the other hand, can allow unauthorized access to secure areas or systems. The first sketch after this list shows how the choice of match threshold trades one type of error against the other.
  • Bias and discrimination: If the training data used to develop AI algorithms is biased or lacks diversity, it can result in biased outcomes in biometric systems. This can disproportionately impact certain demographic groups, leading to discriminatory practices or exclusion. For example, facial recognition systems have been known to have higher error rates for women and for people with darker skin tones. The second sketch after this list shows how comparing error rates group by group can surface this kind of disparity.
  • Unauthorized tracking and surveillance: AI-powered biometric systems have the potential to enable mass surveillance and tracking of individuals without their consent or knowledge. In the absence of proper regulations, this can infringe upon civil liberties and create a pervasive surveillance state.
  • Vulnerability to adversarial attacks: Adversarial attacks involve manipulating biometric data or input to deceive AI algorithms. For example, by adding imperceptible noise or patterns to an image, an attacker can trick facial recognition systems into misidentifying individuals. Such attacks can undermine the reliability and security of biometric systems. The third sketch after this list illustrates the idea with a deliberately simplified matcher.
  • Lack of human oversight: Overreliance on AI systems without human oversight can lead to errors or unanticipated consequences. Human judgment and intervention are crucial to address nuanced situations, contextual understanding, and ethical decision-making, which AI systems may not possess.
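
To make the false positive/false negative trade-off concrete, here is a minimal Python sketch. The similarity scores are made up purely for illustration (a real system would compare embeddings produced by a trained face-recognition model), but the pattern is the same: raising the match threshold reduces false accepts while increasing false rejects, and vice versa.

```python
# Minimal sketch of the accuracy trade-off in biometric matching.
# The similarity scores below are simulated; real systems compare
# face embeddings produced by a trained model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores (0 = no resemblance, 1 = perfect match).
genuine_scores = rng.normal(0.80, 0.08, 1000)   # same person compared to themselves
impostor_scores = rng.normal(0.45, 0.10, 1000)  # two different people compared

for threshold in (0.5, 0.6, 0.7):
    false_accept_rate = np.mean(impostor_scores >= threshold)  # wrong person let in
    false_reject_rate = np.mean(genuine_scores < threshold)    # right person locked out
    print(f"threshold={threshold:.1f}  "
          f"false accepts={false_accept_rate:.1%}  "
          f"false rejects={false_reject_rate:.1%}")
```

No threshold drives both error rates to zero, which is why the consequences of each kind of mistake matter so much in sensitive contexts.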
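
Bias of the kind described above is usually surfaced by auditing error rates group by group. The sketch below uses made-up demographic labels and simulated accuracy figures only to show the shape of such an audit; a real one would use the system's actual evaluation data.

```python
# Sketch of a per-group error audit with simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical demographic group for each test subject, and whether the
# system identified them correctly (simulated with unequal accuracy on purpose).
groups = rng.choice(["group_a", "group_b", "group_c"], size=n)
simulated_accuracy = {"group_a": 0.99, "group_b": 0.95, "group_c": 0.90}
correct = np.array([rng.random() < simulated_accuracy[g] for g in groups])

for g in sorted(simulated_accuracy):
    mask = groups == g
    error_rate = 1.0 - correct[mask].mean()
    print(f"{g}: error rate {error_rate:.1%} across {mask.sum()} samples")
```

A system that looks accurate in aggregate can still fail one group several times more often than another, which is exactly the disparity researchers have documented in deployed facial recognition.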
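
Finally, here is a toy illustration of the adversarial-noise idea. The “recognizer” is just a linear score over flattened pixels, nothing like a production face-recognition model, but it captures the core point: in a high-dimensional input, a change of only one percent of the pixel range, aligned against the model's parameters, can flip a confident match into a rejection.

```python
# Toy illustration of an adversarial perturbation against a linear matcher.
import numpy as np

rng = np.random.default_rng(2)
dim = 64 * 64                            # a flattened 64x64 "image"

weights = rng.normal(0.0, 1.0, dim)      # hypothetical decision weights
image = rng.random(dim)                  # hypothetical probe image, pixels in [0, 1]
bias = 10.0 - weights @ image            # chosen so the clean image is a confident match

def is_match(x):
    return bool(weights @ x + bias > 0)

epsilon = 0.01                           # at most a 1% change to any single pixel
# Push every pixel slightly in whichever direction lowers the match score.
adversarial = np.clip(image - epsilon * np.sign(weights), 0.0, 1.0)

print("clean image matches:      ", is_match(image))        # True
print("perturbed image matches:  ", is_match(adversarial))  # False
print("largest per-pixel change: ", np.abs(adversarial - image).max())
```

Real attacks use the gradients of deep models rather than a hand-built scorer, but the vulnerability they exploit is the same sensitivity to many tiny, coordinated changes.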

Addressing these dangers requires careful regulation, robust security measures, ethical considerations, diverse and unbiased training data, and ongoing monitoring and evaluation of AI systems. It is important to strike a balance between the potential benefits and the risks of AI in biometric systems to ensure privacy, fairness, and accountability. Perhaps the UN is onto something with their anxiety-riddled call to action. If no restrictions are put in place, time will tell whether their warnings should have been heeded.

About the Author: Aaron Avila

Aaron J. Avila is a digital designer, social media marketer, and security professional with ENS Security.

Get Connected with ENS Security

Follow us for more articles, promos, videos, and more!
