Artificial intelligence experts and privacy advocates are sounding the alarm, demanding that the United Kingdom government implement stricter rules for facial recognition technology. A new report from the Ada Lovelace Institute reveals significant governance gaps, creating what some campaigners call a “wild west” for these systems.
The call for robust legislation is urgent: facial recognition use by police and private companies is expanding rapidly, and AI advancements are making the technology more powerful and accessible. This growth fuels concerns about legality, privacy, and potential human rights infringements, and the UK still lacks a clear framework to manage these powerful tools.
Policing Minister Dame Diana Johnson acknowledges these worries. She stated she “fully accept[s] . . . that there is a need to consider whether a bespoke legislative framework governing the use of live facial recognition technology for law enforcement purposes is needed”. The government plans to outline its approach “in the coming months”. The Ada Lovelace Institute urges Sir Keir Starmer’s government to establish a new, dedicated regulator and a “Biometrics Ethics Board,” a demand it has been raising for years. This board would oversee both public and private sector use.
Governance Gaps and Growing Use
The Ada Lovelace Institute’s research details “fundamental deficiencies” in the UK’s current legal framework, and researchers say the legality of many facial recognition deployments is in “serious question”. This uncertainty persists despite a 2020 appeals court ruling that South Wales Police’s use of the technology violated privacy and data protection laws. UK police scanned nearly five million faces in 2024, leading to over 600 arrests, according to Liberty Investigates.
Retailers like Southern Co-op and Asda also deploy the technology, often to identify known shoplifters, and some businesses share CCTV footage with police via Project Pegasus. The Ada Lovelace report also warns that while new AI technologies claim to infer a wide range of internal human states, from emotions to intentions, the “ability to appropriately manage the risks… has not matured alongside this growing appetite for use.”
Calls for Oversight and International Comparisons
Privacy International’s Sarah Simms described the UK as an “outlier” due to this “legislative void”. She emphasized that live facial recognition is “extremely invasive”. Critics argue its use in public spaces can chill protests and has led to misidentifications.
Charlie Whelton of Liberty stated, “The UK is massively behind in regulating facial recognition technology, especially compared to Europe and the US where limits have already been put in place,” adding, “We’re in a situation where we’ve got analogue laws in a digital age.” This situation contrasts with the EU’s AI Act and legislation in several US states, which have banned many live facial recognition applications.
The UK Home Office, however, views the technology as “an important tool in modern policing that can identify offenders more quickly and accurately.” Industry body techUK also weighed in, supporting clear rules.
Dame Diana Johnson previously described the technology as “transformational for policing,” highlighting the government’s complex considerations.
Broader Tech Trends and Ethical Debates
These UK-specific concerns mirror global trends. Meta, for example, reintroduced facial recognition for security purposes in the UK and EU in March, after shutting down an earlier system. The company states that it deletes facial data after use, yet it may integrate facial recognition into future smart glasses. That prospect follows a controversial demonstration in which students used Meta’s Ray-Ban glasses with publicly available tools to identify individuals.
Controversies surrounding firms like Clearview AI, which scraped public photos for its database without consent, further highlight the ethical minefield. Clearview AI later reached a creative settlement in a class-action lawsuit. Public bodies are also adopting AI surveillance; New York’s MTA plans to use AI to detect “problematic behavior” in subways, sparking alarm among civil liberties groups. These examples underscore the urgent need for the robust governance framework the Ada Lovelace Institute advocates.