New York’s Subway AI Surveillance Plan Sparks Privacy Alarms

The MTA has confirmed it is working with tech firms on AI to analyze subway camera feeds for "problematic behavior," though critics cite bias and surveillance risks.

New York’s Metropolitan Transportation Authority (MTA) is moving forward with plans to integrate artificial intelligence with its extensive subway camera network to identify “problematic behavior” on platforms, according to the agency’s security chief.

MTA Chief Security Officer Michael Kemper stated the goal is “predictive prevention,” using AI analysis of camera footage to alert police to potential trouble before crimes occur, reports New York City news website Gothamist. The initiative, part of a broader safety push revealed during an April 28th committee meeting, immediately drew criticism from civil liberties advocates concerned about algorithmic bias and expanding government surveillance.

Kemper confirmed the MTA is actively collaborating with technology companies, stating, “AI is the future… We’re working with tech companies literally right now and seeing what’s out there right now on the market, what’s feasible, what would work in the subway system” to automatically alert the NYPD “if someone is acting out irrationally.”

An MTA spokesperson emphasized the system’s focus, asserting, “The technology being explored by the MTA is designed to identify behaviors, not people,” and confirmed facial recognition technology would not be employed. This isn’t the MTA’s first use of AI analytics; in 2023, the agency used software from Awaait to analyze camera footage purely to count fare evasion instances, without identifying individuals involved.

The current AI behavior detection plan is part of a larger safety initiative championed by the MTA and Governor Kathy Hochul since 2021, driven by public concern over high-profile, unprovoked attacks in the subway system. This push has already seen cameras installed on every subway platform and train car, with Kemper noting about 40% of platform cameras are currently monitored live by personnel. The federal government also applied pressure earlier this year, requesting a detailed subway safety plan from the agency.

AI Surveillance Meets Civil Liberties Concerns

Civil liberties groups were quick to denounce the proposal. “Using artificial intelligence — a technology notoriously unreliable and biased — to monitor our subways and send in police risks exacerbating these disparities and creating new problems,” stated New York Civil Liberties Union (NYCLU) Senior Policy Counsel Justin Harrison, adding, “Living in a sweeping surveillance state shouldn’t be the price we pay to be safe. Real public safety comes from investing in our communities, not from omnipresent surveillance.”

The Surveillance Technology Oversight Project (S.T.O.P.) echoed these concerns, calling the plan reliant on “creepy pseudoscience” that is likely to “bake in bias and criminalize BIPOC New Yorkers’ bodies.” S.T.O.P. noted that Kemper acknowledged the MTA is partnering with private firms and providing them with MTA surveillance data for development.

While the MTA denies using facial recognition for this behavioral analysis, a New York state law passed around April 2024 already bars the MTA from using biometric tech for fare enforcement. It remains unclear if this law restricts the NYPD, which has used facial recognition since 2011 and accesses MTA feeds, from applying such technology to the same footage. This new AI plan follows other recent tech deployments, like the NYPD’s pilot of AI-powered weapons detection scanners in stations starting July 2024, which also faced NYCLU criticism.

AI Surveillance in Government Use and Big Tech Involvement

The MTA’s exploration of AI surveillance aligns with the technology’s growing use in public transport globally for applications like passenger counting, hazard detection, and identifying altercations or unattended items. It also involves potential partnerships with major technology companies like Google and Amazon, which are navigating their own controversies.

Both firms faced intense criticism over Project Nimbus, a $1.2 billion cloud and AI contract with the Israeli government. Human rights groups and employees argued the project enabled surveillance and rights abuses, citing contract terms highly favorable to Israel and a lack of transparency. The EFF stated, “Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation.” Employee groups like No Tech for Apartheid claimed, “Amazon and Google are enabling the world’s first AI-powered genocide via Project Nimbus,” leading to over 50 worker firings at Google following protests.

These controversies highlight the potential pitfalls of large-scale surveillance systems, including data security risks, as demonstrated by the April 2025 WorkComposer data leak which exposed millions of employee screenshots. Concerns about AI being repurposed for control, similar to those raised about Spot AI’s workplace intervention system, also apply. Furthermore, the alleged use of AI by the Department of Government Efficiency (DOGE) to monitor federal workers provides a recent example of government AI surveillance raising ethical alarms.

Shifting Policies and Industry Trends

The environment for such collaborations has also shifted. In February, Google formally removed key restrictions from its AI Principles, eliminating prior bans on developing “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and “technologies that gather or use information for surveillance violating internationally accepted norms.”

This reversed a policy adopted in 2018 after employee backlash over Project Maven, a Pentagon drone AI initiative. Google leadership framed the change as aligning with democratic values and national security, stating via a company blog post that “companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

This strategic shift allows Google to compete more directly with firms like Microsoft and Palantir for defense contracts, reflecting a wider industry pattern where tech giants are increasing involvement in government security projects.

Regulatory Gaps and Unanswered Questions

Despite the increasing use of AI in sensitive areas, clear regulatory frameworks lag behind. A New York state AI policy announced by Governor Hochul in January 2024 provides guidelines but does not formally bind authorities like the MTA, which are only “strongly encouraged” to adopt it.

International efforts like the G7’s Hiroshima Process and the EU AI Act, along with the US NIST AI Risk Management Framework, offer principles but lack specific enforcement for applications like public surveillance. This regulatory gap leaves significant discretion to the MTA and its tech partners. The MTA has shown willingness to adopt AI, recently completing a successful pilot with Google Public Sector using AI to detect track defects. As the agency moves forward with AI for behavioral monitoring, questions about data privacy, algorithmic bias, transparency, and overall effectiveness in enhancing safety without compromising civil liberties remain central to the debate.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.