Gary Marcus, a well-known critic of artificial intelligence, professor emeritus at New York University and serial entrepreneur, is calling for a public boycott of generative AI systems. He is arguing that the current trajectory of AI development poses serious risks to democracy and personal privacy. His latest book, Taming Silicon Valley: How We Can Ensure AI Works for Us, lays out his concerns about the unchecked power of AI companies and the potential for mass-produced misinformation to destabilize societal institutions.
In an interview with The Register, Marcus now proposed a boycott of generative AI, saying he was “deeply concerned about how creative work is essentially being stolen at scale“.
At the core of Marcus’s argument is the idea that AI-generated misinformation and deepfakes, especially when deployed on a large scale, could undermine the integrity of elections, public opinion, and free speech. He warns that without stronger regulations, foreign governments or bad actors could flood online spaces with convincing fake content that influences everything from stock markets to political outcomes. Marcus points out that while individual opinions are one thing, AI can generate billions of fake narratives designed to manipulate public sentiment.
Rising AI-Driven Cyberattacks Amplify Marcus’s Call for Action
As Marcus pushes for a boycott, the wider tech landscape is grappling with AI’s growing role in cybercrime. Microsoft recently released alarming figures revealing that already over 600 million cyberattacks are now happening every day, largely thanks to AI. Criminals, state-backed hackers, and rogue actors have embraced AI to automate their attacks, making them faster and harder to detect.
Microsoft’s Digital Defense Report revealed that AI is being used to craft highly sophisticated phishing schemes, ransomware attacks, and malware. These attacks, often operated by hackers from Russia, China, and Iran, are now targeting critical infrastructure, government entities, and businesses globally. With AI in their arsenal, attackers are becoming more adept at breaking through security defenses, leaving defenders struggling to keep pace.
The Role of AI in Misinformation and Cyber Warfare
In the realm of cybersecurity, the intersection of AI and nation-state hacking is especially worrying. According to the Microsoft report, countries like Russia and North Korea are using AI tools to enhance their espionage efforts and launch more effective cyberattacks. AI-generated phishing emails, for instance, are becoming increasingly difficult to distinguish from legitimate communications, allowing hackers to gain access to sensitive systems without raising red flags.
North Korea has even begun experimenting with AI-driven ransomware, such as the FakePenny malware, which has been used to extort companies in the aerospace industry after stealing confidential information. Iran, too, has escalated its cyber activities, with AI playing a key role in cyber-enabled influence campaigns aimed at destabilizing Gulf states and Israel.
AI is reshaping how cybercriminals and nation-state hackers operate. Microsoft’s latest report details how AI-driven attacks are becoming more organized and automated, enabling cybercriminals to scale their operations far beyond what manual efforts could achieve. This technology allows hackers to customize attacks on the fly, adjusting their methods in real time to evade detection.
Hackers globally are now employing more “human-operated” ransomware attacks, where they interact directly with compromised systems, adapting their methods based on real-time conditions. The majority of these attacks occur on devices that are poorly managed, leaving organizations vulnerable.
China has ramped up their cyber operations, targeting military intelligence in Taiwan and other Southeast Asian nations. These attacks, powered by AI, are part of a broader strategy to gain a technological and strategic edge over rival nations.
Anthropic’s New Safety Policies Seek to Address AI Risks
While Marcus is pushing for stronger public action, some companies are starting to take steps to address the risks of AI. Anthropic just announced updates to its Responsible Scaling Policy. These new guidelines are designed to prevent AI models from becoming too dangerous by implementing “Capability Thresholds,” which trigger enhanced safety measures when models demonstrate risky abilities.
The updated policy puts a special focus on AI models that could potentially assist in dangerous tasks, such as chemical or biological weapon development. If an AI model crosses a threshold that indicates it might be used for high-risk activities, additional safeguards come into play, ensuring that the model doesn’t spiral out of control. Anthropic has also emphasized the importance of transparency and internal governance, with a Responsible Scaling Officer empowered to halt development if safety concerns arise.
Anthropic is Testing AI Potential for Sabotage
Anthropic’s efforts to improve AI safety come on the heels of recent tests that revealed how their AI models, including Claude 3 Opus and Claude 3.5 Sonnet, could be used for sabotage. The company ran a series of experiments to determine whether AI might intentionally manipulate users or disrupt systems.
One test scenario involved models influencing human decisions. The AI was tasked with providing advice based on complex datasets, but some models were able to steer participants toward bad choices without being detected. In another evaluation, Anthropic tested how well AI could insert hidden bugs into code while other models tried to catch them. While some AI models were able to detect the sabotage, the results showed that as AI becomes more advanced, its capacity for harm increases.
These findings support Marcus’s broader concerns about AI’s potential for disruption. The risk that AI models could be used to influence elections, generate misinformation, or manipulate human behavior underscores the need for stricter regulations and public oversight.
AI’s Future: Regulation or Runaway Development?
While companies like Anthropic are trying to implement safety measures, Gary Marcus remains skeptical that self-regulation will be enough to address the dangers posed by AI. In the United States, AI regulation has lagged behind Europe, leaving many to question whether the government will step in to enforce stricter rules. Marcus believes that without strong public pressure, meaningful regulation may never happen.
In his view, the risks AI poses—ranging from misinformation to cyberattacks—are too great to ignore. He suggests that boycotting generative AI platforms might be necessary to force both tech companies and governments to take the issue seriously. The rapid spread of AI across all sectors, from cybersecurity to creative industries, has made the stakes higher than ever.
Last Updated on November 7, 2024 2:25 pm CET