Recent data from an Axios-Morning Consult AI Poll reveals that half of Americans anticipate that misinformation propagated by AI will influence the outcome of the 2024 presidential election. Furthermore, a significant one-third of the respondents expressed that their trust in the election results would diminish due to the involvement of artificial intelligence. This growing sentiment might intensify the skepticism and unrest surrounding the first presidential race since the infamous Jan. 6, 2021, attack on the U.S. Capitol.
Diverse Opinions on AI's Role
The poll found that supporters of former President Trump were nearly twice as likely as President Biden's backers to say AI would erode their confidence in the election outcome, at 47% versus 27%. Interestingly, self-identified liberals (21%) were nearly twice as likely as moderates (11%) and conservatives (12%) to report using generative AI for work or school. Age may explain much of this disparity: 35% of Gen Z respondents said they had used generative AI, compared with just 3% of baby boomers.
The Broader Perspective on AI
The survey of 2,203 U.S. adults also found that a majority believe humans will lose control over AI within the next quarter-century. Overall sentiment leans pessimistic: 36% feel negative about AI's future, compared with 26% who are optimistic.
On regulation, the general public was more skeptical that AI can be effectively regulated than the computer scientists who participated in a recent Axios-Generation Lab-Syracuse University AI Experts Survey. Additionally, 53% of respondents said AI-driven misinformation would play a role in determining the election winner.
Earlier this month, Microsoft warned that Chinese hackers would likely use AI in attempts to derail or influence the election. Microsoft's Threat Analysis Center released a report detailing the threats posed by influence operations and cyber activities, emphasizing the need for both the public and private sectors to address the weaponization of technology, including AI.
Google has announced that, starting in mid-November 2023, political advertisers must prominently disclose when their ads contain AI-generated content depicting real or realistic-seeming people or events. The policy covers images, video, and audio.