AI Missteps of 2024

In 2024 we have witnessed several significant AI blunders that highlight the limitations and potential dangers of artificial intelligence. Here are some of the most notable incidents so far.

Google Bard's False Accusations

A group of Australian academics used Google Bard AI to generate material for a Senate inquiry submission. The AI fabricated allegations of misconduct against the Big Four consulting firms (KPMG, Deloitte, PwC, and EY), and these claims were included in the submission without verification. The episode led to public apologies and raised concerns about the reliability of AI in academic research (Information Age, HRD Australia, Shifting Sands Digital).

Microsoft Start's Inappropriate Poll

Microsoft’s AI news aggregator, Microsoft Start, attached a poll to a Guardian article about a young coach's death, asking readers to guess the cause of death (murder, accident, or suicide). The insensitive addition drew widespread criticism, with The Guardian accusing Microsoft of damaging its journalistic reputation (The Guardian).

MrBeast Deepfake Scam

Scammers used deepfake technology to create a video featuring YouTuber MrBeast, falsely promoting an offer of iPhone 15s for just $2. The convincing deepfake slipped past TikTok's moderation and misled thousands of viewers (Nasdaq).

AI-Generated Song for Grammy Consideration

A song using AI-cloned voices of Drake and The Weeknd was submitted for Grammy consideration. The Recording Academy ultimately ruled it ineligible, but the submission intensified the debate over the legitimacy and impact of AI-generated music (Rolling Stone).

MSN's Insensitive Headline

An AI-generated MSN headline described former NBA player Brandon Hunter, shortly after his death, as "useless at 42". The headline caused significant backlash and highlighted the risks of relying on AI for content creation without human oversight (Business Insider).

Google Gemini's Racial Bias

Google’s Gemini AI generated racially skewed and historically inaccurate images, prompting Google to pause its generation of images of people. The incident sparked a debate about the inherent biases in AI systems and the challenges of ensuring diversity and fairness in AI-generated content (TechXplore).

Google AI Overviews

Google’s AI Overviews feature provided erroneous and sometimes absurd answers to uncommon search queries, such as recommending glue to keep cheese from sliding off pizza. This incident highlighted the pitfalls of deploying AI without thorough testing and quality control (Wccftech).

Cruise Self-Driving Car Incident

A Cruise self-driving car struck and then dragged a pedestrian who had been thrown into its path by a hit-and-run driver, causing severe injuries. The incident, and Cruise's handling of it, led California regulators to suspend the company's driverless permits and underscored the ongoing safety challenges and ethical considerations in deploying autonomous vehicles (NPR).

Political Deepfake Scandal

A deepfake audio clip falsely depicting UK Labour Party leader Sir Keir Starmer verbally abusing staff went viral. The clip was eventually debunked, but not before it had spread widely and sown public confusion (Politico).

Chinese AI Influence Operations

Chinese operatives used AI to generate images aimed at sowing discord in the US and other democracies. These AI-generated images were part of broader influence operations designed to exploit social and political tensions (CNN).

These incidents serve as a stark reminder of the potential risks and ethical dilemmas associated with AI technology. As AI continues to integrate into various sectors, it is crucial to implement robust oversight, rigorous testing, and ethical guidelines to mitigate such issues.

Who is going to do it, and when?

Image credit: ergoneon