Top 9 AI Controversies of 2024

The year 2024 was a transformative period for artificial intelligence, defined by great innovation and great challenges. A global AI market valued at an estimated $500 billion has produced countless tools, apps, and companies that have reshaped industries and our daily lives. These technological marvels were also accompanied by waves of controversy that sparked debates over ethics, societal impact, and accountability in AI development. This article covers the major AI controversies of 2024.

From high-profile lawsuits and corporate scandals to the moral dilemmas of deepfakes and the mistakes in AI decision-making, here are the top AI-related controversies of 2024, arranged chronologically:

OpenAI vs Elon Musk (March 2024)

Tesla CEO Elon Musk and Sam Altman co-founded OpenAI as a non-profit in 2015 with a mission to advance artificial intelligence for the greater good. However, Musk’s departure from the organization in 2018 marked the beginning of a contentious relationship between the two. Musk has since raised concerns over OpenAI’s transition to a for-profit model, its leadership, and its collaborations with corporations like Microsoft. In March 2024, Musk escalated the feud by filing a lawsuit against OpenAI, alleging misuse of Tesla’s proprietary data in autonomous driving models.

This rivalry isn’t just about AI technology; it also reflects personal tensions. Musk has referred to Altman as “Swindly Sam,” while Altman has labelled Musk a “bully” in interviews. Critics argue that the feud is fueled by their competing ambitions, particularly Musk’s establishment of xAI, a direct competitor to OpenAI. The situation highlights the broader stakes of AI governance, competition, and ethical AI development.

Also Read: What is Responsible AI and Why Do We Need It?

Grok AI Falsely Accuses NBA Star of Vandalism Spree (April 2024)

In April 2024, Grok AI generated a false news report claiming that NBA star Klay Thompson had been involved in a vandalism spree involving bricks. The error began when Grok misinterpreted basketball slang in discussions of Thompson’s poor performance in a game against the Sacramento Kings, in which he shot zero for ten. “Shooting bricks” is a common basketball idiom for missed shots. The AI took the phrase literally, reporting it as actual vandalism in which bricks were thrown at homes in Sacramento.

The baseless report stated that houses had been vandalized and that authorities were investigating. The fake story spread widely on social media, leaving users somewhere between confusion and hilarity. Many mocked the situation on X with memes and jokes that spread the false story further. Although the misunderstanding was easy to trace, Grok’s report stayed up for days, highlighting how poorly AI systems can handle the nuance and context of human language.

The backlash against Grok AI came quickly. Activists and commentators called for stronger regulatory standards and better auditing practices for AI systems, citing the dangers of biased training data. Critics noted, for instance, that Grok disproportionately flagged individuals from minority communities, perpetuating harmful stereotypes and spreading falsehoods. The incident opened broader conversations about the consequences of AI failures.

OpenAI vs Scarlett Johansson (May 2024)

Scarlett Johansson filed a lawsuit in May 2024 after learning that an AI-generated viral video advertisement had used a synthesized version of her voice, without her knowledge or permission, to hawk a fake product. The ad spread far and wide across social media and raised critical legal and ethical questions about the deepfake technology behind it.

In court, Johansson’s lawyers argued that the unauthorized use of her likeness violated her rights to privacy and publicity. The lawsuit shed light on the potential for abuse of deepfake technology and brought attention to a broader issue: how such technology could be used against celebrities and public figures, whose identities can easily be duplicated without consent. It sparked debate over clearer regulations for AI-generated content and consent protocols for the use of people’s likenesses.

This AI controversy led OpenAI to announce plans to revise its dataset policies to ensure stricter consent requirements moving forward. The incident marked a pivotal moment in the ongoing debate over intellectual property rights in the age of artificial intelligence and deepfakes, emphasizing the necessity for ethical standards in AI development.

Google’s AI Overview Controversy (May 2024)

In May 2024, Google faced major backlash over a new feature it was rolling out called AI Overviews. The feature was meant to summarize search results in a few sentences so users could find answers faster. Within days of launching, however, it became infamous for a series of absurd and nonsensical responses that called public trust in AI-generated content into question.

Misleading Information By AI

According to users, the AI displayed misleading information on many topics, including the following:

  1. Historical Misrepresentation: When asked about Muslim presidents of the U.S., the AI claimed that Barack Obama was the only one, a statement that is simply false.
  2. Absurd Recommendations: After being asked to suggest how one might prevent cheese from sliding off pizza, the AI recommended adding “⅛ cup of non-toxic glue,” clearly a nonsensical answer. Other ridiculous suggestions included eating rocks for their mineral content and putting gasoline in spaghetti for flavor.
  3. Dangerous Advice: The AI also falsely stated that parachutes are no more effective than backpacks for jumping from an airplane, highlighting the danger of relying on such incorrect information.

These falsehoods sparked a flood of ridicule on social media as users posted their incredulity and frustration. Many went further, questioning the reliability of Google’s AI systems and their ability to provide trustworthy information.

Google responded to the outrage by acknowledging the problems with AI Overviews’ responses. The company attributed many of the errors to “information gaps”: situations where there are no high-quality sources for uncommon queries, causing the system to draw on subpar or less reputable sites and generate bad information.

Google said it was conducting a series of reviews and upgrades to its oversight and quality-checking systems. It also said it would limit AI Overviews for certain types of search queries while continuing to work toward more reliable output.

Also Read: Top 6 AI Updates by Google – 2024 Roundup

McDonald’s Ends IBM Drive-Thru Voice Order Test (June 2024)

McDonald’s cancelled its test of IBM’s AI-powered voice order system in June 2024 following repeated problems with accuracy and customer satisfaction. The system was meant to streamline drive-thru ordering but ran into significant operational issues.

The test surfaced the following critical problems:

  1. Order Misinterpretation: Customers complained that the AI system misheard or mishandled their orders, resulting in delays and irritation at drive-thru windows.
  2. Customer Experience: The faults not only annoyed customers but also increased wait times, the opposite of the efficiency gains expected from the AI rollout.

Industry analysts raised questions regarding the readiness of AI technology for mass adoption in customer service roles when McDonald’s ended the trial. Many analysts pointed out that while AI has potential benefits, its current limitations can lead to significant operational disruptions if not adequately addressed.

DoNotPay “Robot Lawyer” Controversy (June 2024)

In June 2024, DoNotPay, the legal AI platform that branded itself as “the world’s first robot lawyer,” became the subject of one of the year’s biggest AI controversies as its claims and offerings came under legal and public scrutiny. Founded in 2015 by Joshua Browder, the company originally aimed to help users fight legal challenges such as contesting parking tickets and to generate legal documents for free. However, reports emerged that the AI gave bad legal advice, which could have created serious problems for those relying on its services.

FTC’s Complaint

The controversy deepened when the Federal Trade Commission stepped in, alleging that DoNotPay had engaged in the unauthorized practice of law and had failed to deliver on its promises. The FTC’s complaint made several main points:

  1. Misleading Claims: DoNotPay advertised its services as capable of generating “ironclad” legal documents and providing advice comparable to that of a human lawyer. However, the FTC found that the AI did not undergo adequate testing to ensure its outputs were legally sound or equivalent to those produced by qualified attorneys.
  2. Consumer Harm: Users reported instances where the AI-generated documents were poorly drafted or contained inaccuracies, rendering them unusable in legal contexts. One plaintiff noted that he was unable to use documents created by DoNotPay due to their substandard quality.
  3. Settlement Agreement: As a result of the FTC’s findings, DoNotPay agreed to pay a $193,000 fine and to notify consumers who had used the service between 2021 and 2023 about the limitations of its legal products. It also agreed to stop making unsupported claims that its services could replace human lawyers.

This scandal raises critical questions about the feasibility and morality of using AI in high-stakes domains such as law. Critics argue that AI can be used to perform some tasks but should not be marketed as a replacement for professional legal advice. The incident has sparked a debate on the responsibilities of AI companies in terms of representing their capabilities and protecting consumers.

Also Read: AI Revolution in Legal Sector: Chatbots Take Center Stage in Courtrooms

Ilya Sutskever Launches Safe Superintelligence Inc (SSI) (June 2024)

In June 2024, Ilya Sutskever, co-founder of OpenAI, announced the launch of Safe Superintelligence Inc. (SSI), an initiative aimed at prioritizing ethical frameworks for artificial intelligence development. This move came amid growing concerns regarding the safety and ethical implications of advanced AI technologies following various controversies surrounding OpenAI. The mission of SSI is to ensure that advanced AI systems are developed and deployed responsibly. The main objectives include:

  1. Establishing Ethical Guidelines: SSI aims to establish comprehensive ethical frameworks that guide AI development toward safety and accountability.
  2. Facilitating Transparency: The group will advocate for greater transparency in AI operations so that stakeholders can understand in detail how AI systems make decisions and work.
  3. Policymaker Engagement: SSI will engage with policymakers and business leaders on the regulatory policies shaping AI technologies.

Supporters lauded Sutskever’s move as timely and much-needed to address the ethical concerns surrounding AI. Critics, however, viewed it as a reaction to OpenAI’s mounting controversies and questioned whether SSI genuinely intended to change the status quo or was simply a public relations exercise in the wake of the OpenAI backlash.

Clearview AI Controversy (September 2024)

In September 2024, renewed outrage surfaced against Clearview AI, the infamous facial recognition company, after fresh revelations that it had scraped the data of unsuspecting individuals to expand its database of faces. The company, which sells its software mainly to law enforcement agencies, has been called out for acquiring images from the internet and social media sites without consent. The controversy reignited debate over privacy violations and the ethics of using such technology in law enforcement.

Clearview AI reportedly hosts over 30 billion images scraped from many online sources, a practice that has alarmed privacy advocates and civil rights organizations, who argue it violates both legal and ethical standards. Because the images are aggregated without people’s consent, critics contend that Clearview has created a “perpetual police line-up” in which individuals can be tracked and identified without their knowledge or permission.

Backlash Against Clearview AI

The backlash against Clearview AI is not new. The company has faced multiple lawsuits and regulatory actions in various jurisdictions. For example:

  • Fines and Bans: In September 2024, Dutch authorities fined Clearview €30.5 million for building an illegal facial recognition database. The Dutch Data Protection Authority emphasized that facial recognition technology is highly intrusive and should not be deployed indiscriminately.
  • Settlements: Earlier settlements included an agreement with the ACLU that barred Clearview from selling its services to private individuals and businesses. Despite such lawsuits, Clearview remains active, raising questions about whether existing regulations are effective enough.

The scandal has attracted widespread condemnation from civil liberties groups and activists who are pushing for stronger regulatory measures governing facial recognition technology. Many say that the practices of Clearview epitomize a disturbing trend where privacy rights are pushed aside for surveillance capabilities. The legal battles are indicative of the urgent need for comprehensive legislation to protect people’s biometric data.

Amazon’s AI Recruiting Tool Bias (Ongoing)

Amazon’s AI recruiting tool has faced ongoing criticism for gender and racial bias in hiring. Despite several attempts to correct the problems, the tool continued to favour male candidates for technical posts over equally qualified female candidates, raising serious questions about fairness and accountability in AI-driven decision-making.

The controversy over Amazon’s AI recruiting tool began with the discovery that the algorithm was trained on resumes submitted over ten years, predominantly from male candidates. As a result:

  1. Gender Bias: The tool developed a bias against female applicants, penalizing resumes that included terms associated with women’s experiences or qualifications.
  2. Racial Disparities: Similarly, candidates from minority backgrounds faced disadvantages due to historical biases embedded in the training data.
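The mechanism behind this kind of bias is easy to demonstrate. The sketch below is a hypothetical illustration (not Amazon's actual system, whose data and model are not public): a naive scoring model trained on historically skewed hiring data learns to penalize a term that merely correlates with gender, even though gender itself is never an input. All names and numbers here are invented for the example.

```python
from collections import Counter
import random

random.seed(0)

# Synthetic "historical" resumes: each is a list of terms plus a hired label.
# Past hires were mostly male, so the proxy term "womens_chess_club" rarely
# co-occurs with hired=True in the training data.
def make_resume(male: bool):
    terms = ["python", "sql"]
    if not male:
        terms.append("womens_chess_club")  # gender-correlated proxy term
    hired = random.random() < (0.6 if male else 0.2)  # biased hiring history
    return terms, hired

# 80% of historical resumes come from male candidates.
data = [make_resume(male=(i % 10 < 8)) for i in range(1000)]

# Naive per-term "hire score": the fraction of resumes containing each
# term that led to a hire, learned straight from the biased history.
term_hits, term_total = Counter(), Counter()
for terms, hired in data:
    for term in terms:
        term_total[term] += 1
        term_hits[term] += hired

score = {term: term_hits[term] / term_total[term] for term in term_total}

# The proxy term inherits the historical bias: its learned score lands far
# below the neutral skill terms, so resumes containing it rank lower
# despite identical qualifications.
print(score)
```

Note that the model never sees gender directly; it only sees a term that correlates with it. This is why simply deleting a "gender" column from training data does not make a hiring model fair.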

Amazon ultimately abandoned the tool, but only after it had proved incapable of producing equitable hiring outcomes. Even so, the episode continues to draw criticism as an example of AI reinforcing inequality in hiring rather than reducing it.

End Note

As we enter 2025, the AI controversies of 2024 stand as a crucial lesson for the AI community. While the new year will bring its own challenges, these events highlight the need for more ethical, transparent, and accountable AI development. They remind companies and innovators that the stakes are higher than ever: a single mistake can erode public trust and cause real-world harm. Yet with controversy comes opportunity. By addressing these weaknesses, companies can build technologies that innovate while respecting human dignity, privacy, and societal norms. The journey will be challenging, but it holds the promise of a more thoughtful, ethical, and impactful AI-driven future.

A 23-year-old, pursuing her Master’s in English, an avid reader, and a melophile. My all-time favorite quote is by Albus Dumbledore – “Happiness can be found even in the darkest of times if one remembers to turn on the light.”
