Growing public unease about AI’s integration into daily life has led to significant legal actions against major companies, highlighting the need for user control and rights over these technologies.
In recent months, a notable shift has emerged in public sentiment towards artificial intelligence (AI), characterised by growing resistance to its unsolicited integration into daily life. This unease is reflected in a series of significant legal actions against leading AI companies. In December 2023, The New York Times sued OpenAI and Microsoft, alleging copyright infringement. In March 2024, three authors filed a class-action lawsuit against Nvidia, claiming the company had used their copyrighted works to train its AI platform, NeMo. Two months later, actress Scarlett Johansson sent a legal notice to OpenAI, citing the similarity of a new ChatGPT voice to her own.
This increasing legal scrutiny reflects underlying anxieties about AI technologies, particularly how they leverage personal data without individuals' explicit consent. A Pew Research study affirmed these sentiments, finding that more than half of Americans feel more concern than excitement about AI innovations. The apprehension extends beyond the United States: populations across Central and South America, Africa, and the Middle East express a similar outlook, according to findings from a World Risk Poll.
Looking to 2025, individuals are widely expected to seek greater control over how AI is deployed in their lives. One way this may take shape is through red teaming, a technique traditionally used in military and cybersecurity contexts. Red teaming exercises engage external experts to probe systems for vulnerabilities, serving as a proactive measure to bolster resilience against potential failures.
While several prominent AI companies have begun using red teaming to improve the robustness of their models, public participation in such exercises remains limited. That is expected to change in 2025, with communities playing a larger role in scrutinising AI technologies. The law firm DLA Piper, for instance, uses red teaming to assess whether AI systems comply with legal standards, while organisations such as Humane Intelligence run red teaming exercises aimed at uncovering discrimination and bias in AI systems. One such exercise in 2023, supported by the White House, drew 2,200 participants, underscoring growing institutional recognition of the need for rigorous evaluation.
As these initiatives evolve, a common question is how to move from merely identifying issues to enabling actionable solutions. One emerging concept is a 'right to repair' for AI. This principle could empower users to run diagnostics on AI systems, report discrepancies, and see fixes applied by the companies involved. Third-party groups, such as ethical hackers, could create publicly accessible patches for identified problems. Moreover, this framework could permit individuals to engage independent, accredited parties to assess and tailor AI systems to suit their needs.
While the notion of an AI right to repair may currently seem abstract, momentum is building to make it a tangible reality. As consumers navigate a landscape in which AI companies routinely release untested models, the potential for adverse consequences underscores the urgent need to shift the prevailing power dynamic. A right to repair in AI could ultimately give individuals the agency to influence how these technologies enter their lives. If 2024 was the year the public recognised AI's pervasive presence and ramifications, 2025 may mark a turning point in the demand for user rights and control over artificial intelligence.
Source: Noah Wire Services
- https://news.gallup.com/poll/648953/americans-express-real-concerns-artificial-intelligence.aspx – Corroborates Americans' concerns about AI, including job displacement and distrust in businesses to use AI responsibly, and suggests transparency in AI usage could ease public concern.
- https://www.pewresearch.org/short-reads/2024/09/19/concern-over-the-impact-of-ai-on-2024-presidential-campaign/ – Supports the finding that a majority of Americans are concerned about AI's impact, particularly its misuse to influence the 2024 presidential campaign, and documents low confidence in tech companies to prevent such misuse.
- https://business.yougov.com/content/50484-has-public-perception-of-generative-ai-shifted – Provides data on the shift in public perception of generative AI, showing more negative opinions of AI tools than a year earlier and variation across regions and demographics.
- https://aisel.aisnet.org/amcis2024/ai_aa/ai_aa/1/ – Highlights the rise in negative emotions about AI after the launch of ChatGPT, including fears of job displacement and ethical dilemmas, and the need for policymakers and AI developers to address public apprehension.
- https://aiindex.stanford.edu/report/ – Discusses global attitudes towards AI, including low sentiment in Western nations, pessimism about AI's economic impact, and ethical issues such as fairness, bias, and the need for public trust.