As AI technology becomes increasingly crucial, the contrasting views of Vice President Kamala Harris and former President Donald Trump highlight the significant implications for future governance and civil rights.

In a landscape rapidly evolving due to technological advancements, artificial intelligence (AI) has emerged as a pivotal issue in the 2024 United States presidential election. This marks a significant departure from past election cycles, as AI has now permeated both personal and professional life and become a major topic of discussion. In a historic moment, Vice President Kamala Harris referenced AI during her general election debate with Donald Trump on September 10, underscoring the technology's growing importance in the political arena.

The debate showcased the candidates’ visions for the United States’ role in AI development. Vice President Harris emphasized the need for investment in “American-based technology,” aiming for leadership in AI and quantum computing. In contrast, former President Trump framed AI as a formidable challenge, describing it as “maybe the most dangerous thing out there” because of its perceived lack of a definitive solution. These contrasting perspectives underline a broader debate about the risks and opportunities presented by AI technology.

Alondra Nelson, a social scientist previously at the White House Office of Science and Technology Policy under the Biden administration, commented on the potential impact of a second Trump administration on civil rights and consumer protection. She suggested that Trump’s approach might inadequately address the surveillance and workplace safety concerns associated with big tech companies, thereby not fully safeguarding citizens’ rights.

The 2024 presidential election comes at a critical juncture in AI policy development. Currently, no comprehensive safety legislation exists at the state or federal level to oversee AI’s most powerful applications. A notable attempt, California’s SB 1047, which aimed to establish protective measures for AI development, was vetoed by Governor Gavin Newsom. Meanwhile, the federal government has also struggled to enact substantial legislation, especially following the widespread recognition of AI models like ChatGPT in 2022, which significantly raised the profile of AI governance discussions.

Examining the executive orders issued by the candidates provides further insight into their potential policies. Trump’s administration focused on maintaining American leadership in AI through two executive orders in 2019 and 2020. These orders prioritized AI research and emphasized transparency and lawfulness in AI applications. Although these moves were generally well-received, they faced criticism for not adequately addressing AI risks and lacking detailed implementation plans.

By contrast, the Biden-Harris administration, responding to newer AI challenges, issued an executive order in October 2023 after the rise of generative AI models. This order acknowledged AI’s risks to privacy and consumer well-being and proposed measures to mitigate those dangers. It also initiated a pilot program, the National AI Research Resource, aimed at supporting AI research and development. The Harris-Walz campaign has committed to making this program a permanent fixture in the national AI infrastructure.

Further illustrating this policy divergence, Vice President Harris led a U.S. delegation to a global AI safety summit in the UK in November 2023, announcing the establishment of the U.S. AI Safety Institute. During the summit, she highlighted the potential threats posed by deepfakes and disinformation, broadening the discussion on AI’s existential risks.

AI’s role in the election is further complicated by differing views on AI-generated content. While Harris’s campaign explicitly avoids using AI-generated material, Trump has embraced AI-made content on platforms like X (formerly Twitter) and Truth Social. This includes disseminating misleading AI-generated images, sparking debates about misinformation and its ramifications.

The differing approaches reflect broader societal concerns about AI’s impact. A recent study by the Center for Democracy & Technology highlighted growing awareness of AI-generated deepfakes among U.S. high school students, indicating the technology’s deep penetration into everyday life.

As AI continues to influence political, social, and economic landscapes, voters face critical choices about which leader will best navigate the complexities of an AI-driven era. The divergent perspectives and policies presented by Harris and Trump encapsulate a broader debate over the future direction of AI governance in the United States.

Source: Noah Wire Services
