Trump’s 2024 Victory Sparks AI Policy Debate: What’s Next for America’s Tech Future?

by Victor Dey (@victordey), December 22nd, 2024

Too Long; Didn't Read

As President-elect Trump plans to deregulate AI, debates ignite over safety, innovation, and accountability. With Elon Musk taking a leadership role in shaping policy, the U.S. faces critical decisions in its AI race against China, balancing growth with ethical considerations.


As Donald Trump prepares to take office, America’s artificial intelligence (AI) policy stands at a crucial turning point, sparking debates about the future of AI development and regulation. His victory as the 47th president has already earned congratulations from tech giants like Jeff Bezos and ignited discussions about potentially giving Elon Musk a significant role in shaping the country’s AI policies moving forward. Trump has vowed to reverse many of the AI executive orders put in place by the Biden administration, arguing that they "stifle AI innovation" and push "radical left-wing ideas" onto the development of the technology. Although it's unclear what the direct impact of this repeal will be, it signals a move toward deregulation.


The rhetoric surrounding the planned repeal is dividing opinions, turning AI policy into a partisan issue—even though voters across party lines support balanced regulation. Industry experts, however, are concerned that cutting back regulation may jeopardize AI safety at a time when it’s needed most. “The U.S. must focus on a solid and efficient infrastructure that allows organizations evaluating AI systems and their deployment to publish credible and verifiable information about these systems, including their origins, training data, sensor data provenance, and any security incidents,” Dave Maher, CTO of Intertrust Technologies, told me. “These filters should enforce safeguards and test for violations of ethical principles, such as promoting or endorsing harmful behaviors like violence in an AI model’s outputs.”


A New Direction: Rolling Back AI Regulation?

Musk, who supported Trump throughout his campaign, has been appointed to lead the newly created Department of Government Efficiency (DOGE)—a role aimed at overhauling the nation’s regulatory framework. He’ll be working alongside entrepreneur and politician Vivek Ramaswamy in this ambitious effort. Meanwhile, Americans for Responsible Innovation (ARI), a nonprofit focused on promoting responsible AI development, has started a petition with a goal of 10,000 signatures, urging Trump to appoint Elon Musk as a special advisor on AI. The group argues that Musk’s deep technical expertise and outspoken commitment to AI safety make him the perfect person to lead the U.S. in this crucial tech race.


“The U.S. should lead the world in advancing AI safely and securely. No one is better equipped to help the Trump Administration make America lead on AI than Elon Musk,” reads the petition page on ARI’s website. “With proper mechanisms in place to handle conflicts of interest, Musk would be an invaluable asset for helping the Trump administration navigate the development of this transformational technology. Help us tell President-Elect Trump to make Musk a Special Advisor to the President on AI.”


Musk has long warned about AI’s existential risks and expressed concern over AI becoming too powerful too quickly. But critics aren’t convinced. Some remain skeptical about Musk’s role in shaping AI policy, pointing to his decision to distance himself from OpenAI, a company he helped create, as well as his outspoken opposition to AI regulations. “There’s no way for Elon Musk to be unbiased, just like there’s no such thing as a truly ‘unregulated market.’ The absence of regulation is, in itself, a form of regulation—one that gives big corporations free rein to push their own agendas,” Hamid Ekbia, Director of the Autonomous Systems Policy Institute (ASPI) and Professor at Syracuse University, told me. “When Musk talks about deregulating AI, what he’s after is the chance to outpace his competitors. That's why he and others tend to exaggerate the existential risks of AI.”

China’s Growing AI Prowess Fuels U.S. Concerns

The U.S. faces mounting competition from China, which has invested heavily in AI development with the stated goal of surpassing America by 2030. The AI race has become a central issue in discussions of national security, with both Democrats and Republicans viewing AI technology as a crucial component of defense strategy. China’s lower labor costs and focus on model training give it an edge in the race. Trump’s administration will likely continue to tighten restrictions on Chinese access to advanced semiconductors, a strategy initiated during his first term and expanded under President Biden.


Trump’s stance on AI has been erratic, often praising its potential while warning of its dangers. He has said that the U.S. will require vast infrastructure upgrades for AI, particularly in energy and computing power, to maintain its lead over China. To secure America’s technological position, experts call for ambitious investments in AI infrastructure. A recent proposal from OpenAI suggests creating "National Transmission Highways" to modernize the power grid and meet AI’s immense energy demands, a plan that aligns with Trump’s vision of improving infrastructure. “AI can optimize energy distribution and usage through virtual power plants, which manage thousands of components involved in energy production, storage, and consumption,” added Maher. “However, this is only feasible with extensive automation and AI to support precise decision-making. AI safety and security will be crucial for such systems’ successful, large-scale deployment.”

Future of AI Regulation: What’s at Stake?

Despite widespread bipartisan support, the U.S. AI Safety Institute (AISI)—an organization created after Biden’s executive order to spearhead government efforts on AI safety—could face an uncertain future under President-elect Trump’s newly created DOGE. The department is expected to target federal programs for cuts, and AISI might be on the chopping block. However, tech leaders, lawmakers, and advocates are rallying for a more nuanced approach, arguing that safeguarding AI technology is critical to both national security and ethical progress. Moreover, leading AI companies like OpenAI and Microsoft are vocal about the need for strong safeguards to maintain the U.S.’s position as a global leader in AI.


“AI regulation has to start at the top,” Raj De Datta, CEO and co-founder of Bloomreach, shared with me. “A handful of companies dominate the AI market, and everyone else depends on their data centers or the models they produce. It’s vital that we start with these tech giants—ensuring they respect privacy, operate fairly, use diverse datasets, and uphold values we all agree on. That’s how we get the outcomes we want, the kind that benefit society.”


But others, like Ekbia, caution that the profit-driven approach of big tech companies is unlikely to prioritize the safety of their systems, let alone ethical or environmental concerns. He pointed to recent controversies at OpenAI as evidence that many tech companies put profit—their so-called "bottom line"—ahead of legal and ethical considerations. “How can we expect companies like Google, which has moved operations to tax havens like Ireland or the Cayman Islands, to act responsibly when it comes to developing AI?” he asked.


The current situation underscores a key tension: balancing innovation with accountability. As Trump prepares to reshape U.S. AI policy, the industry faces a period of uncertainty. Whether his administration will accelerate or stifle innovation remains to be seen, but one thing is clear: the stakes for AI safety, security, and leadership are higher than ever before.