
No one running for president has a plan for AI


Innovation in artificial intelligence is moving faster by the day. Whether AI has the patience for a presidential election cycle, however, is another matter.

Every day, machine learning and other forms of AI, and the content they produce, are outpacing headline writers’ ability to keep up with the technology. In 2023 alone, AI-powered programs have surpassed human capabilities in diagnosing diseases from medical images, including X-rays, MRIs, and CT scans. They can quickly assess students’ learning styles and recommend tailored lesson plans based on their strengths and weaknesses. They can imitate famous artists, compose music and even replicate the sound of a person’s voice.

In the US, these technologies remain largely unregulated and are evolving rapidly as policymakers race to understand and regulate them. They have the power to save lives and harness the limitless potential of the information age, along with the potential to become a job-killing scourge that wreaks havoc on the global economy.

The direction of US AI policy is already emerging as an issue in the 2024 presidential election, with everyone from Republicans Will Hurd and Vivek Ramaswamy to President Joe Biden increasingly focused on AI’s impressive and terrifying possibilities.

“Sixty-five percent of Americans are worried about robots taking their jobs,” Hurd said in a recent interview with ABC News.

Whoever wins the 2024 election could have a profound impact on the direction of AI and the future it will come to define. But at this point, few of the candidates, even the president, have been able to articulate the details of what that future might look like.

Presidents and machines

Washington, DC has been thinking about machine learning and AI for years.

As early as February 2019, President Donald Trump signed an executive order creating the American AI Initiative. The order committed the federal government to doubling investment in AI research, established a series of national AI research institutes, laid out plans for AI technical standards, and set regulatory guidance for the growing industry.

Earlier this week, Vice President Kamala Harris met with labor leaders to discuss their concerns and hopes for the future of the technology. And on Capitol Hill, Senate Majority Leader Chuck Schumer has reportedly been drafting a broad regulatory framework for the AI industry that, while still murky, aims to strike a middle ground between Big Tech and organized labor.

Ultimately, it is the president’s views on artificial intelligence that will likely guide the direction America takes. And while the two parties’ policies have largely mirrored each other from the Trump administration to the current one, Republicans and Democrats emphasize different priorities in how they would execute them.

President Joe Biden delivers a nationally televised speech from the Oval Office on June 2. The president has stated broad principles he believes should govern future AI regulations, but without specifics.
Win McNamee/Getty Images

“There aren’t as many partisan differences as you might expect on broad issues for AI policy,” said Matthew O’Shaughnessy, visiting fellow in technology and international affairs at the Carnegie Endowment for International Peace.

“No matter the outcome of 2024, the White House and Congress will focus on things like developing guidelines for the responsible use of AI, increasing America’s leadership in AI development, and encouraging the use of AI across government,” he told Newsweek.

Specific emphases, O’Shaughnessy added, may differ.

“The Biden administration has emphasized civil rights and equity in its AI work much more than the Trump administration,” he said. “Both administrations prioritize innovation, but the Trump administration was more hesitant to create AI rules it thought could limit growth.”

At least one presidential candidate has indicated he has ideas about where AI policy could go. Ramaswamy, a biopharmaceutical executive, said he will introduce a comprehensive AI plan into his campaign platform, pointing out that there are serious risks both in over-regulating and in ignoring the dangers AI poses.

“Total bans are not the answer,” he wrote on social media. “The right approach is to set clear rules for who bears responsibility for the unintended consequences of AI protocols, and we should be *very* skeptical of the proposed regulations of large companies currently trying to commercialize AI.”

Regulatory templates in the EU and China

At this point, it is not yet known exactly what the US government’s approach to AI will be. But AI is not an exclusively American phenomenon, and some countries have already offered templates for what a potential regulatory scheme might look like.

The European Union, for example, has taken a broad approach, grouping AI applications into four risk categories: “minimal” (the most basic forms of AI), “limited risk” applications such as chatbots and mood detectors, “high-risk” applications such as law enforcement or procurement procedures, and “unacceptable” applications such as social scoring or certain types of biometrics, each subject to different regulations.

Meanwhile, China has launched a national framework that regulates the deployment of specific algorithmic applications in certain contexts, such as consent requirements for deepfake content or guiding principles for algorithms that manage workplace productivity.

While the United States has yet to establish its own formal framework, Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, told Newsweek that some in the private sector, including Microsoft, have begun developing their own policy recommendations. These include licensing requirements for AI companies and third-party audits of how the technology is used.

But given the breakneck pace at which the technology is moving, the candidates, and President Joe Biden in particular, will have to make their positions clear, and soon. More than two-thirds of Americans said they were worried about the negative effects of AI in a May Reuters/Ipsos poll. And 61 percent said they believed it could threaten the fate of civilization.

Articulating the details of his vision for the technology, West said of Biden, will be critical both for the companies that use the technology and for a public that fears its implications.

“Biden has laid out broad principles that he believes should govern future regulations, but he hasn’t really specified what the regulations should be and how far they should go,” West said.

“Of course, that’s what everyone wants to know. That’s what businesses want to know. That’s what advocates want to know, too. We’re at that stage where we have to figure out how to turn broad principles into real regulations,” he said.

An articulated vision

An articulated vision of AI could help calm an anxious public, especially a generation that, for nearly two decades, has been manipulated in myriad ways by algorithms that determine what music people listen to and what products they buy.

That’s the landscape politicians are inheriting and must work to regulate, Vince Lynch, CEO of software firm IV.AI, told Newsweek.

“It’s not going away,” he said. “It’s not going to stop, and even if America decides to stop, it doesn’t mean it’s going to stop. It’s a globally open thing that can be used. We really need people to think about it and focus on how to use it to our advantage versus not paying attention to what’s going on.”

Lynch, whose clients have ranged from Netflix to the federal government, said conversations around AI have long been siloed within the individual companies that use it. That has left federal officials often reactive to AI-related issues, particularly the algorithms that control the more mundane facets of everyday life.

After the 2016 presidential election, for example, the tech company Cambridge Analytica came under international scrutiny for harvesting personal data from platforms like Facebook to feed users inflammatory political commentary. Its actions raised questions about whether the election result was affected.

More recently, OpenAI, the company behind the popular chatbot ChatGPT, has faced investigations by European and Canadian data protection authorities over its own data collection methods. The chatbot has already been temporarily banned in Italy.

While the U.S. should be careful not to stifle innovation in AI, Lynch said, it should be keenly aware of the technology’s risks. The focus, he added, should be on creating regulations that protect people from potential AI harms while also finding ways the technology can work for the benefit of humanity.

“We have to be really thoughtful with this technology,” Lynch said. “It’s incredibly powerful. It’s incredibly useful. It helps us distill human nature, it helps us understand real need, and from a political standpoint, it can help us really understand what people want, regardless of the political cycle.”

