When It Comes To AI In Elections, We’re Unprepared For What’s Coming

“I don't think we've acted quick enough,” Rep. Yvette Clarke, who has sponsored legislation on the topic, told TPM.

Rep. Yvette Clarke (D-NY) is one of the handful of Democrats who have been trying to get ahead of the possible threats — some that may seriously disrupt the country’s elections and threaten democracy — posed by ever-more-rapidly evolving AI technology. 

Earlier this month, the New York Democrat introduced the REAL Political Ads Act, legislation that would expand current disclosure requirements by mandating that AI-generated content be identified in political ads. 

The bill is one of a few efforts to regulate AI that lawmakers have introduced in recent months, but Clarke’s bill and its companion in the Senate have not yet attracted the Republican support they’d need to pass — or even substantial support from the sponsors’ fellow Democrats. 

AI, meanwhile, is advancing at a ferocious speed, and experts warn that lawmakers are not treating this issue with the seriousness they should, given the role the unprecedented technology could play as soon as the 2024 election. 

As with all aspects of society that may be impacted by AI, the precise role it may play in elections is hard to game out. Clarke’s legislation focuses in particular on her concerns about AI-generated content supercharging the spread of misinformation around the upcoming elections. The need to create transparency for the American people about what is real and what is not is more urgent than ever, Clarke told TPM, in part because the technology is so cheap and easy for anyone to use. 

Experts TPM spoke with echoed that fear. 

“[AI] puts very powerful creation and dissemination tools in the hands of ordinary people,” Darrell M. West, senior fellow at the Center for Technology Innovation at Brookings Institution, told TPM. “And in a high stakes and highly polarized election, people are going to have incentives to do whatever it takes to win — including lying about the opposition, suppressing minority voter turnout, and using very extreme rhetoric in order to sway the electorate.”

“This is not really a partisan issue,” West added. “People on every side of the political spectrum should worry that this stuff might be used against them.”

Some Republicans have expressed concern about the technology, but have not yet signed on to legislation.

Clarke said she is happy to see that there is interest in implementing guardrails, but she is worried that it might be too little too late.

“Experts have been warning members of Congress about this and we’ve seen the rapid adoption of the use of the technology,” Clarke said. But still, the congresswoman told TPM, “I don’t think we’ve acted quick enough.”

“We want to get stakeholders on board. We want to make sure that the industry is to a certain extent cooperative, if not neutral, in all of this so we’re not fighting an uphill battle with respect to erecting these guardrails and protective measures. But when you keep seeing signs of the usage of deceptive video and how rapidly it can be circulated online, that should make everyone uneasy and willing to do the work to erect guardrails,” she added.

Congress has historically been slow — sometimes comically so — to conduct effective oversight of technology-related issues, often reacting to problems rather than proactively addressing them through legislation. That has been especially true when it comes to the role of technology in elections — including, recently, social media. 

“They’re like the drunk looking for the keys,” Oren Etzioni, the founding CEO of the Allen Institute for AI, told TPM. “They are ignoring the clear and present danger.”

“Nothing matters until there is passed legislation,” Imran Ahmed, CEO of the Center for Countering Digital Hate, said.

“We’ve had unbelievable amounts of talk on social media — some incredibly insightful, some incredibly dumb — and yet nothing has happened. We do not need endless discourse on the potential harms of AI, especially given that the people who are producing it are themselves saying we want regulation to avoid a race to the bottom, which is exactly what happened with social media,” he added. “Congress’s failure to deal with social media should not be an excuse for why they failed to do so on AI. It should be a warning against what happens if they fail to do so on AI.”

A few other bills besides Clarke’s have been introduced on Capitol Hill, but experts like Ahmed say, “there are too many piecemeal hypothecated solutions.”

“What we need is a comprehensive framework,” Ahmed told TPM. “It is just the U.S. that remains a laggard in protecting their public safety and protecting the long term sustainability of the industry. Because they seem too scared to touch technology, in case they break it.”

Without comprehensive legislation on AI — addressing the issues and threats experts have been sounding the alarm on — Ahmed says, “Congress remains stupefied in the face of technological advancement, and incapable of serving the American public’s needs.”
