Opinion | Why this year’s election interference could make 2016 look cute

The first big AI crisis isn’t some hypothetical far off in the future. It’s coming on Election Day.

(Ann Kiernan for The Washington Post)

When you watch a man attempt to hit a softball, you don’t expect him to swing at his own head. Apparently, Dario Amodei has never played softball. At a Bloomberg tech conference on May 9, the co-founder and chief executive of Anthropic — which makes the AI assistant Claude and is valued at $15 billion — was lobbed a sunny, end-of-session question: Make the case for why people should be excited about artificial intelligence rather than scared. The best he could offer was, “I’m 10 out of 10 excited, and I’m 10 out of 10 worried.” I’m guessing his publicist went home and got 10 out of 10 hammered.

Almost every AI CEO has a variation on the glass-half-full, glass-might-explode answer. Elon Musk thinks there’s a decent chance AI destroys humanity, which is why he says his company xAI is moving full speed to prevent it. Sam Altman, the head of OpenAI, believes artificial intelligence can solve some of the planet’s hardest problems — while creating enough new ones that he suggests the top AI models be scrutinized by the equivalent of U.N. weapons inspectors.

Unfortunately, all of this dead-ends in the dumbest possible place. Accepting that the promise and peril of AI are equal — and that the only way off the Möbius strip is to keep making more AI until good eventually prevails, or doesn’t — is Wayne LaPierre territory. It was the former head of the National Rifle Association who turned the evasion of responsibility into a sales pitch when he declared, “The only thing that stops a bad guy with a gun is a good guy with a gun.” I don’t think the biggest players in AI are as scoundrelous as LaPierre, and there’s a national security argument for why they should keep sprinting ahead of China. But are we really just going to slouch our way into an arms race that enriches the AI makers while absolving them of any responsibility? Even for the United States, isn’t that almost too on-brand?

This would all be less concerning if the first big AI crisis were some hypothetical far in the future. But there’s one circled on the calendar: Nov. 5, 2024. Election Day.

For more than a year, FBI Director Christopher A. Wray has warned about a wave of election interference that could make 2016 look cute. No respectable foreign adversary needs an army of human trolls in 2024. AI can belch out literally billions of pieces of realistic-looking and sounding misinformation about when, where and how to vote. It can just as easily customize political propaganda for any individual target. In 2016, Brad Parscale, Donald Trump’s digital campaign director, spent endless hours customizing tiny thumbnail campaign ads for groups of 20 to 50 people on Facebook. It was miserable work but an incredibly effective way to make people feel seen by a campaign. In 2024, Brad Parscale is software, available to any chaos agent for pennies. There are more legal restrictions on ads, but AI can create fake social profiles and aim squarely for your individual feed. Deepfakes of candidates have been here for months, and the AI companies keep releasing tools that make all of this material faster and more convincing.

Almost 80 percent of Americans think some form of AI abuse is likely to affect the outcome of November’s presidential election. Wray has staffed each of the FBI’s 56 field offices with at least two election-crime coordinators. He has urged people to be more discerning with their media sources. In public, he’s the face of chill. “Americans can and should have confidence in our election system,” he said at the International Conference on Cyber Security in January. Privately, an elected official familiar with Wray’s thinking told me the director is in a middle manager’s paradox: loads of responsibility, limited authority. “[Wray] keeps highlighting the issue, but he won’t play politics, and he doesn’t make policy,” that official said. “The FBI enforces laws. The director is like, ‘Please ask Congress where the laws are.’”

Is this about to turn into one of those rage-filled yawps about Congress? Yeah, kind of. Because the Senate has spent a year grandstanding about the need to balance speed and thoroughness when regulating AI — and has delivered neither. But stick around for the punchline.

On May 15, the Senate released a 31-page AI road map that drew immediate friendly fire — “striking for its lack of vision,” declared Alondra Nelson, President Biden’s former acting director of the White House Office of Science and Technology Policy. The road map contains nothing that would force the AI makers to step up and nothing that would help Wray. No content verification standard, no mandatory watermarking of AI content, and certainly no digital privacy law to criminalize the deepfaking of voices and likenesses. If the AI industry thinks it should be both arsonist and firefighter, Congress appears happy to hand it both matches and water. But we can at least savor how Sen. Todd Young (R-Ind.), a member of the Senate’s self-described AI Gang, summarized the abdication of responsibility: “Where vagueness was required to come to an agreement, we embrace vagueness.” Dario Amodei, you’re free to go.

You’re familiar with the Spider-Man meme in which two costumed Spider-Men point at each other in confusion, each accusing the other of being an impostor while a criminal gets away? This version seemingly has 2½ Spider-Men. Because while the AI companies shrug, and Congress celebrates vagueness, it’s the social media companies who distribute most of the misinformation. And by comparison, they’re actually not terrible.

I mean, they have been. It was Meta chief executive Mark Zuckerberg who initially dismissed the impact of Russian misinformation on the 2016 election as “pretty crazy.” But a year later, Zuckerberg recognized he was wrong, kicking off what security people at Meta and other platforms, as well as officials in law enforcement, describe as something like Glasnost. Each side acknowledged the stakes of failure and found a way to work together — often using their own AI software to detect anomalies in posting patterns. They would then share findings and zap malicious content before it could spread. The 2020 and 2022 elections were more than proof of concept. They were a success.

But all of that collaboration preceded the boom in AI — and all of it ended last July. Murthy v. Missouri, a case brought by the Republican attorneys general of Louisiana and Missouri, claimed that federal communication with social media platforms to remove misinformation was a “censorship enterprise” that violated the First Amendment. Was the suit an act of political vengeance motivated by the misperception that social media leans left? You bet. But you don’t have to be a partisan to imagine how a back-and-forth between a social media platform and, say, a president with narcissistic personality disorder could turn coercive. U.S. District Judge Terry Doughty sided with the plaintiffs and issued a preliminary injunction, which the U.S. Court of Appeals for the 5th Circuit largely upheld.

In March, Sen. Mark R. Warner (D-Va.), chairman of the Senate Intelligence Committee, revealed the consequences of that decision: eight months of total silence between the feds and social media companies regarding misinformation. “That ought to scare the hell out of all of us,” said Warner. The Supreme Court is scheduled to rule on Murthy v. Missouri next month, and there are at least preliminary indications the justices are skeptical of the lower court’s ruling.

Regardless of the court’s decision, most of the social media executives I speak with are high on the fumes of their own righteousness. For once, they’ve been caught doing the right thing! It’s not their fault the courts got involved! They remind me of my teenage self when it was the neighbor’s son who got in trouble — right down to the convenient amnesia. Misinformation isn’t just the presence of lies but the erosion of credible facts, and Meta, Google, X and the rest have led the charge to enfeeble journalism: first by killing its business model, next by equalizing news with makeup tutorials and ASMR videos under the bloodless banner of “content,” and finally by eliminating it from their feeds altogether. Half a Spider-Man might be generous.

It’s hard to know what November will look like. Not great is the obvious answer. A singular “War of the Worlds”-style act of imaginative deception — a presidential candidate in a compromised position, an announcement of an incoming terrorist attack — seems unlikely and would at least have the benefit of standing out, making it easier to debunk. I’m more frightened of a million AI monkeys on a million AI typewriters cranking out low-level chaos — particularly in local elections. Rural counties are plagued by bare-bones staffing and are often in the middle of news deserts. They’re perfect petri dishes for an information virus that could go undetected. Some states are training election officials and poll workers to spot deepfakes, but there’s no chance all of the country’s more than 100,000 polling places will be prepared — especially when the tech moves so fast that no one can be sure what to prepare for.

The best way to clean up a mess is to never make one in the first place. This is a virtue of responsible adults — and until recently, functional democracies. Maybe the Supreme Court reverses the lower courts in Murthy. Maybe the AI companies volunteer for more oversight and postpone new releases until 2025. Maybe the FBI’s Wray can move Congress. But if none of those maybes come through, prepare for an Election Day in LaPierre Land. Thoughts and prayers, everybody.