Russian, Chinese and other actors, both foreign and domestic, could flood the 2020 election and the broader political landscape with sophisticated "deepfake" digital forgeries, lawmakers and researchers cautioned Thursday, warnings that arrive as questions mount about whether campaigns and Silicon Valley firms are prepared to ward off a swarm of phony footage.
Off-the-shelf video-editing and artificial intelligence software has made it easier than ever to create so-called deepfakes — advanced visual counterfeits that turn people into digital puppets, doing or saying things they never said or did. And if left unchecked, the phenomenon could supercharge fake news of the sort that pervaded Facebook and other online sites during the 2016 campaign, such as false rumors that Hillary Clinton was dying of Parkinson’s disease or that Pope Francis had endorsed Donald Trump.
Eventually, the widespread existence of deepfakes could even make some people dismiss legitimate videos as fabricated — in yet another blow to public faith in objective reality.
During a first-of-its-kind congressional hearing on the rapidly emerging technology, expert witnesses and lawmakers called recent high-profile forgeries of House Speaker Nancy Pelosi and Facebook CEO Mark Zuckerberg just a preview of the wave of more advanced visual disinformation that could soon plague the campaign trail.
"The circulation of deepfakes has potentially explosive implications for individuals and society," University of Maryland law professor Danielle Citron said in her written testimony. "Under assault will be reputations, political discourse, elections, journalism, national security, and truth as the foundation of democracy."
And Clint Watts, a fellow with the Foreign Policy Research Institute and the German Marshall Fund’s Alliance for Securing Democracy, cautioned that Russia and China will likely be at the fore of vast disinformation campaigns aimed in part at "subverting democracy and demoralizing the American constituency."
"These two countries, along with other authoritarian adversaries and their proxies, will likely use deepfakes as part of disinformation campaigns seeking to discredit domestic dissidents and foreign detractors; incite fear and promote conflict inside western-style democracies; and three, distort the reality of American audiences and the audiences of American allies," Watts said.
House Intelligence Chairman Adam Schiff (D-Calif.), whose panel hosted the session, cited recent reports of journalists and researchers easily altering videos of public figures like Sen. Elizabeth Warren (D-Mass.) to warn about potential misuse of the technology.
“Thinking ahead to 2020 and beyond, one does not need any great imagination to envision even more nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real and what is fake," Schiff said.
The tech is still in its early stages: The deepfake videos produced so far tend to have clunky audio and a halting, uncanny appearance that don’t take an expert eye to spot. Then again, it doesn’t take especially sophisticated phoniness to fool people by the millions.
“The deepfakes I’ve seen don’t fool anybody yet, but they’re very accessible and the technology is improving all the time,” said Brooke Binkowski, former managing editor at Snopes, which ended a fact-checking partnership with Facebook earlier this year. "So I assume that sooner rather than later it’s going to be accessible to anybody with a phone."
The topic of deepfakes gained renewed attention last month after crudely edited videos of Pelosi, slowed to make the House speaker appear to drunkenly slur her words, spread across social media platforms.
Facebook declined to take down the videos, saying its policies do not ban false content, and opted instead to limit their visibility on the platform. The company then made a similar call this week after pranksters posted fake footage to Instagram showing Zuckerberg bragging of his “total control of billions of people’s stolen data.” Deepfakes of celebrities have also shown up on YouTube, as well as on porn websites, and CNN illustrated the problem this week with videos showing crude likenesses of Trump and Warren making remarks that had actually appeared in spoofs on “Saturday Night Live.”
The Pelosi videos — produced without the use of artificial intelligence — provided government officials and social media companies with a “dry run” at grappling with video-based disinformation, Schiff said in an interview with POLITICO. But he said their more sophisticated counterparts could be “election altering.”
And during the hearing, Schiff said social media firms had to act fast to improve their safeguards against the technology. "Now is the time for social media companies to put in place policies to protect users from misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections," he said. "By then, it will be too late."
The Pelosi and Zuckerberg clips have reignited a divisive debate over how active online platforms should be in identifying and taking down intentionally false content, including where they should draw the line on deepfakes.
For now, many of the top social networking sites do not explicitly ban deepfakes on their platforms. Spokespeople for Facebook, Instagram, Twitter and Google-owned YouTube said the companies would take enforcement actions against such videos — including potentially taking down the content — if they violate their existing policies, which prohibit everything from harmful and violent content to automated and spam behavior.
But many of those same policies have been subject to intense scrutiny on Capitol Hill, with lawmakers routinely blasting the companies for not moving quickly enough to take down racist, terrorist and even pedophilic content.
To ward off deepfakes in particular, companies have invested in detection technologies to identify forgeries without needing human review. Facebook, for one, announced this year that it is investing $7.5 million in partnerships with research institutions to improve tools to detect manipulated media, including deepfakes. That has given hope to some researchers and fact-checkers who contend that concerns over the software are overblown.
"Deepfakes can be identified, so even though it requires some technological savvy to identify a doctored video, the concern among some that we wouldn’t be able to tell a fake from a real video does not seem to be well-founded," said Chimène Keitner, a professor at the University of California Hastings College of the Law who has researched the technology.
But others are more worried, particularly when the online platforms lack unambiguous bans on deepfakes and other manipulated or manufactured material. Binkowski, now the managing editor of the fact-checking site Truth or Fiction, argued that as long as tech companies make user engagement a higher priority than keeping damaging content off their platforms, more instances of disinformation will continue to crop up.
"They want tech solutions to problems that cannot be solved by tech," she said. She added, "What they’re actually doing is putting a Band-Aid on a problem that can only be solved by changing their internal policies about engagement."
In the meantime, it’s unclear if any 2020 campaigns are taking specific steps to prepare for the risk posed by deepfakes.
POLITICO asked the campaigns of the 24 Democrats running for the White House, plus those of President Donald Trump and GOP challenger Bill Weld, what policies, strategies or protocols they have in place to deal with any deepfake forgeries targeting their candidates. Most didn’t respond. Several said they don’t comment on internal security policies. Others said they have staff broadly dedicated to the issue of disinformation, and some linked the issue to their cybersecurity efforts.
“This campaign is actively engaged in defending our operation from disinformation and other cyber attacks,” said a spokesperson for former Texas Rep. Beto O’Rourke. A campaign official for Massachusetts Rep. Seth Moulton said the candidate has “staff who work on the issue area.”
A spokesperson for the Trump campaign said in an emailed statement that the team “maintains constant vigilance, since the media and others online routinely distort the President’s remarks, record, and positions,” adding, “We fight back when it’s warranted.”
None of the campaigns detailed protocols or policies they have in place to deal specifically with deepfake videos and images directed at their candidates — or offered details about the amount of resources they have allocated to tackling disinformation more broadly.
Meanwhile on Capitol Hill, efforts to crack down on deepfake technology are only just getting off the ground.
At the House Intelligence hearing, several lawmakers questioned whether tech companies would be forced to crack down on deepfakes if Congress were to amend or even kill a longstanding, industry-cherished safe harbor shielding them from lawsuits over user-posted content. The tech industry has fiercely defended those legal protections, offered under Section 230 of the Communications Decency Act. But Schiff said he saw broad support for reconsidering the rule.
"If the social media companies can’t exercise a proper standard of care when it comes to a whole variety of fraudulent or illicit content, then we have to think about whether that immunity still makes sense," Schiff told reporters after the session. But he did not signal immediate plans to pursue legislation on the topic.
Separately, another Democratic lawmaker, Rep. Yvette Clarke (D-N.Y.), on Wednesday introduced legislation that would require creators to label deepfakes with digital watermarks and disclaimers identifying them as forgeries. But the bill has no co-sponsors, and Clarke conceded Wednesday that legislative action on the issue has been hard to come by.
"There is conversation. There is a certain level of awareness. There just hasn’t been action," Clarke said. "And I think that what we’re not as conscious of how quickly this type of technology can be deployed."