Like so many other deepfakes, the video looks real at first glance.
Ksenia Turkova, a journalist with VOA’s Russian Service, is seated at a desk looking directly at the camera. She’s wearing a smart suit and is introducing a guest who will talk about cryptocurrency trading software. Turkova looks like herself, and she sounds like herself.
The only problem is that this video isn’t real. It’s a deepfake — a fake video created by artificial intelligence to look authentic.
It’s also a concerning example of artificial intelligence being used to create videos in which real news anchors are reporting fake stories, a phenomenon experts say risks spreading disinformation and sowing distrust in the media.
VOA’s Russian Service on Friday became aware of a deepfake video being disseminated on Facebook that uses VOA branding and Turkova’s AI-generated voice and appearance. Instead of sharing news, the video promoted a trading product.
When Turkova first saw the video on Friday, she said it took her a few moments to realize it was fake.
“My first reaction was, ‘I don’t remember that. I didn’t say that,’” she said. “Then I realized it was a deepfake. I’ve never experienced anything like that.”
VOA is far from the only outlet to be the target of this kind of tactic — the ubiquity of which is cause for concern among AI and disinformation experts. It’s a relatively new ploy, but journalists at news outlets including CNN, CBS and the BBC have also been the subject of deepfake impersonations, which usually aim to spread false information.
Originally from Moscow, Turkova worked as a journalist in Russia and Ukraine before moving to the United States in 2017 to work at VOA. She said she is more concerned about potential future deepfakes of her than this particular video.
“Good thing it was an advertisement. But what will happen next time?” she said. “What if they fake a video of me saying I support [Russian President Vladimir] Putin’s war in Ukraine, for example, or saying I support terrorist attacks? Or will it be porn content?”
“I felt vulnerable, and I felt that my reputation, my trustworthiness as a journalist can be in danger,” Turkova added.
In the video, Turkova appears to give introductory remarks in Russian. But the majority of the video consists of a speech, also AI-generated, by Yuri Milner, a Soviet-born Israeli entrepreneur. Milner appears to be lauding a Quantum AI trading platform in an advertisement of sorts. It is unclear whether the trading platform is legitimate.
The video was posted on the Facebook account of Argentinian communications agency Digit Estudio, which appears to have been inactive since 2021. VOA was unable to determine who created the video. The Facebook account and the videos are no longer visible.
VOA messaged the phone number listed on Digit Estudio’s LinkedIn account but has not received a reply. Meta, Facebook’s parent company, did not reply to VOA’s email requesting comment for this story.
VOA filed a complaint with Facebook last week about the video, but it has not been resolved, VOA spokesperson Emily Webb said in a statement on Monday.
“We are very concerned about the threat artificial intelligence poses to VOA’s mission of delivering unbiased and factual reporting,” Webb added.
Until recently, creating fake newscast videos was too time- and resource-intensive to be practical, according to John Scott-Railton, who researches disinformation at the Citizen Lab.
“That is no longer the case,” he told VOA. “Facts are at a disadvantage, and part of the price we pay for this is distrust in legacy media and genuine news outlets.”
It’s important to remember that the point of disinformation often isn’t to convince people to think one way or another, said Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights.
“They don’t really try to change your mind. They try to undermine your faith in what you’re seeing and hearing and reading so that you’ll be confused,” he told VOA. “Adopting the appearance of being VOA or being CNN is the perfect way to confuse people.”
Doctoring news broadcasts can be an effective disinformation strategy because “news anchors are the face of trust in conventional media,” said Bill Drexel, who works on AI at the Center for a New American Security think tank in Washington.
Perpetrators of this kind of disinformation “essentially hijack” a news outlet’s trustworthiness and legitimacy, he added.
“Leveraging news anchors seems to be one of the more effective methods to spread misinformation and disinformation we’ve seen so far,” Drexel said.
To Barrett, deepfake newscast videos highlight the need for government regulation of artificial intelligence.
U.S. President Joe Biden on Monday signed an executive order laying out initial guardrails for AI companies.
Barrett said the new regulations are comprehensive, but he cautioned there is no order the White House can issue to prevent foreign governments or other actors from misusing open-source AI systems.
“It’s an aspect of the introduction of new technology that requires the designers of the technology to take steps to make sure their products cannot be manipulated,” Barrett said.
Turkova also worries about how deepfakes may be used against her in the future.
“I don’t want the trust of the audience to be in danger,” she said. “Everything is so fragile.”