FBI warns of the rise of ‘deepfakes’ in coming months and explains how to spot them easily

The FBI has issued a warning that “malicious actors” may use synthetic content for cyber and foreign influence campaigns.
It said there had been an increase in campaigns since 2019, pointing to Russian and Chinese actors.
Attacks have primarily been carried out to “advance tradecraft and increase the impact” of actors’ activities.

The FBI has issued a stark warning saying “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”

“Synthetic content” refers to any manipulated or generated content across video, photo, text, and audio. It also includes deepfakes, which use artificial intelligence to replace the likeness of one person with another.

In the statement issued March 10, the FBI said “Russian, Chinese, and Chinese-language actors are using synthetic profile images derived from GANs [generative adversarial networks].”

They also pointed to an increase in the number of fake journalists and articles circulating online. While these journalists had a “robust online presence,” their fraudulence can be uncovered by “basic fact-checks.”

Deepfakes have now entered popular culture and are easier than ever to make, becoming the subject of online memes but also of misinformation and abuse, particularly in the form of revenge porn.

Shuman Ghosemajumder, Google’s former “fraud czar,” told Insider last year that deepfakes were likely to evolve and spread further, with “perfectly realistic” deepfakes in our near future.

The FBI said it had “identified multiple campaigns which have leveraged synthetic content” since late 2019, and the number looks set to grow.

These attacks were carried out to “advance tradecraft and increase the impact” of the perpetrators’ activities.

How to spot deepfakes

In its statement, the FBI also detailed how to spot deepfakes. Too much space between the subject’s eyes, as well as head and torso movements and synchronization problems between face and lip movements, could be key clues.

Researchers at the University of Buffalo have also developed a tool for spotting deepfakes and claim it is 94% effective, with lower success rates in non-portrait pictures.

The FBI also issued detailed guidance on how individuals can protect themselves from cybercrime, notably advising that people should not assume an online profile corresponds to a real person.

They also said users should seek multiple sources of information and exercise caution when providing confidential information online or over the phone.
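To make the eye-spacing cue above concrete, here is a minimal Python sketch of that kind of heuristic. It is illustrative only and not the FBI's or Buffalo researchers' method: the landmark x-coordinates would come from a face-landmark detector in a real pipeline, and the "typical" ratio and tolerance values are hypothetical placeholders, not calibrated figures.

```python
# Toy heuristic: flag a face whose inter-eye distance is unusually large
# relative to face width. In practice the coordinates would come from a
# face-landmark detector; here they are hard-coded for illustration.

def eye_spacing_ratio(left_eye_x, right_eye_x, face_left_x, face_right_x):
    """Ratio of inter-eye distance to face width (x-coordinates in pixels)."""
    eye_dist = abs(right_eye_x - left_eye_x)
    face_width = abs(face_right_x - face_left_x)
    return eye_dist / face_width

def looks_suspicious(ratio, typical=0.42, tolerance=0.08):
    """Flag ratios far outside an assumed typical human range.

    Both `typical` and `tolerance` are made-up values for this sketch;
    a real detector would learn them from labeled data.
    """
    return abs(ratio - typical) > tolerance

# Example: eyes at x=210 and x=310 inside a face spanning x=150..400.
ratio = eye_spacing_ratio(210, 310, 150, 400)  # 100 / 250 = 0.4
print(looks_suspicious(ratio))  # within the assumed range, so not flagged
```

Real detection tools combine many such signals (temporal consistency, lip-audio sync, GAN artifacts) rather than a single geometric check, which is why a lone cue like this is only a starting point for manual inspection.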