
AI-generated fake faces have become a hallmark of online influence operations

Meta headquarters in Menlo Park, Calif. The parent company of Facebook says bad actors are increasingly using realistic faces generated with artificial intelligence to run social media influence operations. (Justin Sullivan / Getty Images)

Fake accounts on social media are increasingly likely to sport fake faces.

Facebook parent company Meta says more than two-thirds of the influence operations it found and took down this year used profile pictures that were generated by a computer.

As the artificial intelligence behind these fakes has become more widely available and better at creating lifelike faces, bad actors are adopting it in their attempts to manipulate social media networks.

"It looks like these threat actors are thinking, this is a better and better way to hide," said Ben Nimmo, who leads global threat intelligence at Meta.

That's because it's easy to just go online and download a fake face, instead of stealing a photo or an entire account.

"They've probably thought...it's a person who doesn't exist, and therefore there's nobody who's going to complain about it and people won't be able to find it the same way," Nimmo said.

Facebook parent Meta says more than two-thirds of the influence operations it took down this year used profile pictures created by an AI technology known as a GAN. (Meta)

The fakes have been used to push Russian and Chinese propaganda and harass activists on Facebook and Twitter. An NPR investigation this year found they're also being used by marketing scammers on LinkedIn.

The technology behind these faces is known as a generative adversarial network, or GAN. The name refers to two neural networks trained against each other: a generator produces images while a discriminator tries to tell them from real photos, and each improves by trying to beat the other. GANs have been around since 2014 but have gotten much better in the last few years, and today websites allow anyone to generate fake faces for free or for a small fee.
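For readers curious about the mechanics, here is a minimal sketch of that adversarial setup in PyTorch. To stay short it learns a one-dimensional toy distribution rather than faces; face generators apply the same generator-versus-discriminator training at vastly larger scale. The network sizes, learning rates, and step counts below are illustrative assumptions, not taken from any system mentioned in the article.

```python
# Minimal GAN sketch: a generator learns to mimic "real" data
# (a Gaussian centered at 3.0) by fooling a discriminator.
# All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(          # maps random noise -> fake samples
    nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(      # scores samples: real (1) vs. fake (0)
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3.0, 0.5)
    fake = generator(torch.randn(64, 8))    # fakes from random noise

    # Discriminator step: learn to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the discriminator say "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    # The generator's output mean should drift toward 3.0.
    print(generator(torch.randn(256, 8)).mean().item())
```

The same tug-of-war, run on millions of photographs instead of a toy Gaussian, is what produces the convincing synthetic faces the article describes.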

A study published earlier this year found AI-generated faces have become so convincing that people have just a 50% chance of guessing correctly whether a face is real or fake, no better than a coin flip.

But computer-generated profile pictures also often have tell-tale signs that people can learn to recognize – like oddities in their ears and hair, eerily aligned eyes, and strange clothing and backgrounds.

"The human eyeball is an amazing thing," Nimmo said. "Once you look at 200 or 300 of these profile pictures that are generated by artificial intelligence, your eyeballs start to spot them."

That's made it easier for researchers at Meta and other companies to spot them across social networks.

"There's this paradoxical situation where the threat actors think that by using these AI generated pictures, they're being really clever and they're finding a way to hide. But in fact, to any trained investigator who's got those eyeballs skills, they're actually throwing up another signal which says, this account looks fake and you need to look at it," Nimmo said.

He says that's a big part of how threat actors have evolved since 2017, when Facebook first started publicly taking down networks of fake accounts attempting to covertly influence its platform. It's taken down more than 200 such networks since then.

"We're seeing online operations just trying to spread themselves over more and more social media platforms, and not just going for the big ones, but for the small ones as much as they can," Nimmo said. That includes upstart and alternative social media sites, like Gettr, Truth Social, and Gab, as well as popular petition websites.

"Threat actors [are] just trying to diversify where they put their content. And I think it's in the hope that something somewhere won't get caught," he said.

Meta says it works with other tech companies and governments to share information about threats, because they rarely exist on a single platform.

But the future of that work with a critical partner is now in question. Twitter is undergoing major upheaval under new owner Elon Musk. He has made deep cuts to the company's trust and safety workforce, including teams focused on non-English languages and state-backed propaganda operations. Key leaders in trust and safety, security, and privacy have all left.

"Twitter is going through a transition right now, and most of the people we've dealt with there have moved on," said Nathaniel Gleicher, Meta's head of security policy. "As a result, we have to wait and see what they announce in these threat areas."

Copyright 2022 NPR. To see more, visit https://www.npr.org.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.