Both artificial intelligence face generators and deepfake video creation software are getting easier to deploy and are making disinformation harder to trace, which will take an increasingly heavy toll on our politics in coming election cycles.
The ‘Immediately suspicious’ and ‘Fake faces’ sections of the article get into the technical details of how these fakes are made and detected. It’s fascinating and scary stuff.
“One of the things that investigators look at to understand the narrative that is spreading is whether the accounts are authentic, whether they’re real,” DiResta said. “If they were to use a stock photo, it confirms something dishonest is likely happening. By using an AI-generated face, you’re guaranteeing you won’t find that person elsewhere on the internet.”
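DiResta’s point about stock photos rests on reverse image search: a reused photo hashes to something already in an index, while a freshly generated face matches nothing. A toy sketch of the “average hash” idea behind such lookups, with hypothetical 8x8 grayscale thumbnails standing in for real downscaled images:

```python
# Toy "average hash" perceptual hashing sketch. A reused stock photo
# (even after re-encoding) lands a few bits from its indexed copy,
# while an unrelated image (like a one-off GAN face) is far away.
# The 8x8 pixel lists below are hypothetical stand-ins for real
# downscaled grayscale thumbnails.

def average_hash(pixels):
    """pixels: flat list of 64 grayscale values (one 8x8 thumbnail)."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

stock = [i % 256 for i in range(64)]             # "indexed" stock photo
repost = [p + 3 for p in stock]                  # same photo, slightly re-encoded
unrelated = [(i * 37) % 256 for i in range(64)]  # a different image

print(hamming(average_hash(stock), average_hash(repost)))     # small distance
print(hamming(average_hash(stock), average_hash(unrelated)))  # large distance
```

Real systems (TinEye, Google reverse image search) use far more robust fingerprints, but the principle is the same: duplicates cluster, one-of-a-kind AI faces don’t, which is exactly why they’re attractive for fake accounts.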
BTW, I didn’t use the article headline in my thread title because my point in posting this is not the Trump/Biden implications (and all the standard partisan replies that would ensue from both sides), but the technical details in the second half of the article, which represent some good reporting on a problem that will have major ramifications in coming years.