The internet has always been filled with misinformation, but until recently it wasn’t difficult to separate fact from fiction with a little effort. The rise of sophisticated artificial intelligence tools has changed that forever, making skepticism more important than ever.
Deepfakes
The word “deepfake” covers a whole family of technologies that share one thing: the use of deep learning neural networks to achieve their goals. Deepfakes caught the public eye when replacing someone’s face in a video became easy. For example, someone could swap an actor’s face for the American president’s, or replace only the president’s mouth and have him say things that never happened.
For a while, a human still had to impersonate the voice, but deepfake technology can now reproduce voices too. Deepfakes can also run in real time, opening up the possibility that a hacker or other bad-faith actor could impersonate someone during a video call or live broadcast. Any video “evidence” you see on the internet should be treated as a potential deepfake until verified.
AI image generation
AI image generators have caused a stir in the artist community over the implications for those who make a living as artists and whether commercial artists are at risk of being replaced. What doesn’t spark as much debate is this technology’s potential for misinformation.
AI image generation systems can produce entirely fabricated, photorealistic images from text prompts, sample images, or existing images supplied for manipulation. For example, you can erase part of an original image, then use a technique known as “inpainting” to have the AI replace the erased region with whatever you want. It’s easy to generate images on your own PC with software like Stable Diffusion.
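To make the inpainting workflow concrete, here is a deliberately simplified sketch of the mask-then-fill structure it relies on. Real systems like Stable Diffusion use a diffusion model conditioned on a text prompt to synthesize the erased region; this toy stand-in just fills the erased pixels with the average of the surrounding ones, so the function name and fill strategy are illustrative assumptions, not how any real tool works.

```python
def inpaint_mean_fill(image, mask):
    """Toy inpainting: image is a 2D list of grayscale pixels (0-255),
    mask is a 2D list that is True where pixels were erased.
    Returns a copy with erased pixels filled by the mean of the kept ones."""
    kept = [image[y][x]
            for y in range(len(image))
            for x in range(len(image[0]))
            if not mask[y][x]]
    fill = sum(kept) // len(kept) if kept else 0
    return [[fill if mask[y][x] else image[y][x]
             for x in range(len(image[0]))]
            for y in range(len(image))]

# A bright pixel (200) is "erased" via the mask and replaced
# with content derived from its surroundings, just as a real
# inpainter replaces an erased region with generated content.
image = [[10, 10, 10],
         [10, 200, 10],
         [10, 10, 10]]
mask = [[False, False, False],
        [False, True, False],
        [False, False, False]]

result = inpaint_mean_fill(image, mask)
print(result[1][1])  # prints 10: the erased pixel now blends in
```

In a real tool the fill step is where the text prompt comes in: instead of averaging neighbors, the model generates whatever you asked for, seamlessly blended into the untouched pixels.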
If you wanted to make it look like someone was holding a real gun instead of a toy, that’s trivial for the AI. Want to create a scandalous photo of a celebrity? AI image generation (and deepfakes for that matter) can be abused in this way. You can even generate photorealistic faces of people who don’t exist.
AI video generation
AI image generation and deepfakes are just the start. Meta (Facebook’s parent company) has already demonstrated AI-powered video generation, and while it can only produce a few seconds of footage for now, we expect both the length of generated video and the degree of control users have over its content to grow rapidly.
It’s already entirely possible for an AI to generate a grainy clip of Bigfoot or Nessie without anyone dressing up in a costume or flying off to Scotland with a camera and a small wooden model. Video was never easy to manipulate before AI video generation arrived; now, you can no longer trust any video you see.
AI chatbots
When you chat with customer support these days, there’s a good chance you’re talking to a machine rather than a human. AI technology (and traditional programming methods) is good enough for machines to hold sophisticated conversations with us, especially in a narrow domain like arranging a warranty replacement or asking a technical question about a product.
Speech recognition and text-to-speech are also highly advanced, and if you watch demos of systems like Google Duplex, you’ll get a real sense of where this is headed. Once AI-powered bots are let loose on social media platforms, the potential for concerted disinformation campaigns with real consequences becomes high.
To be fair, social media platforms like Twitter have always had a bot problem, but in general those bots weren’t sophisticated. It’s now conceivable to create a made-up person on social media who will fool just about anyone. A bot might even use the other technologies on this list to create images, audio, and video that “prove” it’s real.
AI writers
We go to the internet for information about the world and to find out what’s happening in it. Human writers (that’s us!) are a key source of this information, but AI writers are getting good enough to produce work of similar quality.
Just as with AI artists, there is debate over whether such software will replace people who write for a living, but again there is a misinformation angle that is largely ignored.
If you can generate an original face, run a social media persona bot, and create video and audio featuring your made-up persona, it becomes possible to fabricate an entire fake news story overnight. Dodgy “news” websites are already a compelling source of misinformation for many internet users, and AI technology like this can compound the problem.
The detection problem
These technologies are a problem not only because they open up new avenues of abuse, but also because detecting the counterfeits can be difficult. Deepfakes are already reaching the point where even experts struggle to tell what is fake and what isn’t. That’s why researchers fight fire with fire, using AI to detect generated or manipulated images by looking for telltale signs invisible to the human eye.
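The core idea behind such detectors can be sketched in a few lines: compute a statistical measure a human would never notice, then flag images whose measure falls outside a plausible range. Real detectors are trained neural networks; this toy heuristic, including its function names and the threshold, is an invented illustration of the shape of the approach, not an actual detection method.

```python
def smoothness_score(image):
    """Mean absolute difference between horizontally adjacent pixels.
    Lower scores mean suspiciously uniform texture."""
    diffs = [abs(row[x + 1] - row[x])
             for row in image
             for x in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def looks_generated(image, threshold=1.0):
    # Hypothetical cutoff: flag images that are "too smooth" to be
    # a natural photo. Real detectors learn such cutoffs from data.
    return smoothness_score(image) < threshold

synthetic = [[128, 128, 128, 128]] * 4  # perfectly uniform region
natural = [[120, 135, 97, 142]] * 4     # noisy, camera-like texture
print(looks_generated(synthetic), looks_generated(natural))  # True False
```

Any single hand-written tell like this is easy for generators to learn around, which is exactly why detection quickly becomes an arms race.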
This will work for a while, but it could also create an unintended AI arms race that ironically pushes the technologies that create fake content to ever-higher levels of fidelity. The only sensible strategy for us as humans is to treat everything we see on the internet as fake until proven otherwise, unless it comes from a verified source with transparent processes and policies. (Although we doubt your conspiracy theorist uncle believes the UFO videos he keeps sending you aren’t real.)