With the new AI tools for computers, fake videos and fake news are already possible, and what we are seeing today is only a first taste of what will be possible in the future. Now we have the first concrete use case: someone used an algorithm to insert the face of "Wonder Woman" star Gal Gadot into a porn video, and the results are terrifyingly convincing.
Reddit user deepfakes trains neural networks on the faces of celebrities, then uses face detection to insert the animated faces into real porn. As far as I can tell, the videos are real, but I won't link them here; you can find them yourself. The result is authentic-looking porn starring your favorite actress. Fake porn made with Photoshop is about as old as the internet, and realistically probably as old as Photoshop itself; the difference is that here the process is automated and will soon work on demand. Even if AI developers build in safeguards meant to prevent this kind of "identity theft", there will be hacks and cracks for those too. The concept won't go away, and it probably won't stop with actresses either. Welcome to the future.
So far, deepfakes has posted hardcore porn videos featuring the faces of Scarlett Johansson, Maisie Williams, Taylor Swift, Aubrey Plaza, and Gal Gadot on Reddit. I've reached out to the management companies and/or publicists who represent each of these actors to inform them of the fake videos, and will update if I hear back. [...]
According to deepfakes - who declined to give his identity to me to avoid public scrutiny - the software is based on multiple open-source libraries, like Keras with TensorFlow backend. To compile the celebrities' faces, deepfakes said he used Google image search, stock photos, and YouTube videos. Deep learning consists of networks of interconnected nodes that autonomously run computations on input data. In this case, he trained the algorithm on porn videos and Gal Gadot's face. After enough of this “training,” the nodes arrange themselves to complete a particular task, like convincingly manipulating video on the fly. Artificial intelligence researcher Alex Champandard told me in an email that a decent, consumer-grade graphics card could process this effect in hours, but a CPU would work just as well, only more slowly, over days. [...]
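The training loop described above can be sketched in miniature. This is a toy linear "autoencoder" in plain NumPy, not the actual setup (which, per the quote, used deep networks built on Keras with a TensorFlow backend); it only illustrates what "nodes arranging themselves" through training means: weights are nudged until the network can reconstruct its input.

```python
import numpy as np

# Toy linear autoencoder: compress a 64-dim "face vector" to 8 dims
# and reconstruct it. Illustrative only -- the real system used deep
# convolutional networks, not a single linear layer.
rng = np.random.default_rng(0)
dim, bottleneck = 64, 8
W_enc = rng.normal(0, 0.1, (bottleneck, dim))   # encoder weights
W_dec = rng.normal(0, 0.1, (dim, bottleneck))   # decoder weights

def step(x, lr=0.01):
    """One gradient step: reconstruct x from its compressed code."""
    global W_enc, W_dec
    z = W_enc @ x              # encode
    x_hat = W_dec @ z          # decode
    err = x_hat - x            # reconstruction error
    W_dec -= lr * np.outer(err, z)          # gradient w.r.t. decoder
    W_enc -= lr * np.outer(W_dec.T @ err, x)  # gradient w.r.t. encoder
    return float(err @ err)    # squared reconstruction loss

x = rng.normal(size=dim)       # stand-in for one face image
losses = [step(x) for _ in range(200)]
print(losses[-1] < losses[0])  # loss shrinks as the "nodes" adapt
```

The same principle, scaled up to convolutional layers and millions of parameters, is what runs for hours on a GPU or days on a CPU.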
"I just found a clever way to do face-swap," he said, referring to his algorithm. "With hundreds of face images, I can easily generate millions of distorted images to train the network," he said. "After that if I feed the network someone else's face, the network will think it's just another distorted image and try to make it look like the training face."
In a comment thread on Reddit, deepfakes mentioned that he is using an algorithm similar to one developed by Nvidia researchers that uses deep learning to, for example, instantly turn a video of a summer scene into a winter one. The Nvidia researchers who developed the algorithm declined to comment on this possible application.