Deepfakes have gained massive popularity and recognition in recent years. As both negative and positive applications become more prominent, it has become crucial to explore the technology's benefits while limiting its harms through rules and policies. Regulating companies that use deepfake technologies is a matter of government policy; media posted by individual users, however, is nearly impossible to control. While maliciously altered content and rumors spread through social media platforms could be easily ignored and debunked in the past, deepfake content created for the same purpose is far more convincing and therefore potentially catastrophic for its victims. To get a complete picture of the issue at hand, it is necessary to explore the origin of the technology and the wide array of its current uses.
Deepfake history
Deepfake technologies are not as recent as one might think. To get the complete picture, it’s crucial to talk about the history of deep learning, the machine learning technology deepfakes are based on.
The evolution toward modern deep learning can be divided into several periods. Early precursors of deep learning have existed since the 1940s, although they saw little adoption at the time due to technological limitations. Until the 1960s, the machines that could be built were comparatively simplistic, yet they laid the groundwork for later advances. In the 1980s, artificial neural networks, which would become the backbone of modern deep learning, attracted renewed attention, but progress stalled again due to a lack of computational power. An early precursor of deepfakes was ‘Video Rewrite’, a video editing program created in 1997 by Christoph Bregler, Michele Covell, and Malcolm Slaney. Using machine learning, the program could alter a speaker’s facial movements in existing footage so that they appeared to mouth the words of a different audio track.
The term ‘deepfakes’ originated around the end of 2017 as the username of a Reddit user. He and many other members of r/deepfakes, a subreddit dedicated to deepfake media, created and shared altered content, mostly face-swapped celebrities. Nowadays, deepfake technologies have numerous uses: movie editing, education, art, entertainment, and even healthcare. Deepfakes are also rapidly gaining popularity in commercial services and marketing, with new companies and open-source programs becoming increasingly available to the public.
Companies, users, and deepfakes
Since the explosive growth in deepfakes’ popularity, a number of applications and programs have gained traction among deepfake creators. Depending on the software, users receive an array of tools and features to experiment with. The selection ranges from applications like Hoodem.com or Mmasked.com, both offering an automated online deepfake creation service, to open-source software like DeepFaceLab, which gives its users tools for advanced media manipulation. Other applications, like Reface, use deepfake technologies to face-swap their users into images, videos, and GIFs.
Deepfakes are also gaining popularity in marketing. Companies like deepfake-marketing offer B2B services for marketing campaigns and social media posts. These companies also offer workshops and webinars to educate clients about the potential benefits and dangers of deepfake technologies. Some companies have taken their services further: Hour One, founded by Oren Aharon and Oded Granot, purchases the rights to the likenesses of people willing to entrust their face to the company, creates synthetic characters from them, and sells these copies to other companies for marketing and educational purposes.
The slippery slope
Through new policies, social media platforms are currently trying to either limit or label deepfake content, though the technologies used for deepfake detection are far from ideal. The most feasible way to counteract the spread of malicious content is user literacy. By informing users about the capabilities and potential threats of deepfake content, it is possible to slow the spread of misinformation caused by maliciously altered media.
It is safe to say that these applications of deepfake technologies are a cause for concern. Without the ability to distinguish fake from real media with the naked eye, it can be hard to trust any content posted online. While this remains a sensitive issue, deepfake technologies can also bring considerable benefits, and those are worth exploring.