Was the call you received from your boss, asking you to do something unusual, really from your boss? Is the person in the questionable picture or video of an acquaintance being circulated really them? It is fun to use an app to sound like a famous artist, or to watch your favourite actor perform stunts that seem physically impossible. But the former situations can put one in a risky position. The internet and Artificial Intelligence have made our lives easier, but they also bring risks, including fraud and deception. Misinformation is only a click away. The infamous 2018 public service announcement featuring a deepfaked Barack Obama took the internet by storm and created a buzz around the concept.
The reason this buzz should be created again is that we, as a community, are spending more time on the internet than ever. With the pandemic in full swing, many organisations are planning a permanent work-from-home structure for most positions, which opens up opportunities for remote work and breaks down geographical barriers. But it also increases the risk of fraud, especially with technology advancing at such a fast pace and false information getting harder to verify. The scope of AI-generated deepfakes has also expanded: it now covers not only sophisticated images and videos but also audio. Deepfake phishing differs from email phishing in that it looks more authentic and is harder to catch.
To understand and counter a deepfake, it is important to learn how it works. Broadly, a programmer uses an AI tool that learns patterns from large datasets. It is trained to study the appearance and behaviour of a person and to superimpose that likeness onto existing content, carefully learning the angles and reactions, eventually producing synthetic media. Although there are many ways of creating fake media, the most common approach uses autoencoders built on deep neural networks. Let's understand this step-wise:
Finding the content that is to be over-written.
Gathering enough media of the person to be duped.
Using an autoencoder that employs face-swapping technology.
The autoencoder studies the person from various angles and in various environments, eventually mapping their features onto the target video and over-writing the original content.
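The steps above can be sketched in code. The sketch below is a deliberately tiny, untrained illustration of the classic face-swap architecture: one shared encoder plus one decoder per identity, so that encoding person A's face and decoding it with person B's decoder keeps A's pose and expression but B's appearance. All sizes and weights here are hypothetical stand-ins, not a real, trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64x64 grayscale faces flattened to 4096 pixels,
# compressed to a 128-dimensional latent code.
PIXELS, LATENT = 64 * 64, 128

# One shared encoder learns a person-independent description of a face
# (pose, expression, lighting); one decoder per identity learns to
# reconstruct that specific person's face from the shared code.
W_enc = rng.normal(0, 0.01, (LATENT, PIXELS))    # shared encoder weights
W_dec_a = rng.normal(0, 0.01, (PIXELS, LATENT))  # decoder for person A
W_dec_b = rng.normal(0, 0.01, (PIXELS, LATENT))  # decoder for person B

def encode(face):
    # Compress a face into the shared latent code.
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    # Reconstruct a face from the latent code with one identity's decoder.
    return W_dec @ code

def face_swap(face_of_a):
    """Encode person A's frame, then decode with person B's decoder:
    the result keeps A's pose/expression but wears B's identity."""
    return decode(encode(face_of_a), W_dec_b)

frame = rng.random(PIXELS)  # a stand-in for one video frame of person A
swapped = face_swap(frame)
print(swapped.shape)        # (4096,) -- same size as the input frame
```

In a real pipeline both decoders are trained to reconstruct thousands of frames of their respective person through the shared encoder; the swap trick only works because the encoder is forced to describe any face in the same terms.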
After this, a Generative Adversarial Network (GAN), another machine learning tool, is added to the mix. Over multiple rounds, it improves the quality of the media by detecting and correcting any remaining flaws.
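The adversarial idea behind a GAN can be shown on a toy problem. In the sketch below the "real data" is just numbers drawn from a bell curve rather than face images, and both networks are single linear units with hand-derived gradients; the shape of the loop, though, is the real thing: a discriminator learns to flag fakes, and the generator is updated until the discriminator can no longer tell the difference.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Real" data the generator must imitate: samples from N(4, 1). In a
# deepfake pipeline this would be genuine face images; here, a 1-D toy.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, starting far from the real distribution.
a, b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.01, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake, real = a * z + b, real_batch(n)

    # --- discriminator update: push d(real) -> 1 and d(fake) -> 0 ---
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- generator update: nudge a, b so the discriminator is fooled ---
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)  # gradient of log d(g(z)) wrt a
    b += lr * np.mean((1 - d_fake) * w)      # gradient of log d(g(z)) wrt b

print(f"generator offset b after training: {b:.2f}")
```

The generator starts centred at 0 and drifts toward the real data's centre at 4 because every update moves it in whichever direction fools the discriminator more. Swap the 1-D numbers for images and the linear units for deep networks, and this loop is what polishes a deepfake over "multiple rounds".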
Apart from these sophisticated technologies, plenty of apps make it easy for a layperson to create such synthetic media; the most common include FaceApp, Zao and DeepFace App. Also, as the software development community becomes more open day by day, GitHub, an open-source code-hosting platform, hosts ready-made deepfake projects. Increased accessibility to such tools, combined with rising cybercrime, can be dangerous for teenagers and their mental health. As for audio deepfakes, they can be used to make fraudulent calls and authorise money transfers. There is also the threat of identity theft, in which the fraudster either creates new accounts to commit fraud or gains access to an existing account to transfer funds and steal.
How to save yourself?
As of now, India does not have any regulations explicitly targeting deepfakes, so the most plausible way to protect yourself from this threat is to stay aware and keep an eye out for anything that looks suspicious. Some synthetic media can easily be detected because of its poor quality; automated calls, for instance, may sound computerised and mechanical. Biometrics can also be used in combination with two-factor authentication, such as One Time Passwords (OTPs). For videos, look out for details like facial expressions, hair movement, the smoothness of skin, the sync between audio and video and, most importantly, teeth. A mediocre deepfake might not get such aspects right, and this is where attentiveness can fill the gap. But as more sophisticated technology is deployed, these flaws can easily be corrected, and a so-called flawless impersonation might not be that difficult to achieve. Until reliable deepfake-detection technology is widely available, the need of the hour is to cross-check anything unusual.
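To see why one-time passwords make a good second factor against a voice impersonator, it helps to see how short-lived they are. The sketch below derives a standard authenticator-app code (TOTP, per RFC 6238, built on RFC 4226's HOTP) using only the Python standard library; the secret here is the RFC's published demo key, never something to hard-code in practice.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks a window
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    return hotp(secret, int(time.time()) // period, digits)

# Demo secret from the RFC test vectors -- never hard-code a real one.
secret = b"12345678901234567890"
print(hotp(secret, 0))   # "755224" (RFC 4226 Appendix D test vector)
print(totp(secret))      # current 6-digit code, changes every 30 seconds
```

Because the code is recomputed from a shared secret every 30 seconds, a fraudster who has cloned someone's voice still cannot produce the current number, which is exactly the gap a convincing deepfake call cannot cross.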