Can AI Detect Fake Images And Videos?

(Last Updated On: May 5, 2022)

When Tim Berners-Lee introduced us to the World Wide Web, the Internet took the world by storm. Everybody quickly wrapped their heads around this revolutionary technology because we realized the true potential it held. Now, let's also look at how AI applies to Internet content, searches, and results.

The Advent of AI: An Overview

Today, when consumers think of convenience, they turn to the Internet and treat themselves to millions of free services. Similarly, globalization brought easy, improved connectivity, and people can now share their stories from every corner of the world.

News, information, e-commerce, cloud computing, AI, and more have changed how modern industries operate. Hence, we can reasonably conclude that we live in the era of a third major industrial revolution, where technology and AI are set to take over. However, alongside such rapid advancement, we must also look at the vices that come with it, because not all that glitters is gold.

While the Internet is a great source of information and relevant data, it is also easily the primary source of misinformation. Furthermore, misinformation spreads like wildfire given the Internet's broad reach today. Misinformation can lead to severe turmoil, tarnish reputations, and shape misleading public opinion, leading to administrative hindrances.

Therefore, there is an urgent necessity to develop a technology to identify and take down fake images and videos from the Internet today. While human diligence will still be at the helm of affairs, it is physically impossible to go through billions of articles on the Internet.

This is where AI can save the day for all of us. Machine learning and natural language processing can scan content on our behalf and detect all sorts of fake images. Here is an elaborate article introducing artificial intelligence and its strides in misinformation management.

Origin of Fake Images and Videos

When developers work on technology, they often look only at its bright side and future potential. However, failing to recognize the gray areas often lets miscreants take undue advantage of that technology. With similar reasoning, developers at Stanford University introduced the world to lip-sync and face-swap technology.

The purpose was to make cinema editing more convenient. As advertised, the purpose of the lip-sync technology was to make adjustments to technical glitches, avoiding the need to reshoot an entire scene and waste valuable resources on redundant activities. 

Additionally, movies could now be dubbed in regional languages in perfect sync, so the audience wouldn't feel that something was out of place. Similarly, the purpose of face swapping was to superimpose actors' faces onto stunt doubles, so the polish of action scenes isn't compromised. The purpose of this technology was to revolutionize the cinema editing industry. However, some people had something else in mind: the technology went on to serve as a tool to spread misinformation.

Election campaigners and IT cells started to morph public leaders' speeches negatively, with the aim of forming a negative image of these leaders in the public's eyes. A notable case came to light when Joe Biden's speech was re-dubbed with a partisan slant, causing significant public disruption.

Hence, the developers felt it necessary to create an antidote to stop the public from being misled. This article will further discuss how the face-swap and lip-sync technologies work, and what the possible antidote looks like.

How Do Deepfakes Work?

As discussed earlier, deepfake technology is a necessary evil because it allows cinema editors and advertisers to clean up their mistakes during post-production. It also lets editors tweak minor details and adjust them to their advantage. However, bad actors use this same technology to spread misinformation.

Currently, anonymity on the Internet makes it difficult for the targeted person to control the damage. Therefore, the creators of deepfake technology described how it works so that a potential antidote could be developed.

In the case of face superimposition in a video, significant features of the targeted person's face are swapped with the character in the video. In most cases, the videos look convincing, but experts can easily spot the errors on close examination.

Usually, face-swapping tools produce crude results that leave digital and visual artifacts. A computer can easily detect these artifacts through machine learning software. The problem arises with a more sophisticated form of deepfake: lip-sync technology. Here, only a small portion of the face is manipulated to synthesize the lip movement.

Most lip-sync tools can manipulate a person's speech in a way that closely resembles how they would have stated those words. Therefore, with a good sample of the targeted person's original speeches, a deepfake producer can fabricate any video they want.
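The artifact idea above can be sketched in code. This is a toy illustration, not any real detector: crude face-swap blends often leave a seam where local pixel statistics change abruptly, so a hypothetical check can flag rows of a frame whose pixel-to-pixel variation jumps well above the frame's average. The frame data and threshold here are invented for the example.

```python
def row_roughness(row):
    """Mean absolute difference between adjacent pixels in one row."""
    return sum(abs(a - b) for a, b in zip(row, row[1:])) / (len(row) - 1)

def find_seam_rows(frame, factor=3.0):
    """Return indices of rows whose roughness exceeds `factor` x the frame mean.
    `frame` is a list of rows of grayscale pixel values."""
    scores = [row_roughness(r) for r in frame]
    mean = sum(scores) / len(scores)
    return [i for i, s in enumerate(scores) if s > factor * mean]

# Synthetic 6x8 grayscale frame: smooth everywhere except row 3,
# which simulates a crude blending seam left by a face swap.
frame = [[10, 10, 11, 10, 10, 11, 10, 10] for _ in range(6)]
frame[3] = [10, 200, 15, 190, 12, 205, 11, 198]

print(find_seam_rows(frame))  # → [3]
```

Real detectors learn such statistical cues automatically from labeled data rather than using a hand-set threshold, but the underlying signal is the same: manipulated regions don't match the statistics of the rest of the frame.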

Spotting the Fakes

Researchers at Stanford came up with a novel idea to battle deep fakes on the Internet involving lip-sync technologies. The idea is to take a potentially fake video and identify the speaker’s lip movement when making B, P, or M sounds, as these sounds cannot be pronounced without firmly closing the lips. 

Hence, it is the cue for the spotters to identify a deep fake video. Initially, this theory was tested manually, which was successful. However, it seems impossible for a human to take each video and go through it frame by frame. 

Therefore, a machine-learning algorithm was developed to automate the check. Initially, the software was tested on deepfake videos of Obama, and the success rate came out to roughly 90%. Additionally, adaptive algorithms get better with more data, so a neural network was trained to identify fake videos automatically. However, a foolproof machine-learning solution is still awaited.
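The B, P, M rule described above can be sketched as follows. This is a simplified, hypothetical illustration of the idea, not the Stanford system: during bilabial phonemes (B, P, M) the lips must close, so any frame where the measured lip gap stays open during such a phoneme is suspicious. The phoneme labels and lip-gap values (in pixels) are assumed to come from separate speech-alignment and face-tracking tools, which are not shown here.

```python
BILABIALS = {"B", "P", "M"}

def suspicious_frames(frames, closed_threshold=2.0):
    """frames: list of (phoneme, lip_gap_pixels) pairs, one per video frame.
    Returns indices where a bilabial phoneme is spoken with open lips."""
    return [
        i for i, (phoneme, gap) in enumerate(frames)
        if phoneme in BILABIALS and gap > closed_threshold
    ]

# Toy aligned track: a genuine "B" at index 1 (lips close as expected),
# and a lip-synced "P" at index 3 where the lips never close.
track = [("AH", 9.0), ("B", 0.5), ("AA", 8.0), ("P", 6.5), ("IY", 7.0)]

print(suspicious_frames(track))  # → [3]
```

A single mismatch could be a tracking error, so in practice a video would be flagged only when such violations accumulate across many bilabial sounds.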

Conclusion

AI and machine learning are making huge strides, and their applications are growing by the day. Moreover, government agencies and cyber cells across sectors urgently need machine learning professionals who can develop AI software to prevent the spread of misinformation.

If you are keen on a career in this field, sign up with Great Learning to open a Pandora's box of relevant courses, such as the UT Austin AI and machine learning program, designed by industry veterans. Find the course you like and establish yourself as a seasoned AI software developer.

About the Author: John Abraham

