[SOLVED]: How to Recognize Deepfakes: Five Helpful Tips, Part Two

The dangers of Deepfake

While this sci-fi tech might seem fascinating at first glance, there are many risks involved. Although we have become accustomed to the circulation of fake photos, many people still blindly trust the images they see and use them to form opinions about events or people. Can you imagine the potential impact of a deepfake video? Experts warn that deepfakes could be used, for example, to manipulate elections or provoke social unrest with videos that are entirely fabricated.

On the other hand, do not think that deepfakes are only used to embarrass famous personalities: in several countries, including Mexico, they have already been used to extort ordinary individuals. Victims have been contacted on social networks and then blackmailed with fake videos in which they appear, for example, naked or having sex.


What can you do?

First of all, whenever you receive a video in which something seems too amazing to be true, search the web for alternative, reliable sources (such as reputable media outlets). In the case of speeches by politicians, for example, any strongly worded or suspicious remarks should be verified.

As far as your personal safety is concerned, think carefully about the information you upload to social networks, and if you are unlucky enough to fall victim to one of these blackmail attempts, report it to the police immediately.

What is a Deepfake? Everything you need to know

Deepfakes use deep learning artificial intelligence to replace the likeness of one person with another in video and other digital media.
This article gives an overview of Deepfakes – what they are, how they work, and how they can be detected.
Computers are more and more efficient at simulating reality. Modern cinema, for example, often relies on computer-generated sets and characters, and these scenes are, for the most part, indistinguishable from reality.

What is a deepfake and how does it work?

The term “Deepfake” comes from the underlying technology, “deep learning”, which is a form of AI. Deep learning algorithms, which teach themselves to solve problems when given large data sets, are used to swap faces in video content to create realistic-looking fake media.

There are several methods for creating Deepfakes, but the most common relies on deep neural networks built around autoencoders that apply a face-swapping technique. First you need a target video to use as the basis for the Deepfake, and then a collection of video clips of the person you want to insert.

The autoencoder is a deep learning program that studies the video clips to understand what the person looks like from different angles, then maps that person onto the individual in the target video by finding common features.
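The shared-encoder, two-decoder layout behind this face-swap technique can be sketched in a few lines. The NumPy sketch below is purely illustrative: the dimensions, random weights, and function names are placeholders, and a real system trains deep convolutional networks on thousands of aligned face crops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: e.g. 64x64 grayscale face crops and a 256-d latent code.
FACE_DIM, LATENT_DIM = 4096, 256

# Each "network" here is a single random linear map, just to show the data flow.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01    # shared encoder
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01  # decoder for person A
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01  # decoder for person B

def encode(face):
    """Compress a face into a shared latent code (pose, expression, lighting)."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from a latent code with a person-specific decoder."""
    return W_dec @ latent

# Training (not shown) would minimise reconstruction error so that
#   decode(encode(face_a), W_dec_a) ~ face_a, and likewise for person B.
# The swap trick: encode a frame of person A, but decode it with B's decoder,
# producing B's face in A's pose and expression.
frame_of_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)  # (4096,) — a face-sized image vector
```

Because both decoders read from the same latent space, whatever the encoder captures about pose and expression transfers automatically when the decoders are swapped.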

Another type of machine learning is often added to the mix: Generative Adversarial Networks (GANs), which detect and correct flaws in the Deepfake over multiple rounds, making it harder for Deepfake detectors to spot the forgery.

Several applications and software packages make generating Deepfakes easy even for beginners, such as DeepFace Lab, FaceApp (a photo-editing application with built-in AI techniques), and Face Swap, among others.

A large amount of deepfake software can be found on GitHub, the open-source software development community. Some of these apps are used purely for entertainment, which is why deepfake creation is not outright prohibited, while others are far more likely to be used maliciously.

How are Deepfakes used?

While Deepfake is used in interesting ways (such as in movies and games), it can be very dangerous.

In 2017, a Reddit user named “deepfakes” created a forum for pornographic videos featuring face-swapped actors. Since then, these types of videos have made the news several times, seriously damaging the reputations of celebrities and public figures. Pornography made up 96% of deepfake videos found online in 2019, according to a Deeptrace report.

Deepfake videos have also been used in politics. In 2018, for example, a Belgian political party released a video of Donald Trump delivering a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never made that speech, however.

Of course, not all Deepfake videos pose an existential threat to politics. Deepfakes are also used for humor and satire.

Are Deepfakes just videos?

Deepfakes aren’t limited to videos. Deepfake audio is a rapidly growing field with a huge number of applications.

Realistic audio deepfakes can be created with deep learning algorithms in just a few hours (or, in some cases, minutes). Once a person’s voice has been cloned into a voice model, that model can be made to “say” anything.

Audio Deepfakes can also be used in gaming, allowing in-game characters to “talk” in real time instead of relying on scripts recorded before play.

How to detect a Deepfake?

As Deepfakes become more common, society will likely have to adapt to spotting Deepfake videos.

Often, as is the case in cybersecurity, more advanced technology must emerge to detect Deepfakes and prevent them from spreading, which in turn can set off a vicious cycle and potentially create more damage.

There are a few indicators that help detect Deepfakes:

Current Deepfakes struggle to animate faces realistically; the result is a video in which the subject never blinks, or blinks too often or unnaturally.

You may also notice problems with skin or hair, or faces that appear blurrier than the environment in which they are positioned.

Lighting may also look unnatural. Deepfake algorithms often retain the lighting of the clips used as templates for the fake video, which can poorly match the lighting of the target video.
The audio may not match the person, especially if the video was faked but the original audio was not handled with as much care.
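The blink cue above can be turned into a crude automated check. The sketch below assumes a per-frame “eye openness” score is already available (in practice it would come from a facial-landmark detector, e.g. an eye-aspect ratio); the thresholds and blink-rate bounds are illustrative, not validated values.

```python
def count_blinks(eye_openness, closed_below=0.2):
    """Count blinks in a sequence of per-frame eye-openness scores in [0, 1].

    A blink is counted on each open-to-closed transition. The threshold
    is an illustrative placeholder.
    """
    blinks, was_closed = 0, False
    for score in eye_openness:
        closed = score < closed_below
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    return blinks

def blink_rate_suspicious(eye_openness, fps=30.0, lo=5.0, hi=30.0):
    """Flag a clip whose blinks-per-minute fall outside a typical human range.

    People blink roughly 15-20 times per minute; lo/hi are deliberately
    loose, illustrative cut-offs.
    """
    minutes = len(eye_openness) / fps / 60.0
    if minutes <= 0:
        return False
    rate = count_blinks(eye_openness) / minutes
    return rate < lo or rate > hi

# A fake 10-second clip at 30 fps in which the subject never blinks:
no_blink_clip = [0.9] * 300
print(blink_rate_suspicious(no_blink_clip))  # True — zero blinks is abnormal
```

A real detector would combine many such cues (blinking, lighting, texture, audio sync) rather than rely on any single heuristic.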

Deepfake: all you need to know about the new AI threat

A Deepfake is a spurious video produced or modified using artificial intelligence. Find out everything you need to know about this new phenomenon, and the many dangers it poses.

A Deepfake is a video or audio recording produced or altered thanks to artificial intelligence. The term designates not only the content thus generated, but also the technologies used for this purpose.

The word Deepfake is a contraction of “Deep Learning” and “Fake”, which could be translated as “deep fake”. Indeed, it is deceptive content made “deeply” credible thanks to artificial intelligence, and more precisely to Deep Learning.

However, the name is more directly linked to the username of a Reddit user who used Deep Learning to insert celebrity faces into pornographic films. This was the first Deepfake use case to achieve massive popularity.

How do Deepfakes work?


Deepfakes are based on GANs, or generative adversarial networks, a machine learning technique that pits two algorithms against each other.

From images provided upstream, the first algorithm, the “generator”, tries to create fake imitations that are as believable as possible. The second algorithm, the “discriminator”, tries to detect the fakes as reliably as possible.

Over time, both algorithms improve at their respective tasks. The first keeps producing fake videos until the second can no longer detect the deception, eventually yielding fakes realistic enough to fool even humans.

The more data provided to the algorithm at the start of the process, the more easily it will be able to learn how to create fakes. This is the reason why former US presidents and Hollywood stars are often used to create Deepfakes: many archive videos are open access and can be used to feed machine learning models.
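The two-algorithm competition described above can be sketched in a toy form. The example below replaces images with one-dimensional samples so the whole game fits in a few lines: the “real data” is a normal distribution, the generator learns only a shift parameter, and the discriminator is a tiny logistic classifier. All distributions, learning rates, and names are illustrative assumptions, not a production GAN.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = 0.0                        # generator parameter: fake = noise + theta
w, b = 0.0, 0.0                    # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g, batch = 0.1, 0.05, 64
history = []

for _ in range(1500):
    x_real = rng.normal(5.0, 1.0, batch)           # "real" data: N(5, 1)
    x_fake = rng.normal(0.0, 1.0, batch) + theta   # generator output
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    w += lr_d * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    b += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator step: shift theta so that fakes score higher under D.
    theta += lr_g * np.mean(1 - d_fake) * w
    history.append(theta)

# Once D can no longer separate real from fake, theta hovers near the real
# mean of 5 and the fakes are statistically indistinguishable from the data.
theta_avg = float(np.mean(history[-500:]))
print(round(theta_avg, 1))
```

The same dynamic, scaled up to convolutional networks and millions of images, is what lets a GAN polish a Deepfake until detectors (and humans) struggle to tell it apart from real footage.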

What are the risks associated with Deepfakes?

Thanks to AI, anyone can create a Deepfake quite easily, without special technical skills, by downloading simple software such as FakeApp. Anyone can therefore produce such content to serve their own interests by manipulating viewers’ opinions.

Thus, after fake news and its harmful impact on social networks, the spread of Deepfakes on the web represents a new technology-driven threat.

So far, mostly pranksters have used Deepfakes, to insert celebrities into pornographic films or to make politicians say nonsense. However, this technology could also be used for propaganda or even terrorism.

One can easily imagine a fake bomb-threat video spreading panic, or a fake sex tape aimed at ruining a marriage or destroying a reputation.

A fake video aimed at discrediting an election candidate could also have serious consequences. Manipulation, disinformation, humiliation, defamation: the dangers associated with Deepfakes are numerous.

Besides the risk of people being manipulated by Deepfake videos, these videos present another danger: if such deceptive content proliferates on the web, Internet users could permanently stop trusting what they see on video.

Some examples of well-known Deepfakes

The Deepfake phenomenon began in 2017 on Reddit, when a user amused himself by inserting celebrity faces into porn movies. As you can imagine, however, we will not be presenting extracts of these “creations” here.

For its part, the YouTube channel derpfakes has fun editing movie trailers with AI. See, for example, the trailer for Solo: A Star Wars Story in which the main actor is replaced by Harrison Ford.

Another example is the website “ThisPersonDoesNotExist.com”, which generates a fake face every two seconds. The results are stunningly realistic. The American giant NVIDIA has also experimented with an AI capable of generating fake faces.

More recently, a video posted on Instagram showed Mark Zuckerberg, the creator of Facebook, revealing “the whole truth” about his social network and his intention to control humanity. The problem? Zuckerberg never said any of it: the video was a Deepfake.

These few examples demonstrate the potential, but also the danger, of Deepfakes. There is no doubt that this technology is still in its infancy, and that more and more credible fake videos will emerge in the future.