A study published by the International Risk Governance Center at EPFL (IRGC @ EPFL) provides a comprehensive overview of the risks associated with deepfakes, a machine learning application, and the possible responses to them. The objective: to have all the cards in hand to prevent these dangers.

A few weeks ago, a hyperrealistic fake video of Donald Trump announcing the eradication of AIDS, produced as part of an awareness campaign led by Solidarité Sida, created a buzz. Behind this montage lies the deepfake, a technique that relies on machine learning to produce increasingly realistic synthetic content such as images, videos, audio files and text.

The use of these special effects by a charitable association underlines the new scale of the phenomenon. Although pornographic videos represent the majority of deepfake content, the technique is also used for fraud, for spreading fake news and for identity theft.

Evolving threats

Given the democratization of the phenomenon, the International Risk Governance Center at EPFL brought together around thirty professionals last September for an interdisciplinary workshop dedicated to this theme. It is now publishing a report that can serve as a basis for better understanding and managing the risks associated with deepfakes.

The first observation: the range of areas that can be affected by these risks is almost limitless. “Any organization or activity that relies on documentary evidence is potentially vulnerable,” says Aengus Collins, deputy director of the IRGC and author of the publication. Deepfakes can create informational uncertainty: a fraudulent money transfer, for instance, can be triggered by a synthesized voice imitating a company’s CEO. At the scale of a society, a proliferation of fabricated content could erode trust and truth, the very foundations of democratic debate.

The report therefore proposes a classification of risks according to their impact – damage to reputation, financial fraud and extortion, or manipulation of decision-making processes – and highlights the fact that these impacts can be felt at the individual, institutional or societal level.

But how can we judge whether a risk-governance response is necessary? The experts suggest looking at the severity and scale of the damage caused, as well as the ability of the “target” to cope with that damage. For example, a company with resources and processes in place will be better able to absorb the impact of a deepfake than an individual victim of harassment.

Interdependent solutions

Through 15 recommendations, the IRGC offers a variety of responses that could mitigate the risks associated with deepfakes. The report also calls for further research on all aspects of the issue. One of the avenues it mentions is to respond with technology. At EPFL, the Multimedia Signal Processing Group and the start-up Quantum Integrity are working on deepfake detection software, which could see the light of day in 2020. “There will always be vulnerabilities that can be exploited, but it is crucial to maintain and develop technological responses to limit malicious use,” adds Aengus Collins. The report also highlights the need to clarify the legal status of deepfakes, in order to establish how laws in areas such as defamation, harassment or copyright apply to such content. More generally, digital education also has a role to play. “While one of the goals of digital education is to learn not to take digital content at face value, we also need to empower people and focus on aspects such as corroboration and the verification of sources,” the researcher says. “Otherwise, we run the risk of accentuating the problems associated with the erosion of truth and trust.”

A wider horizon

While this report focuses on deepfakes, the research is part of a larger work stream on the risks associated with emerging technologies, which will continue into 2020. “We are in the process of deciding what our next priority will be,” says Aengus Collins. “There is no shortage of candidates. We live in an age where the relationship between technology, risk and public policy is more important than ever.”

A site to warn about the danger of deepfakes

[Image caption: one of these faces was randomly generated by artificial intelligence.]


FAKE NEWS – To warn about the danger of disinformation, two researchers at the University of Washington have devised a website that lets you try to guess which face is real and which is artificial.

By creating the site WhichFaceIsReal.com, Jevin West and Carl Bergstrom want to alert Internet users to the existence of artificial intelligences capable of creating fake people from real photos. While the results are impressive, these new AI systems can also be used to create “deepfakes”.

“In making this site, we wanted to educate the public and show people that such technology exists,” Jevin West told The Verge. “When a new technology comes along, the most dangerous time is when people don’t know it exists.”


A technological tool for creation or disinformation

The algorithms behind these fakes are GANs, generative adversarial networks. These computer programs can, in a few hundredths of a second, generate images using samples taken from the Internet. An example of a GAN is available at thispersondoesnotexist.com, a site that randomly generates faces.

Imagine a program that can draw millions of puzzle pieces from different boxes, then put them together to create a new picture. GANs do the same, digging through the Internet to create photos of people who never existed. Two different algorithms are at work in a GAN: one draws and reconstructs, while the other examines the fake face and judges whether it could pass for real. At the end of the process, the result is deceptively realistic.
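To make this two-network game concrete, here is a minimal GAN training sketch in Python using PyTorch. Everything here (layer sizes, image dimensions, hyperparameters) is an illustrative assumption; the face generators behind sites like thispersondoesnotexist.com are far larger models (NVIDIA’s StyleGAN family) trained on huge photo datasets.

```python
# Minimal GAN sketch (PyTorch). Illustrative only: real face generators
# are far larger and train on curated high-resolution photo datasets.
import torch
import torch.nn as nn

LATENT_DIM = 64      # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28    # toy flattened image size

# The "drawing" network: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# The "examining" network: scores how real an image looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),   # raw logit: higher means "looks real"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

The adversarial loop is the whole trick: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing images, which is exactly why the end results fool human eyes.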

These “deepfakes” are videos doctored using artificial intelligence. The principle is to make it look as if a person, usually a celebrity, appears in a sex tape or a scandalous video. For example, a “deepfake” could show Emmanuel Macron hitting a yellow vest protester and be viewed thousands of times before being debunked.

The danger of “deep fakes”, doctored videos that look more real than life

While the legislative arsenal against “fake news” is growing stronger, the techniques of those who spread it are developing even faster, in particular “deep fakes”, doctored videos.

President Obama who, facing the camera, calls his successor a “deepshit” (a crude insult); the American actress Scarlett Johansson who finds herself in pornographic films… These are the best-known examples of a relatively new phenomenon, “deep fakes”: videos so well crafted by artificial intelligence that they are taken for real.

Imagine the broadcast, on television or on social networks, of a press conference given by a famous politician. The facial expressions and the voice are unmistakably theirs. And yet, everything is fake.

A danger as elections approach. “You have an actor, whose image is manipulated by artificial intelligence techniques, who can, for example, answer journalists’ questions live,” explains Loïc Guézo, deputy secretary general of Clusif, the French information security club. “The big danger will come when these techniques are really used by foreign states with massive computing power. Then the generated images will be completely undetectable to the human eye,” the specialist warns on Europe 1.

As the European elections approach, this danger is real. To avoid falling into the trap of these new-generation “fake news”, it is better not to rush to believe what you see, and to wait for a possible denial from the politician concerned.

Deepfake: danger!


Artificial intelligence now makes it easy to edit videos to make them say whatever you want! Okapi explains everything about this disturbing technology.

Deepfake is a technique for combining and superimposing existing images and videos and manipulating them using artificial intelligence (AI). For the moment, specialists can still identify the fakes, but it will soon become more and more difficult to tell a fake video from a real one!

The AI analyzes all the data in a video and captures every movement and every intonation in a person’s voice. Thanks to machine learning, all these parameters can then be transferred to another video. And voilà, the video is doctored!
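For the curious, here is a minimal Python/PyTorch sketch of the face-swap architecture popularized by early deepfake tools: a shared encoder learns a common representation of faces, and one decoder per person learns to reconstruct that person. All names and sizes are illustrative assumptions; real tools add face detection, alignment, convolutional networks and blending on top of this idea.

```python
# Sketch of the classic deepfake face-swap architecture: one shared encoder
# learns a common face representation, and one decoder per person learns to
# reconstruct that person. Swapping = encode person A, decode as person B.
import torch
import torch.nn as nn

FACE_DIM = 64 * 64 * 3   # toy flattened face crop

def make_decoder() -> nn.Module:
    return nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                         nn.Linear(512, FACE_DIM), nn.Sigmoid())

encoder = nn.Sequential(nn.Linear(FACE_DIM, 512), nn.ReLU(),
                        nn.Linear(512, 128))
decoder_a = make_decoder()   # reconstructs person A
decoder_b = make_decoder()   # reconstructs person B

loss_fn = nn.MSELoss()
opt = torch.optim.Adam([*encoder.parameters(),
                        *decoder_a.parameters(),
                        *decoder_b.parameters()], lr=1e-4)

def train_step(faces_a: torch.Tensor, faces_b: torch.Tensor) -> None:
    # Each decoder learns to rebuild its own person from the shared code.
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

def swap_to_b(face_a: torch.Tensor) -> torch.Tensor:
    # The trick: person A's expression and pose, rendered with person B's face.
    return decoder_b(encoder(face_a))
```

Because the encoder is shared, it learns pose and expression common to both people, while each decoder learns one person’s identity; running frame after frame through swap_to_b is what turns a real video into a deepfake.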

Samsung’s research group has even managed to animate still photos, turning them into deepfakes. This is how the Mona Lisa, Albert Einstein and even Marilyn Monroe were brought back to life for the duration of a video!

Risks in the future?

A video will therefore soon no longer be enough to prove that an event or a speech really happened! Which could quickly cause problems…
In a world where everything can be called into question, how do we untangle the true from the false?

“Soon it will become impossible to tell when a video has been altered or fabricated.” – Max Planck Institute

Today, the Facebook group (Instagram, Facebook, WhatsApp) uses fact-checkers, such as AFP, to label certain content as “false” or misleading.

Tracking down a deepfake

So how do you spot that a video is fake?

You have to look at it very closely and try to spot small inconsistencies in skin tone: if it is too uniform, something is off. Second point: in many fake videos, people never blink! As a general rule, as with fake news, as soon as something seems strange to you, check it: look for the source and try to find the same information in another media outlet.
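As a rough illustration of the blink heuristic, here is a minimal Python sketch that counts blinks from per-frame eye landmarks. The landmark extraction is assumed to come from a face-landmark library such as dlib or MediaPipe; the six-point eye aspect ratio follows the usual convention (Soukupová and Čech, 2016), and the threshold values are illustrative, not tuned.

```python
# Blink test sketch: an eye-aspect-ratio (EAR) trace that never dips over a
# long clip (i.e. zero blinks) was a classic red flag for early deepfakes.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks around one eye, in the standard EAR
    order: outer corner, two top points, inner corner, two bottom points."""
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame: list[float], threshold: float = 0.21,
                 min_frames: int = 2) -> int:
    """Count blinks as runs of >= min_frames consecutive frames whose EAR
    drops below the threshold (eyes momentarily closed)."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)

# A real person blinks roughly every 2 to 10 seconds, so zero blinks in a
# minute-long talking-head video would be suspicious.
```

Bear in mind that this is a heuristic, not proof: newer generation methods have learned to blink, which is why source-checking remains the most reliable reflex.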

In the video below, discover how deepfakes are made!
