[SOLVED] : How Deepfake Technology is Used to Mimic Voices

When crooks use deepfake technology to mimic your voice

Fake videos that put words in the mouths of public figures are usually a laughing matter, especially on social networks. A well-known illustration is the doctored video of former President Barack Obama. But deepfakes also appeal to computer crooks, as a German company found out the hard way: a synthesized copy of its CEO's voice allowed clever con artists to embezzle 220,000 euros. And the technique seems to worry the giants of the web.


Deepfake, rendered in French as "hypertrucage", is a portmanteau of the English words "deep learning" and "fake". It entered geek vocabulary in late 2017, when celebrities appeared to star in pornographic clips. The real danger of deepfakes is less the manipulation of images, which has always existed, than the fact that this manipulation is now accessible to almost anyone. And that is what we are witnessing today.

It was the Wall Street Journal that revealed this particularly well-prepared case of "CEO fraud" (known in French as "fraude au président") carried out in Europe. To convince an employee to make a payment, the fraudsters used a synthesized version of the voice of the company's boss. A good imitation, it seems, since the transfer was immediately sent to an account outside the company. A second extortion attempt by the same group, however, was unsuccessful: once bitten, twice shy.

According to the Belgian cybercrime unit and the security software publisher Eset, there has been no similar case in Belgium to date, but an attempt was reported a year ago in France. In a BBC report, Symantec, another security company, says it has tracked at least three attacks against private companies that used the identity of the boss to obtain money transfers. What if this were just the start?

The technological version of an age-old fraud

For Jean-Michel Merliot, IT manager at Eset, this new scam, a variant of CEO fraud, is one of the first attempts to actually achieve its goal. According to him, it is ultimately just the technological version of an age-old fraud based on vocal imitation, until now reserved for specialists in the genre. "Technically it's pretty easy to do. Generating a voice that sounds like a person's is easy, but in some cases the quality leaves a lot to be desired."

From program analysis to data analysis

According to him, deepfakes raise the same problem as image editing. "There are statistical algorithms that let you see whether a photo has been modified. We can see that the structure of a color is too regular to be true." In the future, the same will apply to video and sound. But, the security expert explains, this is no longer a question of program analysis, the business of antivirus developers, but of data analysis. "There, we will have to get started," sums up Jean-Michel Merliot, for whom this new type of hacking could, for example, be used to bypass access-control procedures based on face or voice recognition. But the most serious threat could be the ability of hackers to destroy a person's reputation by putting unworthy words in their mouth.
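The article does not name a specific algorithm, but one classic statistical check of this kind is error-level analysis (ELA): re-save a JPEG at a known quality and compare it with the original, since locally edited regions tend to recompress differently from the rest of the image. A minimal sketch in Python, with placeholder file names:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Return an amplified difference map between an image and a re-saved copy."""
    original = Image.open(path).convert("RGB")
    # Recompress in memory at a known JPEG quality.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # Regions that recompress cleanly go dark; locally edited regions,
    # whose compression history differs, show up brighter.
    diff = ImageChops.difference(original, resaved)
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, int(px * 255.0 / max_diff)))

# "suspect_photo.jpg" is a hypothetical input file.
error_level_analysis("suspect_photo.jpg").save("ela_map.png")
```

This is only one heuristic among many; a uniform ELA map does not prove authenticity, and a bright region is a cue for closer inspection rather than proof of forgery.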

Zao and FaceApp: apps that no longer make you laugh

The deepfake machine already seems to be underway. On August 30, the Chinese application ZAO gave the technology a new dimension by offering, as a free download (on the Chinese App Store only), software that replaces any face in a photo with that of the user. A single digital photo is enough to map your own face onto that of a famous actor. Zao very quickly shot to the top of the download charts.

With a serious drawback: the app's terms of use specify that users grant ZAO the right to use their photos and synthesized videos. This sparked an outcry among users, dropping the app's rating to 1.9 out of 5 stars.

The problem is reminiscent of the one raised by the Russian application FaceApp at the end of July. That application uses filters to "age" an individual in a photo. It met with the same success among Internet users, and the same criticism of its weak protection of personal data.


Audio manipulation is catching on

Audio manipulation is far from anecdotal. Applications already master voice synthesis from a set of supplied audio extracts. The company Lyrebird, for example, lets you create a copy of a voice in a few minutes: just read about 30 excerpts aloud in English. A crook can then make the voice avatar read any text. The same caution applies to text generation: the developers of the GPT-2 text generator decided not to release the full version of their software, citing the risk of misuse.
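Lyrebird's service is not something you can script directly, but as an illustration of how low the barrier has become, an open-source voice-cloning model (Coqui TTS's XTTS, which the article does not mention) can synthesize speech in a target voice from a few seconds of reference audio. File names here are placeholders:

```python
# A hedged sketch using the open-source Coqui TTS library (pip install TTS).
from TTS.api import TTS

# Load a multilingual voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice found in a short reference clip and speak an arbitrary text.
tts.tts_to_file(
    text="Hello, this is a test of a cloned voice.",
    speaker_wav="reference_sample.wav",  # hypothetical: a few seconds of the target voice
    language="en",
    file_path="cloned_voice.wav",
)
```

A deliberately benign test sentence is used here; the point is simply that what once required a specialist impersonator is now a pip-installable library and a short audio sample.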

The GAFA take up the case

Even more than the German scam, the giants of the net fear the excesses of Chinese apps such as Zao.

Facebook and Microsoft have therefore decided to invest 10 million dollars in the "Deepfake Detection Challenge", reports the Financial Times. The project's mission is to stimulate the creation of tools capable of identifying manipulated videos. The two companies have teamed up with artificial intelligence researchers at Oxford, Berkeley and elsewhere to detect these so-called "deep" videos, which threaten to fuel increasingly realistic disinformation campaigns.

As a bonus, a new partnership in the land of artificial intelligence: a coalition that counts two other GAFA among its members, Google and Apple, along with Cornell Tech and MIT. The objective is to design systems capable of detecting the slight imperfections of a falsified image, to warn Internet users about messages that distort reality. The main fear is to see videos or sounds arriving on social networks capable of deceiving voters about politicians' messages. Mike Schroepfer, CTO at Facebook, writes on his blog of the fear that deepfake techniques will destroy the credibility of information posted online.
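The challenge's own tools are not described in the article, but the standard baseline behind such detectors is to fine-tune an ordinary image classifier to label video frames as real or manipulated. A minimal sketch, assuming a hypothetical frames/real and frames/fake directory layout:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Illustrative preprocessing; real systems also crop faces and sample frames.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder maps subfolder names to labels alphabetically: fake -> 0, real -> 1.
data = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the head with two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training pass; a real detector trains for many epochs
# on far larger, carefully balanced datasets.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The hard part in practice is not the training loop but generalization: a classifier tuned to one forgery method often fails on the next, which is exactly why the challenge seeks new ideas.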


Nothing will ever replace common sense

It is perhaps a whole new generation of scams that the online world is now discovering: a world where hijacking money could become less serious than hijacking a reputation. It is difficult, after being targeted by a fake video, to prove your good faith, especially during an election campaign. "Lie, lie, something will always stick," said Voltaire. And against CEO fraud, there is only one effective weapon, concludes Jean-Michel Merliot: "the technique of common sense. You have to be able to say no to your boss, especially if he asks you to pay a large sum into an account abroad."

Audio deepfakes: the new voice scams


Imitating, almost to perfection, the voice of an executive in order to defraud a company or its employees: it is now possible thanks to artificial intelligence. Are we already in the era of audio deepfakes?

We have already told you about deepfakes: montages made using artificial intelligence. They make it possible, for example, to rework existing videos to swap in a different person's face.

And the result is sometimes very convincing. Witness this video that puts Jim Carrey's face in place of Jack Nicholson's in the film The Shining.

But these special effects can also reproduce a voice. That is what Hugh Thompson, chief technology officer at Symantec, a cybersecurity firm, explained to the BBC: "The artificial intelligence (AI) system can be trained using the large amount of audio recordings available on an executive. That is to say corporate videos, phone calls, appearances in the media and on social networks, or even the conferences in which he has taken part."

And when some syllables or words are not reproduced well enough to sound real, the fraudsters simply add background noise to cover them up.
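As a toy illustration of that masking trick, here is what mixing low-level noise into a synthesized waveform looks like; file names and the noise level are made up for the example:

```python
import numpy as np
from scipy.io import wavfile

# Load a (hypothetical) synthesized voice clip as a NumPy array.
rate, voice = wavfile.read("cloned_voice.wav")
voice = voice.astype(np.float32)

# Gaussian noise stands in for office chatter or street sounds; a real
# attacker would mix in an actual ambient recording instead.
noise = np.random.normal(0.0, voice.std() * 0.05, size=voice.shape).astype(np.float32)

# Mix, clip back to 16-bit range, and write the masked result.
mixed = np.clip(voice + noise, -32768, 32767).astype(np.int16)
wavfile.write("masked_voice.wav", rate, mixed)
```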

Multi-million dollar scams

According to Symantec, there have already been three successful scams using deepfake audio, reports the Axios website. In each case, a criminal group reproduced the voice of a business executive to call employees and ask them to make an emergency money transfer. Each time, several million dollars were stolen. Even though this audio deepfake technology requires a lot of money and time, it is very realistic, and therefore effective against employees who do not even know it exists.

As Henry Ajder of Deeptrace, a start-up specializing in the detection of deepfakes, told Axios, "the corporate world is not ready for a world where you can no longer trust the voice or the video of a colleague."

More than ever, the solution lies in the real world. Unless Bryan, your cool but awkward colleague, is actually a puppet harboring a reptilian?