[SOLVED] : Deepfake – How can users protect themselves? Part one

Today, hackers don’t just photoshop the faces of politicians onto the bodies of adult film stars. Drawing on technology similar to that used in Hollywood movies, deepfake videos prey on their victims by making them appear to say things that can damage their reputation or even be used to blackmail them.

The emerging world of deepfake

Deepfake is a portmanteau of “deep learning” and “fake”. Deep learning is an advanced artificial intelligence (AI) method that uses multiple layers of machine learning algorithms to progressively extract higher-level features from raw data. It can learn from unstructured data – such as the human face. For example, an AI system can collect data about your physical movements.

This data can then be processed by a GAN (Generative Adversarial Network), another type of specialized machine learning system, to create a deepfake video. Two neural networks are pitted against each other: one learns the characteristics of a training set (photos of faces, for example) and generates new data with the same characteristics (new “photos”), while the other tries to tell the generated data from the real thing.

Because such networks continuously test the generated images against the training set, the fake images become more and more convincing – which makes deepfakes an even more serious threat. GANs can also alter data other than photos and videos; in fact, the same machine learning and synthesis techniques can be used to forge voices.
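To make the adversarial setup concrete, here is a deliberately tiny sketch of GAN training on one-dimensional data – not a deepfake system, just the two-network tug-of-war in miniature. The model shapes, learning rate and target distribution are all illustrative assumptions:

```python
import numpy as np

# Toy 1D "GAN": the generator learns to imitate samples from N(4, 1).
# Everything here (linear models, hyperparameters) is an illustrative
# assumption, not taken from any real deepfake system.

rng = np.random.default_rng(0)

# Generator: x_fake = a * z + b, with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w * x + c), probability that x is "real"
w, c = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr = 0.01
for step in range(5000):
    # --- Discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = rng.normal(4.0, 1.0, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of the binary cross-entropy loss, derived by hand
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) toward 1 ---
    z = rng.normal(0.0, 1.0, size=32)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

print(f"generator now samples roughly around {b:.2f}")
```

In a real deepfake pipeline both players are deep convolutional networks trained on images of faces, but the alternating update pattern – discriminator step, then generator step – is the same.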

Examples of deepfake

There are many known examples of deepfakes. Take for example the video posted by actor Jordan Peele. In this video, he used an authentic recording of Barack Obama and merged it with his own impersonation of the president to warn against deepfake videos. He then revealed what the two parts of the merged video looked like when separated. His advice? We have to question everything we see.

The video of Facebook CEO Mark Zuckerberg speaking about how Facebook “controls the future” through stolen user data has also caused a stir, especially on Instagram. The original video was taken from a speech he gave on Russian interference in the US election – 21 seconds of the speech was enough to synthesize the new video. However, the vocal imitation was not as good as Jordan Peele’s, which exposed the deception.

But even lower-quality fake videos can have a remarkable impact. Nancy Pelosi’s “drunk” video garnered millions of views on YouTube – the original video was simply slowed down to make it look like she was slurring her words. And many female stars have already found themselves “featured” in revenge porn images or videos, their faces simply inserted into pornographic content.

Threats of deepfake – fraud and blackmail

Deepfake videos have been used for political purposes in the past, as well as for personal revenge. However, today they are increasingly used in connection with attempted blackmail and fraud.

In one case, the CEO of a British energy supplier was defrauded of $243,000 after a voice deepfake of the head of his parent company asked him to make an emergency funds transfer. The deepfake was so convincing that he didn’t think to verify; the funds were paid not to the head office but to a third-party bank account. The CEO only became suspicious when his “boss” asked him to make another transfer. This time he caught on – but it was too late to recover the funds already transferred.

France was recently the victim of a fraud based not on deepfake technology but on identity theft, coupled with a meticulous replica of the office and furniture of Foreign Minister Jean-Yves Le Drian. The aim was to defraud top executives of several million euros. Fraudster Gilbert Chikli is accused of impersonating the minister in order to ask wealthy individuals and business leaders for money to free French hostages in Syria; he is currently on trial.

Deepfake creators can also blackmail company directors by threatening to release a damaging deepfake video unless they pay up. They can even gain access to your network by synthesizing a video call from your IT manager, tricking employees into handing over their passwords and privileges, which then allows hackers to wreak havoc on your sensitive databases.

Fake pornographic videos have already been used to blackmail female reporters and journalists, such as Rana Ayyub in India, who has exposed abuses of power. As the technology becomes more affordable, it is to be expected that deepfakes will become an increasingly popular method of blackmail and fraud.

How can we protect ourselves from deepfake?

Legislation is already starting to address the threats posed by deepfake videos. For example, in the state of California, two bills passed last year made certain aspects of deepfakes illegal – AB-602 prohibits the use of human image synthesis to make pornography without the consent of the person represented, and AB-730 prohibits the manipulation of images of political candidates within 60 days of an election.

But does it go far enough? Fortunately, cybersecurity companies keep coming up with more and more effective detection algorithms. These analyze the video image and detect tiny distortions created during the “tampering” process. For example, current deepfake synthesizers create a 2D face, then distort it to fit the 3D perspective of the video; the direction in which the nose points is a telling clue.

Deepfake videos are still at a stage where it’s easy to spot them yourself. A deepfake video may have the following characteristics:

  • jerky movement
  • variations in light from one shot to another
  • variations in skin color
  • strange blinks or no blinks at all
  • lips poorly synchronized with speech
  • digital artifacts in the picture
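Several of the telltale signs above can be checked programmatically. As an illustration only (not a production detector), the following sketch counts blinks in a per-frame “eye openness” signal – such as the eye aspect ratio produced by a facial landmark detector – and flags clips with an implausibly low blink rate. The threshold values are invented for the example:

```python
# Illustrative blink-rate check; thresholds below are assumptions,
# not values from any real detection product.

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count transitions from open eyes to closed eyes."""
    blinks = 0
    was_closed = False
    for value in eye_openness:
        is_closed = value < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a human."""
    minutes = len(eye_openness) / (fps * 60)
    rate = count_blinks(eye_openness) / minutes if minutes else 0
    return rate < min_blinks_per_minute

# Synthetic 1-minute signal: eyes open (~0.3) with only two brief blinks.
signal = [0.3] * 1800
for start in (400, 1200):
    for i in range(start, start + 5):
        signal[i] = 0.05

print(count_blinks(signal), looks_suspicious(signal))
```

Real detectors combine many such signals (lighting, skin tone, lip sync) and learn the thresholds from data rather than hard-coding them.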

But as deepfakes get better, your eyes will be less useful to you and you will only be able to rely on a good cybersecurity program.

State-of-the-art anti-tampering technology

Some emerging technologies are now helping videographers authenticate their videos. A cryptographic algorithm can be used to insert hashes at defined intervals of the video; if the video is tampered with, the hashes change. AI and blockchain can register a tamper-proof digital fingerprint for videos. It’s like putting a watermark on documents; the difficulty with video is that the hashes must survive when the video is compressed for use with different codecs.
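A minimal sketch of the segment-hashing idea, assuming we hash the raw byte stream in fixed-size chunks with SHA-256 (the chunk size and the choice of algorithm are illustrative):

```python
import hashlib

# Hash a byte stream in fixed-size segments so that any later tampering
# changes at least one segment hash. Purely a sketch of the idea.

def segment_hashes(data: bytes, chunk_size: int = 1 << 16):
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

original = b"\x00\x01" * 100_000          # stand-in for raw video bytes
reference = segment_hashes(original)

tampered = bytearray(original)
tampered[70_000] ^= 0xFF                  # flip a single byte "in the video"
changed = [i for i, (r, t) in
           enumerate(zip(reference, segment_hashes(bytes(tampered))))
           if r != t]
print(changed)  # exactly one segment no longer matches
```

As the paragraph notes, hashes computed over raw bytes break as soon as the video is re-encoded, which is why production schemes must anchor the fingerprint at a level that survives compression across codecs.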

Another way to fend off deepfake attempts is to use a program that inserts specially designed digital “artifacts” into videos to disguise the pixel structures that face detection software uses. These artifacts then slow down the deepfake algorithms and give poor quality results – so the deepfake is likely to be less convincing.

Having good security procedures in place is the best protection

But technology isn’t the only way to protect against deepfake videos. Having good basic security procedures in place is very effective in countering deepfakes.

For example, incorporating automatic checks into all fund release procedures would have stopped many deepfake and similar frauds. You can also:

  • Make sure your employees and family understand how deepfake works and the challenges it can pose.
  • Train yourself and others to detect deepfakes.
  • Make sure you are media literate and use quality news sources.
  • Have good basic protocols – “trust, but verify”. Adopting a skeptical attitude towards voicemail messages and videos doesn’t mean you’ll never be fooled, but it can help you avoid many pitfalls.
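The automatic fund-release checks mentioned earlier can be as simple as a rule that large transfers require confirmation over a second, independent channel. This sketch is purely illustrative – the threshold and the list of approved channels are invented for the example:

```python
# "Trust, but verify" sketch: large transfers need out-of-band confirmation.
# Policy values below are assumptions; a real process would be defined
# by your finance and security teams.

APPROVED_CHANNELS = {"in_person", "known_phone_number"}  # never voicemail/email

def release_funds(amount, requested_via, confirmed_via=None, threshold=10_000):
    """Hold large transfers unless confirmed over an independent channel."""
    if amount < threshold:
        return "released"
    if confirmed_via in APPROVED_CHANNELS and confirmed_via != requested_via:
        return "released"
    return "held: requires out-of-band confirmation"

# A voicemail request alone (as in the CEO fraud above) is held...
print(release_funds(243_000, requested_via="voicemail"))
# ...until confirmed via a known phone number.
print(release_funds(243_000, requested_via="voicemail",
                    confirmed_via="known_phone_number"))
```

Such a rule would have stopped the $243,000 voice-deepfake fraud described above: the fake voicemail request alone could never have triggered the transfer.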

Remember that if hackers start using deepfake to try to gain access to your home and work networks, basic cybersecurity best practices will play a critical role in reducing the risk:

  • Performing regular backups protects your data against ransomware and gives you the ability to restore damaged data.
  • Use different strong passwords for your different accounts so that if one of your networks or services is hacked, the others are not compromised. If someone hacks your Facebook account, you don’t want them to be able to access your other accounts as well.
  • Use a good security solution like Kaspersky Total Security to protect your home network, laptop and smartphone from cyber threats. This solution includes antivirus software, a VPN to prevent your Wi-Fi connections from being hacked, and protection for your webcams.

What is the future of deepfake?

Deepfake technology continues to evolve. Even two years ago, it was very easy to spot deepfake videos by their awkward movements and lack of blinking. But the latest generation of fake videos has evolved and adapted.

It is estimated that there are currently over 15,000 deepfake videos in circulation. While some are just humorous, others try to manipulate our opinions. But now that it only takes a day or two to make a new deepfake, that number could increase very quickly.

DEEPFAKE – HOW TO DEAL WITH INCREASINGLY CREDIBLE FALSE INFORMATION

We saw in the first part of this article the risks that deepfakes present for companies. In this part, we will discuss strategies to guard against them and concrete actions you can take now to reduce the risks posed by deepfakes.

DIFFERENT STRATEGIES TO PREVENT DEEPFAKES

In addition to the legal framework, public and private organizations are working to offer solutions to detect and prevent the malicious distribution of deepfakes. Four prevention strategies can be distinguished.

1/ The detection of imperfections

Detecting deepfakes by their imperfections is one of the main methods available. Some irregularities remain present in generated content, such as a lack of eye blinking, poor synchronization between lips and voice, distortions of the face and accessories (the temples of glasses), or inconsistencies in context (weather, location).

Deepfakes, however, are built to learn from their mistakes and generate content that is closer and closer to the original, making imperfections less noticeable. Tools using this detection strategy can be effective, but they need to be constantly improved to detect increasingly minor anomalies.

This protection category includes Assembler, a tool for journalists developed by Jigsaw (a division of Alphabet, Google’s parent company). It makes it possible to verify the authenticity of content by analyzing it with five detectors, including pattern and color anomalies, copy-and-pasted areas, and known characteristics of deepfake algorithms.
