Deepfakes: A real and deeply alarming threat

Introduction: Seeing is no longer believing – what is a deepfake?

A ‘deepfake’ is a piece of media depicting something that never happened, was never said, or does not exist.  Deepfakes commonly take the form of photos, audio, or video created with the intent to fool people into believing that what the media represents is real, was really said, or really happened.  Historically, the effort and technical skill required to create such media was out of reach of most people and reserved for professionals.  A great example is the ‘I gotta pee’ scene from the movie Forrest Gump, in which filmmakers spent considerable time manipulating John F. Kennedy’s mouth movements for his meeting with Forrest.  In more recent times, however, advances in technology, machine learning, and AI have allowed even modestly tech-savvy people to create this type of media.

Deepfakes can easily be confused with ‘shallowfakes’, sometimes called ‘cheapfakes’, which are similar but far less advanced.  A shallowfake is a video that has been deliberately edited with malicious intent, for example to remove or alter the context of what was said, or to slow the footage so the speaker slurs their words and appears intoxicated.

How are deepfakes created?

First and foremost, it’s important to point out that, as a society, we have already accepted elements of deepfake technology.  Popular apps such as Snapchat and Instagram, and of course Photoshop, let us enhance our appearance with various filters and are widely accepted in this space.  That’s not to say they create deepfakes, just that we’re already accustomed to manipulating media, albeit not for nefarious purposes.

Deepfakes take our acceptance of such enhancement to another level by adding the explicit intent to fool or deceive as the end goal, as opposed to simply and innocently enhancing an image.

At Privacy Rightfully we always put technical jargon in the backseat of the bus, so the following description is in layman’s terms.  If you are more technically minded or interested, look into Generative Adversarial Networks (GANs), the technical term for the process.  The first step in creating a deepfake is to train the GAN by feeding it as much footage and as many images of the subject as are available, so it can learn what the subject looks like from various angles, in different lighting, environments, etc.  The GAN will assess everything, including facial expressions, eye movements, and even subconscious twitches.  Once the GAN is sufficiently trained on the subject, a user can instruct it to apply that face to the body of another.  For example, if Peter takes a bribe and you want it to look like Kurt did it (a minimal code sketch follows the list below):

  • You would input all of your images and videos of Kurt into a GAN
  • As it does its pattern recognition and learning, you film or find images or video of Peter taking a bribe. 
  • You will then give that media to the GAN and instruct it to apply everything it has learnt about Kurt’s face, superimposing it over Peter’s
  • The deepfakes are assessed by another part of the GAN trained to detect this type of forgery, and the process continues until the detector can no longer tell the forgery apart from the real thing.  At this point the deepfake is considered good enough to deceive the majority of people.
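For the technically curious, below is a minimal sketch of that adversarial loop in Python using PyTorch.  It is strictly a toy under our own assumptions: the ‘media’ here is two-dimensional random numbers rather than faces, and every variable name is our invention, but the forger-versus-detector dynamic is the one described in the list above.

```python
# A toy GAN: generator G learns to produce samples that discriminator D
# cannot tell apart from "real" data. The data is just 2-D numbers, not
# faces, but the adversarial training loop is the same idea.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: turns random noise into fake samples
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake)
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real footage
    fake = G(torch.randn(64, latent_dim))          # the "forgeries"

    # 1) Train the detector half: learn to separate real from fake
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_D.step()

    # 2) Train the forger half: make fakes the detector scores as "real"
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()
```

Real face-swapping pipelines wrap this same loop around convolutional networks and enormous image datasets; the principle of training until the detector is fooled is unchanged.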

GAN models are great with still images but struggle with video footage, where the generated face must stay aligned and consistent from one frame to the next.  Today, a broader mix of machine learning and AI techniques is the prominent method used to create deepfake video and audio media.

Let us look at some examples

Deepfakes first became topical in 2017 when a Reddit user superimposed the faces of popular celebrities onto pornographic actresses.  Research by Deeptrace found that as of September 2019, 96% of deepfake videos were pornographic in nature, so deepfakes are still predominantly found in that corner of the internet.  However, here are some examples of the other 4%:

  • A deepfake video of the Belgian Prime Minister linking COVID-19 to climate change
  • Canadian AI company Dessa created deepfake audio of podcaster Joe Rogan saying things he never said
  • YouTube channel Vocal Synthesis features celebrities and politicians saying things they never said, including former presidents reading lyrics to rap songs
  • A video of a Malaysian politician engaging in same-sex acts (illegal and punishable in Malaysia) was released, threatening to ruin his career and potentially see him serve time in prison.  The politician in question, the Prime Minister, and supporters all argued it was a deepfake
  • A UK energy company was defrauded of $240,000 on the instruction of its parent company’s ‘CEO’, whose voice and accent were synthesised by deepfake voice impersonation
  • Deepfake videos of Boris Johnson endorsing his opponent Jeremy Corbyn for Prime Minister, and of Jeremy Corbyn doing the same.

As you can see from these examples, deepfakes can be employed in a wide variety of ways.  Next, let’s look at how deepfakes can impact society as a whole, and then how they can impact you personally.

The broad impacts

As it stands, deepfakes haven’t really been used to cause much noteworthy or high-impact havoc.  Making politicians or celebrities say funny things, putting celebrities’ faces on the bodies of those starring in adult videos, and showing what you may look like at 70 is somewhat innocent and typical of online culture.  However, the technology is still quite new, and its potential can grow to include some of the following serious and risky scenarios:

  • Political activists and nation-states can create realistic forgeries to engage in information warfare, slander, defamation, reputational damage, and spreading of propaganda
  • Can be used to enhance cyberattacks by posing as an authority with credentials, for example bypassing security systems that rely on face or voice recognition
  • Can cause national security threats and panic.  For example, a deepfake of the President announcing an emergency alert or impending attack
  • Can be used to serve up ever more disinformation, fake news, and propaganda through social media channels, leading to shifts in public opinion
  • Lead to a decline in trust of news sources, online content, and democratic institutions
  • Could be used to steal sensitive company information, such as intellectual property, by posing as a co-worker or superior on a video conference call
  • Potential to cause stock market fluctuations by posing as a prominent CEO making a distressing announcement
  • Potential to provoke religious tensions
  • Disrupt legitimate democratic processes by smearing candidates with deepfake media of things they never did or said.  For example, a video of a candidate accepting a bribe a few days out from an election. 

The local / personal impacts

  • The overall threat of blackmail, with bad actors using deepfakes to create media that would undermine your career, relationships, and/or reputation
  • DeepNude is a piece of software available for purchase online which has been trained to remove clothing from pictures of women and replace it with realistic naked body parts.  This could be used for blackmail or revenge porn, with threats to release these fake nudes of you, or of someone close to you, online
  • It could be used against you in legal proceedings, with deepfake media falsely showing you engaging in abuse, harassment, or some type of criminal behaviour, leading to loss of a job, custody of children, jail time, etc
  • The long-term psychological and mental health impacts of being a victim of deepfake media
  • Financial loss from having to pay a ransom or from being deceived to transfer money by deepfake audio (voice of your boss / relative)

The threat of deepfake media can impact everything from political stability, international relations, and stock markets right through to your own career, reputation and mental wellbeing.  Let’s turn our attention from describing the problem to assessing potential solutions. 

Identifying a deepfake and possible solutions

We’re always told to research and investigate any claim or theory before accepting it as fact.  Today we are dealing with ‘fake news’, which is related to, and starting to be supported by, deepfake media, the results of which are polarising the political spectrum.  Confirmation bias strikes even the most self-aware, engaged, and intelligent folks: everyone, to varying degrees, seeks out information that supports what they want to believe and ignores information that challenges that particular view.  Deepfakes play into this bias; we’re so poor at fact-checking, or even stopping to think critically, that our first response tends to be “I can’t believe they said that” rather than “that doesn’t sound like something they’d say, let me look into this”.

The trouble is that we are living in times when people do and say unexpected things, things one would be more likely to associate with a deepfake.  A great example was when Elon Musk famously smoked a joint on the Joe Rogan Experience podcast.  The incident in isolation caused shock, the Tesla share price dropped, and people were genuinely surprised by the footage.  Stand-up comedians, similarly, need to deliver content that is shocking and borders on the limits of good taste and expectation just to win over an audience.  Building on this is the typical monetisation model of online content, built on clicks and views, which brought about the term ‘clickbait’.

Clickbait is a headline or picture designed to sensationalise or misrepresent the content being linked to so as to increase its consumption.  There are businesses, careers, and economies reliant on either fooling people or being intentionally controversial.  Unfortunately, this means deepfakes are born into an environment where the consumption of controversy and deception has already been normalised.  A deepfake of a prominent CEO may now be accepted as fact at first simply because we expect controversy, making deepfakes all the harder to spot.

Early deepfakes, and poor-quality or rushed ones, are easy to spot.  The best indicator used to be a lack of blinking in video media, as most images used for training show the subject with their eyes open.  But like most obvious pitfalls, the technology caught up, and we now have deepfake videos with people blinking.  Other clues, though not foolproof, that you’re viewing a deepfake include hair inconsistencies (especially strands on the fringe), poorly rendered jewellery, and inconsistent lighting or reflections in the eyes.  These will probably go the way of the blinking clue, however, and in future even digital forensics may not be able to identify a deepfake.  This is down to the nature of AI: if you give deepfake AI all the methodologies used to detect deepfake media, it will learn to generate media without those red flags.  A toy example of one such detection heuristic follows.
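To make one of those clues concrete, here is a toy Python sketch of the old blink heuristic using the ‘eye aspect ratio’ (EAR) computed from six eye landmarks.  We assume the landmarks come from some external facial landmark detector (dlib’s 68-point model is a common choice); the coordinates and the 0.2 threshold below are illustrative assumptions, not production values.

```python
# Blink detection via the eye aspect ratio (EAR): the ratio of the eye's
# vertical openings to its width. It collapses towards zero as the eye
# closes, so a long clip whose EAR never dips suggests no blinking at all.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered corner-to-corner."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return vertical / horizontal

# Made-up landmarks for two frames: one open eye, one mid-blink
open_eye = np.array([[0, 2], [2, 3], [4, 3], [6, 2], [4, 1], [2, 1]], dtype=float)
closed_eye = np.array([[0, 2], [2, 2.1], [4, 2.1], [6, 2], [4, 1.9], [2, 1.9]], dtype=float)

for frame in (open_eye, closed_eye):
    ear = eye_aspect_ratio(frame)
    print(f"EAR={ear:.2f} ->", "blink" if ear < 0.2 else "eye open")
```

Run per frame over a clip, a total absence of EAR dips across tens of seconds was once a usable red flag; as noted above, current deepfakes blink convincingly, so treat this only as a historical example.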

Another issue with deepfakes becoming increasingly difficult to identify is their potential to be used in reverse, as a defence.  So far we have described deepfakes as skewing an original, truthful subject into a representation that does not reflect the truth.  In reverse, however, they offer plausible deniability for punishable or criminal behaviour: simply claim that a piece of legitimate media in evidence is a deepfake.  This has already started, with Prince Andrew casting doubt on the authenticity of the photograph of him with Virginia Giuffre, and Donald Trump suggesting the recording of him boasting about grabbing women’s genitals was not real.

Deepfakes can also be used for text-based propaganda and the spreading of misinformation, shifting public opinion via the bandwagon effect.  By harnessing existing chatbot technology with deepfake profiles of people who don’t exist, propaganda can be amplified through social media engagement (content creation, sharing, and comments).  The impact here is the bandwagon effect: the tendency of individuals to do or believe something because they see others doing or believing it.  This has implications for public opinion and elections (it is commonly cited in allegations of foreign interference in the 2016 US election).

Our thinking is that it will be too difficult for the average person to identify a deepfake so perhaps that isn’t the solution to pursue.  For now, it’s probably better to follow some of these precautions:

  • Don’t get your information from, blindly believe, or let yourself become outraged by content on social media platforms.  Get your news and current affairs from established and reputable news sources.  Be aware that even they may be fooled by deepfake media and report it as fact; however, they are more likely to issue a correction when they find out
  • Follow the ‘trust but verify’ principle when exposed to suspect content.  For example, if you’re being asked to transfer money, protect against deepfake audio by asking a very specific question that only the real person would know the answer to
  • Limit your exposure by reducing the number of images and videos of yourself you share online.  Celebrities have hours of video and thousands of images freely available online to feed deepfake AI; the average person has far less.  However, this may not be the case in 30 years’ time, depending on how much media you share online between now and then
  • Ensure your social media accounts operate at the highest privacy settings to limit how much of your media is visible to the public.  Have a high standard for accepting friend/follower requests – only accept people you actually know but be aware that even their profiles can be compromised or imitated. 
  • The usual cybersecurity best practice will also serve as a risk-reduction measure.  If you store a large volume of images and/or video in the cloud, for example, you definitely want to ensure access is protected by a strong password that is not used to secure any other account, and with 2FA enabled.  You will also want to be extra wary of phishing links from a bad actor posing as your cloud storage provider, and ensure your devices are free from malware and keyloggers by using reputable antivirus software.  This assumes, of course, that you’ve chosen a reputable cloud storage provider without a history of security issues or compromises to begin with.
  • Avoid using apps that store your facial likeness
  • Conduct regular searches for yourself online, reverse-search images of interest, and report anything out of the ordinary to your local authorities if the website in question refuses to take it down.

Conclusion

Some of the strongest alarmists warn that because the internet is interwoven into so many aspects of our lives, deepfakes pose a very serious threat.  If we are unable to trust what we interact with online, we may come to distrust everything we see and hear.  If society cannot distinguish between what is real and what isn’t, where does that leave our shared reality?  We at Privacy Rightfully don’t subscribe to such an apocalyptic viewpoint, that we will one day reach an end of truth.  First, whilst deepfakes are a serious threat, they exist and do most of their damage almost exclusively online.  Second, and again generally speaking, they are a type of free media which typically relies on social media to spread, attributes they share with sousveillance (except that sousveillance reflects subjects in their true state, of course).  Knowing the usual habitat of deepfake media helps you stay sceptical when you come across a potential deepfake in that habitat, at least when it doesn’t impact you personally.

The equation changes though when it comes to deepfakes being used by bad actors to cause you financial, career, or reputational damage – it’s not so easy to ignore then.  So who should be tasked to find a solution to the deepfake threat?

Government

Most people may be thinking it’s time our lawmakers and politicians acted on this new and growing threat.  The first hurdle, of course, is the complexity of legislating against, and policing, the creation and spread of deepfake media across jurisdictions.  The second hurdle is the bureaucratic nature of government: by the time they agree there is an issue and settle on a legislated solution, the threat has likely outgrown that solution.  Government is always on the back foot with technology; we’ve seen this with its slow and substandard attempts to address the privacy and data collection practices of the social media giants.  For balance, some politicians are actively working on the threat of deepfakes, with Marco Rubio and Mark Warner prominent examples.

Social media giants

Well, what about those social media giants?  For deepfake media to have an impact on elections, populations, stock markets, etc., it needs to go viral in order to be consumed by a large portion of the population.  The issue here comes back to tasking private companies with solving social issues when they have historically struggled with accusations of political bias and privacy failings.  For balance once again, it must be said that many social media giants are actively taking steps to identify deepfake media more quickly, limit its spread, and delete it as soon as possible.

Technology developers

Finally, we can point the finger at technology and say that there should be a technological solution to a technological problem.  The same technology used to create deepfakes can be used to identify them, though this is likely to be an ongoing game of cat and mouse.  Alternative technology is more promising, however, with blockchain touted as a potential candidate to tackle this issue.  The idea is to cryptographically sign media at its source of origin.  Once signed and published to a blockchain, anyone engaging with the media can audit its authenticity.  This is thanks to blockchain’s immutability: once a cryptographic signature is assigned to a piece of media, the media cannot be modified while still matching that signature.  Fundamentally, it would give everyone a way to audit the authenticity of all media they engage with, and social media companies could use it to approve or reject every piece of media posted to their platforms.  A minimal sketch of the signing step follows.
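To make the signing idea concrete, here is a minimal Python sketch using an Ed25519 keypair from the widely used cryptography package.  It only shows sign-then-verify; the blockchain part, publishing the fingerprint and signature immutably, is deliberately omitted, and the media bytes and key handling are placeholder assumptions rather than a production design.

```python
# Sign a media file's fingerprint at the source, then verify it downstream.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The publisher (or capture device) holds the private key;
# the matching public key is shared with everyone.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media = b"...raw bytes of the original photo or video..."  # placeholder content
fingerprint = hashlib.sha256(media).digest()  # compact digest of the media
signature = private_key.sign(fingerprint)     # this pair is what goes on-chain

# Any viewer can now check the copy they downloaded against the record:
downloaded = b"...raw bytes of the original photo or video..."
try:
    public_key.verify(signature, hashlib.sha256(downloaded).digest())
    print("Authentic: matches the publisher's signed fingerprint")
except InvalidSignature:
    print("Tampered with, or not from this publisher")
```

Flip a single byte of the downloaded copy and verification fails, which is the property that would let platforms automatically reject doctored copies of signed media.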

Us

But really, it comes down to us, all of us, to be informed, to learn and develop good cybersecurity habits, and to take ultimate responsibility for our own privacy and security risks.  There is plenty more to say about how governments, social media companies, and technology developers could address the growing threat of deepfakes, but since that is predominantly outside our individual control (aside from collective demands for change to increase the urgency for a workable solution), we haven’t devoted much space to it in this piece.  As always, we take a pragmatic approach and focus our work on what the average person can do to reduce the risk of the various threats we write about.  That’s not to absolve the big three parties listed above of responsibility; it’s about being practical rather than just hopeful.

We started Privacy Rightfully because what we create is not taught in schools, nor is it passed on as an essential life skill by our parents.  The general population has been tasked with looking after their online privacy and security without that ever really being explicitly communicated or understood.

Schools and parents teach us about stranger danger and how to cross the street safely without getting hit by a car.  Deepfake media is just another car on the increasingly busy road of online threats, but many people assume government and the tech giants will be there to tell us to ‘stand back, look left, look right, look left again, and then cross the road’.  The unfortunate reality is that we are standing at the side of a busy road by ourselves, without knowing what to do next.  This article aims to increase public awareness and, ideally, stand in the place your parents stood when they first taught you to cross the road safely.

Further Reading

The full report by Deeptrace, The State of Deepfakes: Landscape, Threats, and Impact, is a worthy read full of examples.

