
Investigating Deepfake Images, Video and Audio


Cybertrace Team

May 22, 2023 · 9 min read


Developers continue to release Artificial Intelligence (AI) products at a frightening pace, making it difficult for legislators and regulators to keep up. While many of these applications are undoubtedly useful, others pose a serious risk of harm in the hands of malicious actors. Whether investigating scammers, extortionists, fake news, or harassers, serious cyber fraud detectives like Cybertrace must apply vigilance, innovation, and foresight. After exploring the use of ChatGPT for nefarious purposes, let's now consider deepfakes and voice cloning. Given the financial, reputational, and personal harm these troubling technologies can cause, Cybertrace feels compelled to warn the public. Our experienced investigators continually stay at the forefront of technological developments and build innovative, custom-made tools to catch cybercriminals. With expertise in website forensics, cryptocurrency tracing, and scam investigations, Cybertrace is now also investigating deepfake images, video, and audio to identify those responsible. Contact our team today to discuss your case and get the help you need.

What are Deepfake Images, Video and Audio?

Built on powerful AI Machine Learning (ML) models, a deepfake is a digitally forged image, video, or audio recording that makes a person appear to be someone else, or to say and do things they never did. Deepfakes produce frighteningly realistic-looking and -sounding images, videos, and audio of events that never took place. Deepfakes usually target celebrities, politicians, or public figures, with early examples featuring President Obama and Pope Francis. This is because the technology behind deepfakes works better with large data sets, i.e., plentiful footage on which to train the model. However, with continual improvements in machine learning, anyone can become a target. In fact, AI can now clone your voice from as little as three seconds of recorded speech. This is why acquiring the capacity for investigating deepfake images, video, and audio is becoming increasingly important. As always, Cybertrace are pioneers in this investigative (cyber)space.

[Image: Scammer wearing a hat in the dark, creating deepfakes]

How are Deepfakes Made?

Interestingly, deepfakes are produced by pitting two ML models against each other: one creates fake images/video/audio from available data, while the other tries to detect the fakes. When the detector can no longer tell whether an image, video, or audio clip is real or fake, it will likely fool humans, too. This, in short, is how a Generative Adversarial Network (GAN) operates. Not too long ago, producing deepfakes was the preserve of basement-dwelling hackers and computer nerds with high-powered gaming PCs. Nowadays, anyone with a laptop and an internet connection can generate a realistic-looking deepfake video for free in less than 30 seconds. In addition, AI voice cloning only requires a three-second audio clip to make your voice say anything its creator desires. Indeed, a recent Forbes article called the whole process 'scarily easy', especially for those with bad intentions. Investigating deepfake images, video and audio has never been more important.
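To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop, assuming Python and PyTorch purely for illustration. It is a toy example on one-dimensional data, not the production-scale models behind real deepfakes, but the two-player game is the same: a generator learns to forge samples while a discriminator learns to spot them.

```python
# Toy GAN sketch (illustration only): generator vs discriminator on 1-D data.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: turns random noise into fake "samples" (stand-in for fake media).
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: Gaussian around 4.0
    fake = G(torch.randn(64, latent_dim))   # forgeries from random noise

    # 1) Train the detector to separate real from fake.
    opt_D.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_D.step()

    # 2) Train the forger to fool the detector (wants fakes scored "real").
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()

# After training, generated samples should cluster near the real mean of 4.0.
print(G(torch.randn(1000, latent_dim)).mean().item())
```

In real deepfake systems, the same contest plays out over faces and voices rather than numbers, using far larger networks and data sets; the principle of the forger improving until the detector is fooled carries over directly.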


What are the dangers of deepfake images, video and audio?

What started as harmless entertainment between users quickly degenerated into coders creating ML algorithms that transposed celebrity faces onto porn videos. As a recent investigation by the ABC's Four Corners demonstrates, non-consensual porn is still one of the main applications of deepfakes. Criminals also use voice cloning to steal your money and identity via phone banking or government services portals like Centrelink. Add to that fake news, misleading voters, illicit payments, fake alibis, blackmail, and terrorism, and you've got a heady cocktail! No wonder more and more experts are calling for increased capabilities for investigating deepfake images, video and audio.

[Image: Woman with a deepfake face added using AI]

But beyond nefarious uses for specific criminal activities, there is also a larger problem at hand. Deepfakes pose huge societal challenges to our basic understanding of what is real, true, and believable. Given humans' strong propensity to believe our own eyes and ears, video and audio evidence carries great weight. This is as true in law enforcement as it is in elections and popular culture. Do deepfakes mean we can no longer trust our senses? Will voice cloning make us think politicians said something they never did? Does it make claims of "fake news" easier to sustain? With deepfakes increasingly hard to spot, even for detector tools, we are likely to see worsening political disinformation. In this climate, acquiring the professional capacity for investigating deepfake images, video, and audio has never been more important.

How do Cybercriminals use deepfake images, video and audio?

Naturally, cybercriminals have been quick to adopt AI and deepfake images, video, and audio to exploit, misinform and manipulate. In the realm of AI more generally, experts worry that terrorists are exploring how to use driverless vehicles as weapons. Additionally, hostile states might seek to disrupt AI-controlled systems, causing power failures, traffic gridlock, or breakdowns in food logistics. Furthermore, malicious actors can utilise deepfake images, video, and audio to foment civil unrest and manipulate elections. Finally, compared to traditional forms of crime, AI-based crime (including voice cloning) is far simpler to share, repeat or sell. As a result, cybercriminals can easily provide for-fee “crime services” or outsource the more difficult parts of their digital crime. Investigating deepfake images, video and audio has thus become a pressing issue for law enforcement and intelligence agencies worldwide.


What about scammers and fraudsters?

Malicious state actors, terrorists, and crime syndicates aren't the only ones making use of AI. Indeed, deepfake images, video, and audio have also enabled previously small-time scammers to significantly scale up their operations. Using fake images and videos, fraudsters can now set up bogus accounts on dating apps to perpetrate romance scams. Likewise, they can create fake videos of CEOs or celebrities endorsing fraudulent cryptocurrency investments. Scammers use these fake endorsements to manipulate unsuspecting victims and steal their hard-earned money. Furthermore, fraudsters can easily create images of fake products, services, or product reviews to deceive victims via car sale scams. Put together, there are plenty of reasons for prioritising the investigation of deepfake images, video, and audio, including voice cloning.

Is voice cloning an issue?

As mentioned before, voice cloning can recreate a person's voice from only three seconds of listening to them speak. Worryingly, this comes at a time when banks and government agencies increasingly use voice biometrics to authenticate client phone calls. Ultimately, this means that scammers can use AI to quickly and convincingly replicate a key piece of your personal identity. As with all their other ploys, their ultimate goal is to access your personal details and money. Furthermore, fraudsters can also use voice cloning to impersonate a friend or family member in distress and in need of money. In what is essentially a more sophisticated version of the "Hi Mum" text message scam, scammers manipulate victims' familial love and protectiveness. It is only once scammers have drained their bank accounts that victims realise there is no family member in trouble. Sadly, we are seeing increasing numbers of cases and warnings about AI-enhanced family emergency scams.


What other threats are there?

Unfortunately, voice cloning is also enhancing the quality and believability of mass fraud via automated phone messages, aka ‘robo-scams’. Most recipients of ominous messages from tax offices, telecommunication providers, or other government/corporate entities could usually tell they were fake. However, with improvements in AI voice and script quality, distinguishing real messages from fake ones is going to become harder. Finally, scammers are using AI to create fake compromising “evidence” to blackmail targets. For these reasons, Cybertrace believes investigating deepfake images, video or audio is increasingly important.

What can we do to protect ourselves against deepfake images, video and audio?

In a digital world increasingly awash with deepfake images, video and audio, we need to protect ourselves. Luckily, there are several ways we can do this. Firstly, in a cat-and-mouse race with deepfake generators, tech companies like Intel and OpenAI are working on deepfake detection tools. While these will hopefully become sophisticated enough to keep pace with deepfakes, they are not one hundred percent reliable. Consequently, it is important that we also rely on good old-fashioned critical thinking and media savvy by asking: who made the video? Has it been corroborated by other media outlets? What else has its subject said? In short, we all need to take a stake in investigating deepfake images, video and audio.
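For readers who want to experiment with detection tools themselves, here is a hedged sketch of how an off-the-shelf image classifier might be invoked in Python via the Hugging Face transformers pipeline API. The model identifier below is a placeholder assumption, not a recommendation of any specific tool; substitute whichever deepfake-detection checkpoint you trust.

```python
# Hedged sketch: running a suspect image through an image classifier using
# the Hugging Face `transformers` pipeline API.
from transformers import pipeline

# NOTE: "some-org/deepfake-detector" is a hypothetical model ID used for
# illustration; replace it with an actual deepfake-detection checkpoint.
detector = pipeline("image-classification", model="some-org/deepfake-detector")

# Accepts a local file path, URL, or PIL image.
results = detector("suspect_photo.jpg")
for r in results:
    print(f"{r['label']}: {r['score']:.2%}")
```

As the section above stresses, treat any such score as one signal among many: detectors lag behind generators, so corroborate with provenance checks and the critical questions listed earlier.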


One great example of how to proactively foster this capacity comes from the Nordic country of Finland. Faced with a concerted disinformation campaign from its neighbour Russia, it brought in global experts to help design effective countermeasures. By combining fact-checking with critical thinking and voter literacy, Finland has cultivated a robust citizenry less likely to be manipulated. This represents a powerful outcome, and one that can be replicated in other countries. Likewise, comedy and satire provide effective avenues for educating the public about the dangers of deepfakes. A great example of this is comedian Jordan Peele's famous Obama deepfake video. Whatever the method, education plays a crucial role in equipping the community for investigating deepfake images, video, and audio, including voice cloning. But what if scammers or extortionists get past our defences?

How can Cybertrace help me?

Sometimes, determined fraudsters can get past even the most vigilant defences. That’s when you need some serious help on your side! Suppose scammers swindled you out of your hard-earned money by impersonating your grandchildren, potential romantic partners, or celebrities. Alternatively, someone might be distributing deepfake images or videos of you in order to harass or extort you. Fortunately, Cybertrace’s team of experienced investigators can help you get to the bottom of who is responsible. Whether it’s tracing cryptocurrency across the blockchain, forensically analysing websites to see who hosts them, or painstakingly combing through social media profiles to identify offenders, we can help with investigating deepfake images, video, and audio, including voice cloning. Find out who is behind the scam, blackmail, or harassment, so you can go to the police or your lawyers. Cybertrace has the resources, capabilities, and dedication to get the job done. Contact our experienced investigators today.



Contact Us

Contact our friendly staff at Cybertrace Australia for a confidential assessment of your case. Speak with the experts.

Email: [email protected]
International: +61 2 9188 7896