Is it a Real Person or Deepfake?
Have you ever wondered, is it a real person or a deepfake? Imagine being catfished: on top of being a horrible experience, it can lead to emotional distress and financial loss. And just as Charles Darwin’s theory of evolution describes living species gradually changing and adapting over time, our worries have evolved too, from “Is the person I’m talking to who they say they are?” to “Is the person I see on the screen an actual living, breathing human being?”
It is not exactly the kind of evolution worth celebrating. However, Artificial Intelligence (AI) is not going anywhere and will only continue to make its mark on the modern age of technology. In recent years, a new AI-related term has been coined, and it has become a genuine threat to cybersecurity: deepfakes.
While it is true that the technology behind deepfakes has made major advancements in education and entertainment, its misuse has raised moral and legal challenges.
Deepfake Deception
Being deceived by a deepfake is similar to, but not quite the same as, falling victim to catfishing. Both are rooted in fraud and deception, but they operate in different ways. Deepfakes are far more advanced: the technology creates entirely new images or content, rather than relying on pre-existing material to build a false identity online.
What are deepfakes and how dangerous can they be? In this article, we will discuss the origin of deepfakes, how they affect your day-to-day, the legality of it all, and how to spot them.
Meaning and Origin
The word “deepfake” combines “deep learning” and “fake”. A deepfake is a type of synthetic media that uses AI to create realistic fake images, videos, and audio, misrepresenting someone as saying or doing something they never said or did. The term originated in 2017, when a Reddit user of the same name began posting manipulated pornographic videos of celebrities. Although the term itself was popularised in the late 2010s, deliberate falsification, such as doctoring images with Adobe Photoshop, has been around for decades and only continues to evolve.
Social Impact
The main issue with deepfakes arises when the person being misrepresented has not given explicit consent. Deepfakes are not only sexual in nature; they can also be used to manipulate not just individuals but the opinions of the general public.
People have used deepfakes for malicious purposes, including but not limited to:
- Extortion: victims may be portrayed in compromising videos, with threats to release non-consensual pornographic material unless a ransom is paid
- Emotional manipulation: in romance scams, a fabricated online persona can lead victims to believe they are developing a real connection with a person who does not exist
- Financial fraud and scams: victims can be led to believe they are communicating with real, trusted individuals, which can result in financial loss
- The spread of false information: fabricated content can sow distrust and confusion in political settings or during emergencies
From politics to romance, nothing is what it seems.
Are there laws against deepfakes?
Legislation regarding deepfakes is still evolving as the technology poses unique challenges to cybersecurity.
At present, the Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (Cth), which took effect on 3 September 2024, combats sexually explicit deepfakes. The Act introduces a new offence of sharing non-consensual sexual images or videos created with deepfake technology, carrying a penalty of up to six years’ imprisonment, or seven years for aggravated offences.
The Privacy Act 1988 (Cth) and the Cybercrime Act 2001 (Cth) may also apply to victims of deepfakes, along with various state privacy and defamation laws.
While the laws above are a start, further legislative effort is needed to protect individuals against deepfakes.
How to detect Deepfakes?
According to Cyber Daily AU, 79 per cent of Australian social media users find it difficult to spot AI-generated content online, and only 25 per cent are confident they could detect whether a call from a friend or relative was being faked.
Deepfakes are becoming more realistic, harder to spot, and more accessible to the public. Here are some signs to look out for:
- blurring, pixelation, or glitches in parts of the video
- skin discolouration and inconsistent shadows
- sudden changes in lighting
- poor lip-syncing or inconsistent audio quality
- conveyed emotions that do not match the facial expressions
- too little or too much blinking
Always cross-reference information and verify your sources; there is no such thing as being too careful, especially now that the phrase “to see is to believe” no longer holds true.
If you need further information or have been affected by a deepfake, contact our team of investigators, who can assist you with your enquiry.