
ChatGPT Scam Investigations


Cybertrace Team

May 1, 2023 · 8 min read


Lately, ChatGPT has been in the news a lot. Whether it’s revolutionizing writing, eroding public trust or upending white-collar work, there’s been plenty of press on OpenAI’s creation. Catching the world’s attention, the AI-powered language tool can generate authentic-sounding human prose at the touch of a button. But is it just a harmless experiment for professionals, students and nerds, or could it become dangerous in the hands of cybercriminals? Getting ahead of the curve, Cybertrace has compiled this report to alert the broader public to ChatGPT’s potential dangers. Let’s look at ChatGPT, its downsides and how criminals are exploiting them for scams. Finally, we will consider what you can do to protect yourself.

What is ChatGPT and how does it work?

ChatGPT is a free online application that synthesizes information from multiple websites to provide answers to user prompts and queries. Basically, it works like a search engine, but one which combines various sources of information into one cohesive, ready-to-use answer. Part of what is known as artificial intelligence, ChatGPT is a large language model (LLM) developed by the company OpenAI. Trained on huge amounts of text, ChatGPT is able to recognize, replicate and respond to natural human language patterns. Importantly, ChatGPT’s developers trained it to interact in a conversational way, meaning it can answer follow-up questions and refine answers. Not only does it have the potential to revolutionise online search, but users can also deploy ChatGPT to write emails, reports and other documents for them. Released in late 2022 and refined in early 2023, ChatGPT has generated a huge amount of public and commercial interest.


What are the downsides of ChatGPT?

Like every technology, ChatGPT has some serious downsides, some of which relate to its technical limitations. For one, ChatGPT’s answers do not cite the sources of their information, which makes it hard to detect bias or verify claims. Additionally, while often sounding highly plausible, its answers are frequently inaccurate, wrong or even completely made up. That is because ChatGPT only understands the structure and patterns of human language, not the meaning or truth behind it. For that reason, the model produces simple answers and is not capable of accurate advanced analysis.

However, other downsides are not due to technical limitations but to bad-faith actors utilising ChatGPT for nefarious purposes. In a recent alert, Europol warned of a number of potential criminal uses of the technology. For one, crooks use ChatGPT as a fast track for their criminal education. Whether it’s plans for terrorist attacks, kidnapping plots or burglaries, the internet contains a wealth of information for aspiring criminals. While this information was available before the advent of ChatGPT, the bot’s summarizing capabilities mean it significantly accelerates the process. Thus, ChatGPT significantly lowers the entry barrier for criminals when it comes to skills and knowledge. Secondly, it lends itself to easily creating hate speech, disinformation and propaganda, especially due to its authoritative tone, even when incorrect. With the underlying technology expected to improve dramatically, it may become more and more difficult to ascertain whether any text is human- or bot-generated.


How do Criminals and Scammers use ChatGPT and Other AI?

Exploiting the Hype

Scammers have immediately taken advantage of the hype around OpenAI and ChatGPT by setting up fake websites and apps. By some estimates, there are more than fifty fake ChatGPT apps/extensions that try to steal users’ personal or financial data. The most prominent of these is a fake Google Chrome extension that harvests unsuspecting victims’ cookies and Facebook account data. Additionally, Unit 42 researchers at Palo Alto Networks have uncovered a rapid rise in OpenAI and ChatGPT squatting domains following ChatGPT’s launch. Squatting domains are impersonation websites which acquire domain names similar to popular sites to try and deceive visitors into thinking they’re the real deal. Needless to say, it’s nothing more than social engineering with the aim of identity theft and fraud.
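To illustrate how squatting domains work, the sketch below flags a domain that closely resembles, but is not identical to, a legitimate one. This is only a simplified example using the Python standard library; the allowlist and the 0.75 similarity threshold are assumptions for illustration, not part of any real detection product.

```python
from difflib import SequenceMatcher

# Known-good domains (an assumed allowlist for this example).
LEGITIMATE_DOMAINS = {"openai.com", "chat.openai.com"}

def looks_like_squat(domain: str, threshold: float = 0.75) -> bool:
    """Return True if `domain` closely resembles, but is not exactly,
    a legitimate domain -- the hallmark of a squatting site."""
    domain = domain.lower().strip()
    if domain in LEGITIMATE_DOMAINS:
        return False  # exact match: the real site
    return any(
        SequenceMatcher(None, domain, legit).ratio() >= threshold
        for legit in LEGITIMATE_DOMAINS
    )

print(looks_like_squat("openai.com"))       # the real domain -> False
print(looks_like_squat("openai-chat.com"))  # near-miss lookalike -> True
```

Real squatting detection is far more involved (homoglyphs, new TLDs, certificate data), but the core idea is the same: a name that is almost, but not quite, the one you trust deserves suspicion.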


Writing Malware Code

Referring to software intentionally designed to disrupt, damage or gain unauthorized access to a computer, malware represents a significant threat. While cybercriminals were able to write malware code before, ChatGPT significantly lowers the entry barrier for malicious hackers. Now even those with minimal coding skills can easily deploy ChatGPT to generate malware. In addition, they can even work with its conversational model to fix any subsequent bugs in the code. Although OpenAI programmers set up ChatGPT so that it cannot create harmful material, crooks have found easy ways around these safeguards. For instance, they might simply break a request up into smaller, innocuous-looking chunks or use synonyms for terms flagged as harmful. The newly created malware is then used to infect victims’ computers, for example to remotely access their online banking or crypto wallets. Alternatively, they might illegally harvest their targets’ personal and login details or infect their computer with ransomware for extortion purposes.


Producing More Sophisticated Email and Text Scams

This is probably the most significant way in which fraudsters are now making use of ChatGPT. Traditionally, many phishing, investment, romance and tech support scammers struggle with making their text-based communication appear genuine. This is because international fraud syndicates often forcibly employ people who speak English as a second language in slave-like conditions. As a result, they often make grammar or syntax mistakes and cannot match the required tone, context and nuance. Usually, this has meant that many recipients of such emails or texts become suspicious and simply ignore them.

Using natural language text generation via ChatGPT, scammers can now quickly produce tailored prose that looks correct, convincing and authoritative. Unfortunately, that means it sounds exactly like the official and corporate email tone they are trying to imitate. For example, you may receive an urgent email from your company’s IT administrator asking you to install a security update. Or a warning from your bank or service provider that your account is at risk of being deactivated. The email looks so genuine that you don’t notice the slight difference in the sender’s email address. You then either click on a link that downloads malware onto your computer or enter your login details into a real-looking site controlled by the scammers. Either way, the fraudsters have now gotten access to your computer, passwords and money.
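One classic tell of such phishing emails is a link whose visible text shows one web address while the underlying link actually points somewhere else. A minimal sketch of that check, using only the Python standard library (the email snippet and domain names below are made up for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from the links in an email body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html_body: str) -> list:
    """Flag links whose visible text names a different domain
    than the one the href actually points to -- a phishing tell."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        # Only compare when the visible text itself looks like a URL/domain.
        if " " in text or "." not in text:
            continue
        href_domain = urlparse(href).netloc.lower()
        text_domain = urlparse(text if "://" in text else "https://" + text).netloc.lower()
        if text_domain and href_domain and text_domain != href_domain:
            flagged.append((href, text))
    return flagged

email_body = '<p>Update now: <a href="https://secure-login.example.net">https://mybank.com</a></p>'
print(suspicious_links(email_body))
# -> [('https://secure-login.example.net', 'https://mybank.com')]
```

Most mail clients let you preview a link’s true destination by hovering over it; this sketch simply automates that same comparison.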


Accelerating and Expanding Scams Through Automation

Finally, ChatGPT and other AI also work as accelerants and expansion agents for all types of scams. This is because the key limitation for human-run scams is the labour-intensive persuasion process, i.e., the time and effort it takes to gain victims’ trust and infiltrate their finances. In scamming business terms, the cost savings and productivity gains of moving from paid labour to automated scamming are phenomenal. Using ChatGPT-bots, a single scammer with a laptop can now run hundreds of thousands of scams in multiple languages simultaneously.

Additionally, the ChatGPT natural language model makes text-based communication more subtle, sophisticated and personalised, thus greatly extending its effectiveness. Finally, scammers can combine ChatGPT and voice cloning to create highly targeted, personalized and responsive dialogue over the phone. In effect, you won’t be able to tell that the human speaking with you is actually a highly sophisticated scam-bot. Basically, ChatGPT and other AI are facilitating the creation of billions of scammers that never sleep. So how can we deal with this as it infiltrates all aspects of our lives?


How can I protect myself, my family and my business from ChatGPT scams?

With ChatGPT scams increasing drastically in reach, sophistication and effectiveness, some of the old warning signs no longer apply. Unfortunately, robotic voices, bad grammar and terrible spelling are no longer the easy giveaways they once were. Instead, it is necessary to adopt a more sceptical, critical stance towards any unsolicited contact. Also, it’s important to realise that scammers design phishing emails and phone calls to exploit our emotions, causing us to react hastily. If a message evokes anger, fear or excitement, pause and think carefully before acting. Usually, contacting the alleged sender via their company’s official webpage (not the link or number within the email) will clear things up. With family members, call directly to verify their identity or agree on a safe word that signals a real emergency. Most importantly, do not share any personal identifying information or allow others remote access to your computer.

Can Cybertrace Investigate ChatGPT Scams?

Even with the best precautions, ChatGPT scammers might still find a way around your defences and steal your hard-earned money. Luckily, Cybertrace has the investigative expertise and technological know-how to successfully investigate ChatGPT scams. Whether it’s website forensics, cryptocurrency tracing or all kinds of scam investigations, our experienced investigators will leave no stone unturned. With customised tools and techniques, the goal is always to identify those responsible. Only if you find the brains behind a ChatGPT scam operation can you take action to recover your assets. Thankfully, constantly staying abreast of the latest technological developments as well as criminals’ ever-evolving methods is Cybertrace’s trade. Whether it’s being the first provider of cryptocurrency tracing products to the Australian public, our pioneering work on NFT scams or our emerging expertise in ChatGPT scam investigations, we’ve got your back. Talk to our experienced investigators today to discuss how we can help you.

