ChatGPT, the fastest-growing app of all time, has put the power of an artificial intelligence (AI) large language model – trained on a vast body of information gathered from the internet – at people’s fingertips.

This new power has already changed the way many people go about their work or search the internet for information, with OpenAI’s technology fueling both excitement about the promise of what AI can deliver and fear about the changes it is ushering in.

One of the fears surrounding AI technologies like ChatGPT is what criminals and other bad actors will do with that power.

That’s the concern raised by Europol, the European Union’s law enforcement agency, in its recent report on ChatGPT, entitled “The impact of large language models on law enforcement”.

ChatGPT, built on OpenAI’s GPT-3.5 large language model technology, could “make it significantly easier for malicious actors to better understand and subsequently execute different types of crimes,” the report said.

That’s because, while the information used to train ChatGPT is already freely available on the web, the technology is able to provide step-by-step instructions on all sorts of subjects if it receives the right contextual prompts from a user.

Here are the types of crime Europol warns that chatbots, or LLMs, could potentially help criminals commit.

Fraud, Impersonation and Social Engineering

ChatGPT and other chatbots like Google’s Bard have amazed users with their ability to produce human-like text on any topic in response to user prompts.

They can mimic celebrities’ writing styles, and can learn a writing style from input text before producing more text in that learned style. This opens the system to abuse by criminals looking to imitate a person’s or organization’s writing style, which could be exploited for phishing scams.

Europol also warns that ChatGPT could be used to lend legitimacy to various types of online scams, for example by creating masses of fake social media content to promote a fraudulent investment offer.

One of the sure-fire signs of potential fraud in email or social media communications is the obvious spelling or grammatical errors made by the criminals writing the content.

With the power of LLMs at their fingertips, even criminals with little knowledge of the English language would be able to generate content that no longer exhibited these red flags.

The technology is also ripe for those who want to create and spread propaganda and disinformation, as it is capable of creating arguments and narratives at great speed.

Cybercrime for beginners

ChatGPT is not only adept at producing text; it also has command of a number of programming languages. According to Europol, this could have an impact on cybercrime.

“With the current version of ChatGPT, it is already possible to create basic tools for a variety of malicious purposes,” the report warns.

These would be basic tools for creating things like phishing pages, but they allow criminals with little to no programming knowledge to build things they couldn’t create before.

The inevitable improvement of LLM capabilities means that exploitation by criminals offers a “bleak outlook” in the years to come.

The fact that GPT-4, the latest version of OpenAI’s transformer model, is better at understanding the context of code and correcting errors means it is “an invaluable resource” for criminals with little technical knowledge.

Europol warns that as the technology continues to improve, it could become far more capable “and therefore dangerous”.

Deepfakes are already having consequences in the real world

The use cases for ChatGPT that Europol has warned about are just one area of AI that could be exploited by criminals.

There have already been cases where AI deepfakes have been used to trick and harm people. In one case, a woman said she was only 18 when she saw pornographic images of herself circulating on the internet — even though she had never taken or shared those images.

Her face had been digitally added to images of another person’s body. She told Euronews Next it was “a life sentence”. A 2019 report by Deeptrace Labs found that 96 percent of deepfake content online is non-consensual pornography.

In another case, an AI audio deepfake was used to mimic a person’s voice in order to scam their family member.

Europol concluded its report by stating that it is important for law enforcement to “stay at the forefront of these developments” and to anticipate and prevent criminal uses of AI.
