ChatGPT: The internet’s latest cyber-threat?

ChatGPT took the world by storm when it launched in November 2022, amazing humans with its almost-human responses to our curious queries. But as time went on and we got to know it better, it became clear that ChatGPT could do a whole lot more than just hold up its end of a conversation. It could write poetry, translate text between languages, compose haikus, write essays, and even generate code in a programming language of our choice. And that was just the tip of the iceberg.

And slowly, our wonder turned to trepidation. Or at least, it did for people involved in cybersecurity, as it became clear that ChatGPT’s capabilities could just as easily be used for evil as for good.

Having done some research on the subject – including a quick consult with ChatGPT itself – we’ve put together some of the ways that ChatGPT could already be contributing to cybercrime.

Improved Phishing Attempts

Phishing is on the rise in 2023, with the number of “malicious emails” reaching an “all-time high” in Q1 2023, according to PhishLabs. And it will probably only get worse. Until recently, one of the main reasons people didn’t fall for phishing scams is that many of them originated in countries where English is not the first language. The language used in those phishing emails was quite clearly not crafted by a native speaker, which made them easy to spot.

Now, however, with ChatGPT’s ability to generate content in near-perfect English, there’s nothing stopping the bad guys from creating much more convincing phishing emails.

Attack Automation

ChatGPT’s ability to reply to prompts could feasibly be used to automate the negotiation stage of a ransomware attack. It could also automate the generation of malware code: while OpenAI has built safeguards into ChatGPT to prevent it from responding to direct requests to create malware, people have found workarounds. For example, someone could ask ChatGPT to generate malware for penetration testing (a legitimate, ethical use), and then tweak the resulting code for their own nefarious purposes.

Malware that Morphs

ChatGPT’s coding capabilities could be used by malware creators to vary their code in millions of tiny ways, helping them avoid detection by signature-based security software. Malware that mutates like this is known as “polymorphic malware”, and it’s not entirely new, but its use could accelerate thanks to ChatGPT’s arrival on the scene.
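
To see why signature-based detection struggles against this, here’s a minimal Python sketch of a toy hash-based signature check (the payload strings and the KNOWN_BAD_HASHES “database” are made up for illustration; real antivirus engines are far more sophisticated). Changing even a single byte of a payload produces a completely different hash, so every machine-generated variant would need its own signature:

import hashlib

# Toy "signature database": SHA-256 hashes of known-bad payloads.
# These strings are hypothetical stand-ins, not real malware samples.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def is_flagged(payload: bytes) -> bool:
    # Flag a payload only if its hash exactly matches a known signature.
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

print(is_flagged(b"malicious_payload_v1"))   # True  -- the known sample is caught
print(is_flagged(b"malicious_payload_v1!"))  # False -- one changed byte, new hash, no match

A tool that can cheaply rewrite code into endless functionally identical variants turns that weakness into an assembly line, which is exactly the concern here.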

ChatGPT is Not Confidential

OpenAI specifically states in its terms and conditions that it “may collect Personal Information that is included in the input, file uploads, or feedback that you provide to our Services (“Content”)”. It also says in its official FAQ that it will use your conversations with ChatGPT to further train its AI language model, and that your chats “may be reviewed” by human AI trainers.

What this means is that the things you share with ChatGPT ARE NOT CONFIDENTIAL. So don’t enter any information into ChatGPT that you consider sensitive, and be sure to inform your staff of this as well – you don’t want them leaking confidential business information to OpenAI, as some Samsung employees were found to have done.

If you’re not sure, just don’t use it

ChatGPT is a fantastic invention from many perspectives. But as with anything humanity makes, there’s a dark side to what it can do, and some are already taking advantage of that.

We strongly advise you to keep an eye on ChatGPT developments, so you stay informed about what it can do, is doing, and could potentially do in the future. And if you’re still not sure about it, consider avoiding it entirely until it’s more mature.

Forewarned is forearmed, after all.
