How Hackers' Use of AI Complicates Cybersecurity

Despite the increasing use of AI as an enabler of, and disruptive technology for, positive change, voices of caution are growing louder.

A recent Harvard Business Review article discusses the potential cybersecurity risks arising from the increasing use of OpenAI’s ChatGPT. The technology offers hackers new ways to create sophisticated phishing scams and to trick the AI into generating malicious code. The article calls for action to address these emerging risks and stresses the need to train cybersecurity professionals and equip them with tools to respond effectively. It also argues that government oversight is needed to ensure AI usage does not become detrimental to cybersecurity efforts.

ChatGPT’s ability to converse seamlessly with users, without spelling, grammatical, or verb tense mistakes, makes it seem as if a real person could be on the other side of the chat window. From a hacker’s perspective, ChatGPT is a game changer: it can be used to craft more sophisticated phishing emails that are far harder to detect.

Cybersecurity experts need new and improved tools that help detect machine-generated content. Employees also need regular training so they understand the sophistication of current AI-driven threats.
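As a rough sketch of what such a tool might look like, the snippet below screens the body of an incoming email with an off-the-shelf text classifier from the Hugging Face Hub. The model id, its label names, and the confidence threshold are assumptions for illustration, not a recommendation of a specific detector.

```python
# A minimal sketch of flagging possibly machine-generated text in incoming
# email, using an off-the-shelf detector from the Hugging Face Hub.
# The model id and its label scheme are assumptions for illustration; any
# classifier trained to separate human- from AI-written text could be used.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",  # assumed model id
)

def screen_email(body: str, threshold: float = 0.9) -> bool:
    """Return True if the detector thinks the text is likely machine-generated."""
    result = detector(body[:512])[0]  # truncate long messages for the model
    label, score = result["label"], result["score"]
    # Label names vary between detectors; "Fake" is assumed here to mean
    # machine-generated text.
    return label.lower() == "fake" and score >= threshold

suspicious = screen_email(
    "Dear customer, your account requires immediate verification. "
    "Please follow the secure link below to avoid interruption of service."
)
print("Flag for review" if suspicious else "Looks human-written")
```

No detector of this kind is reliable on its own; flagged messages still need human review, which is exactly why the regular employee training mentioned above matters.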

One of ChatGPT’s strong suits is code generation. While it is designed to refuse requests for malicious code intended for hacking purposes, bad actors may be able to trick the AI into producing it anyway. Countering this requires continuous upskilling and resources so that cybersecurity professionals can respond to ever-growing threats, AI-generated or otherwise.
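To make the tooling side of that point concrete, here is a deliberately naive sketch of a first-pass review step for code of uncertain origin, such as a snippet produced by a chatbot, before anyone runs it. The flagged modules and built-ins are illustrative assumptions, not a complete or reliable deny-list.

```python
# A deliberately naive sketch of screening an untrusted (for example,
# AI-generated) Python snippet for risky constructs before it is run.
# The flagged modules and built-ins are illustrative assumptions only;
# real review still requires a human analyst.
import ast

RISKY_IMPORTS = {"socket", "subprocess", "ctypes", "os"}
RISKY_CALLS = {"eval", "exec", "compile"}

def flag_risky_constructs(source: str) -> list[str]:
    """Return human-readable findings for imports and calls worth a closer look."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name for alias in node.names]
            module = getattr(node, "module", None)
            for name in names + ([module] if module else []):
                if name and name.split(".")[0] in RISKY_IMPORTS:
                    findings.append(f"line {node.lineno}: imports {name}")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: calls {node.func.id}()")
    return findings

snippet = "import subprocess\nsubprocess.run(['curl', 'http://example.com'])\n"
for finding in flag_risky_constructs(snippet):
    print(finding)
```

The point is not that a pattern list can stop a determined attacker, but that defenders need automated first-pass tooling, plus the skills to interpret its output.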

In this rapidly changing environment, where regulation and legislation cannot keep pace, users and experts need to be aware of the ubiquitous risks. Companies launching generative AI products must also regularly review the security features of their products to reduce potential avenues of misuse.

Not covered in this discussion are cyber threats that become even more intricate with the rise of seamless connectivity between humans, machines, and the Internet of Things. Although this interconnectivity will improve business efficiency, connecting everything will generate large databases that hackers can just as easily analyse and exploit.

What do you think? Will hackers benefit from better AI systems? Does the positive impact of AI outweigh the concerns?