As generative AI sees wider use, the potential for its misuse in cyberattacks is becoming apparent. One tool now at cybercriminals' disposal is ChatGPT, a generative AI service whose output can be customized and scaled to support attacks against enterprises.
Experts believe that the technology is not foolproof and that its use has pros and cons. In this article, we’ll take a closer look at ChatGPT and the potential threats it poses to businesses.
The Advantages and Disadvantages of ChatGPT
ChatGPT lowers the barrier to entry by closing knowledge gaps, putting polished writing within reach of anyone with a web browser. That accessibility cuts both ways: it also lets cybercriminals generate well-written messages at scale, each tailored to an individual victim. By improving the odds of a successful attack, ChatGPT is a cause for concern.
In addition, distinguishing AI-generated text from human-written text is difficult, and cybercriminals can continue to exploit that gap to launch attacks. Although OpenAI has released a text classifier to help organizations identify AI-generated text, the tool is unreliable.
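To illustrate why such detection is hard to rely on, here is a minimal sketch using an open-source detector from the Hugging Face transformers library. It is a stand-in for illustration, not OpenAI's own classifier, and its scores should be treated as one weak signal rather than a verdict.

```python
# Minimal sketch: scoring text with an open-source AI-text detector.
# This community model is a stand-in for OpenAI's classifier; its scores
# are probabilistic and often wrong, which is exactly why such tools
# cannot be the only defense.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

suspect_email = (
    "Dear colleague, please review the attached invoice and confirm "
    "your payroll details via the secure portal link below."
)

result = detector(suspect_email)[0]
print(f"label={result['label']} score={result['score']:.2f}")
# Treat this as one weak signal among many, not a definitive verdict.
```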
Phishing: The Major Concern
Phishing is a major concern when it comes to ChatGPT and similar technologies. Cybercriminals can use these tools to craft more believable phishing emails that lure employees into divulging sensitive company information. More than half of IT professionals predict that ChatGPT will be used in a successful cyberattack within the year, and Aaron Kane of MacHero in Chicago says educating employees on the risks and empowering them to spot these attacks is essential.
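As a rough illustration of the cues that awareness training emphasizes, the sketch below (a hypothetical heuristic, not a vendor product or Kane's method) flags two common phishing signals: urgency language and links whose domain does not match the claimed sender.

```python
# Hypothetical heuristic screener illustrating the cues employee
# awareness training typically emphasizes. Real email security relies
# on far more than keyword checks; this is only a teaching aid.
import re
from urllib.parse import urlparse

URGENCY_PHRASES = ["act now", "urgent", "verify your account", "password expires"]

def phishing_cues(sender_domain: str, body: str) -> list[str]:
    cues = []
    lowered = body.lower()
    # Cue 1: pressure or urgency language.
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            cues.append(f"urgency phrase: '{phrase}'")
    # Cue 2: links whose domain does not match the claimed sender.
    for url in re.findall(r"https?://\S+", body):
        link_domain = urlparse(url).netloc.lower()
        if not link_domain.endswith(sender_domain.lower()):
            cues.append(f"link domain mismatch: {link_domain}")
    return cues

print(phishing_cues(
    "example.com",
    "URGENT: verify your account at https://example-support.net/login today",
))
```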
The Zero-Trust Model
Jorge Rojas with Tektonic Managed Services (https://www.tek-help.com/it-services-mississauga) believes that a zero-trust model effectively prevents cyberattacks. “By verifying the identity of potential attackers and limiting their access to resources, a zero-trust model can make it much harder for attackers to compromise enterprise systems,” Rojas explains. In practice, the model treats every user and request as a potential attacker: identity must be verified at every step, and even a successful intruder gains access only to a narrow slice of resources.
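As a simplified illustration of that principle (a sketch, not Tektonic's actual implementation), the example below checks every request against both identity verification and a least-privilege policy before granting access.

```python
# Simplified zero-trust sketch: every request is authenticated and then
# authorized against a least-privilege policy; nothing is trusted by
# default just because it originates inside the network.
from dataclasses import dataclass

# Hypothetical least-privilege policy: role -> resources it may touch.
POLICY = {
    "payroll-clerk": {"payroll-db"},
    "web-server": {"product-catalog"},
}

@dataclass
class Request:
    user: str
    role: str
    token_valid: bool  # stands in for MFA / certificate verification
    resource: str

def authorize(req: Request) -> bool:
    # Step 1: verify identity on every request, not just at the perimeter.
    if not req.token_valid:
        return False
    # Step 2: grant only the narrow set of resources the role needs.
    return req.resource in POLICY.get(req.role, set())

# Even a verified identity cannot reach resources outside its role.
print(authorize(Request("alice", "payroll-clerk", True, "payroll-db")))      # True
print(authorize(Request("alice", "payroll-clerk", True, "product-catalog")))  # False
```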
Shadow IT and Code-as-a-Service
Another potential threat posed by ChatGPT is its role in shadow IT. Cybercriminals can use AI-powered tools to offer malware Code-as-a-Service, letting less experienced hackers operate well beyond their own technical knowledge. Armed with code that gains access and moves through an organization’s network faster and more aggressively than before, these attackers can cause devastating damage.
Troy Drever with Pure IT in Calgary stresses the importance of gaining visibility into every process and packet and limiting the blast radius with segmentation. “By integrating endpoint and network detection and response and conducting continuous threat hunting, businesses can better protect themselves from the growing threat of AI-assisted cyberattacks,” Drever advises.
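To make the idea of continuous threat hunting concrete, the sketch below (a hypothetical example, not Pure IT's tooling) periodically scans authentication events for a simple lateral-movement indicator: one account reaching an unusual number of distinct hosts in a short window.

```python
# Hypothetical continuous threat-hunting sketch: flag accounts that
# authenticate to many distinct hosts in a short window, a rough
# indicator of lateral movement. Real detection-and-response platforms
# correlate far richer endpoint and network telemetry.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
HOST_THRESHOLD = 5  # assumed tuning value, not a standard

def hunt(auth_events):
    """auth_events: iterable of (timestamp, account, host) tuples."""
    recent = defaultdict(list)  # account -> [(timestamp, host), ...]
    alerts = []
    for ts, account, host in sorted(auth_events):
        recent[account].append((ts, host))
        # Keep only events inside the sliding window.
        recent[account] = [(t, h) for t, h in recent[account] if ts - t <= WINDOW]
        hosts = {h for _, h in recent[account]}
        if len(hosts) >= HOST_THRESHOLD:
            alerts.append((ts, account, sorted(hosts)))
    return alerts

# Example: one service account fanning out across six hosts in minutes.
now = datetime(2023, 5, 1, 9, 0)
events = [(now + timedelta(minutes=i), "svc-backup", f"host-{i}") for i in range(6)]
print(hunt(events))
```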
Conclusion
While ChatGPT and generative AI technology have the potential to revolutionize the field of AI-assisted writing, they can also be used to launch devastating cyberattacks against enterprises. The cybersecurity community must remain vigilant and proactive to detect and prevent the use of these technologies in cyberattacks.
By adopting a comprehensive approach that includes educating employees, adopting a zero-trust model, and conducting continuous threat hunting, businesses can better protect themselves from the growing threat of AI-assisted cyberattacks.