April 2024

Weighing the cybersecurity pros and cons of ChatGPT

Now that it’s become apparent that generative artificial intelligence (AI) platforms such as ChatGPT are yet another in a long line of tools that from a cybersecurity perspective cut both ways the time has come to determine how to make the best use of them while simultaneously putting guardrails in place that will hopefully limit the harm that can clearly be done.

Weighing the cybersecurity pros and cons of ChatGPT

 

A report from the Cloud Security Alliance (CSA) dives into the threat generative AI platforms represent to cybersecurity, ranging from already detailed examples of how phishing campaigns will not only increase in sophistication and volume but also how these platforms can be used to improve reconnaissance, discover and exploit vulnerabilities more efficiently, and write polymorphic code capable of changing its appearance in a way that makes it possible to evade detection by scanners.

Other types of potential attacks the report identifies as of being should be of concern include:

  •   Prompt injection to expose internal systems, application programming interfaces (APIs) and data sources
  •   Prompts and queries that cause large replies or loop until the service runs out of tokens
  •   Prompt injection to provide product support responses for questions the attacker can they employ their advantage
  •   Prompts that generate legally sensitive output related to libel and defamation
  •   Attacks that might inject data into training models

In addition, the report notes that organizations need to establish policies for employing platforms such as ChatGPT to make sure that data containing personally identifiable information (PII) is not inadvertently shared with a public cloud computing service. The connection to the platform also needs to be secure because there is always the possibility a result of a query might have been tampered with before being surfaced, the report notes.

On the plus side, however, generative AI platforms should make it simpler for developers to leverage tools such as GitHub Copilot to write more secure code, while providing cybersecurity teams with a tool that makes it easier for them to also scan for vulnerabilities in addition to recognizing cyberattacks using the MITRE ATT&K framework, improving incident response times by making it easier to hunt for threats and automate playbooks, analyze files and code, create tests, write better policies, provide end users with better best practices guidance, and even detect code written by a generative AI platform.

OpenAI and Microsoft have established limits to how ChatGPT can be employed.  For example, queries that are harmful or identified as threats are left unanswered. However, there’s clearly still a lot of potential to wreak havoc. The proverbial AI genie is out of the bottle and there’s no way to put it back.

Savvy cybersecurity professionals realize that an ounce of prevention no matter how much AI is being employed is going to be worth a pound of cure. Like it or not, a cybersecurity AI arms race is underway. The only thing left to determine now is how the next cybersecurity battle will be fought and won when it’s clear to do nothing is to accept defeat.

By Mike Vizard

This article originally appeared on Journey Notes, the Barracuda blog.

Link to the original post

Back