December 2023

AI might create more secure code

The rise of generative pre-trained transformer (GPT) models has already become a source of consternation among cybersecurity professionals. The prospect of machines soon generating massive numbers of phishing attacks that are more challenging to detect is daunting.

However, there is also cause for optimism when it comes to artificial intelligence (AI). Once AI platforms are one day writing code, it’s unlikely they will employ the same high-level programming languages that humans rely on today to interface with machines. Instead, they will rely on lower-level programming languages to communicate machine-to-machine. As a result, there will be fewer coding mistakes for cybercriminals to exploit.

The sad truth is that most security vulnerabilities that exist today can be traced back to an issue involving how an application accesses memory. Cybercriminals exploit those vulnerabilities to launch attacks that, for example, take advantage of a buffer overflow to access data. Developers are transitioning to memory-safe languages such as Rust, Go, C#, Java, Swift, Python and JavaScript to eliminate many of these vulnerabilities, but if machines start to write more of the code, that shift may not be as critical as it appears today.
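To make that concrete, here is a minimal, hypothetical sketch in C (the function name and input are invented for illustration) of the kind of memory-access mistake at issue: copying untrusted input into a fixed-size buffer with no bounds check, the root cause of the classic buffer overflow. A memory-safe language such as Rust or Go would bounds-check or reject the equivalent access instead of silently corrupting memory.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical request handler showing the classic memory-safety
       bug: untrusted input is copied into a fixed-size stack buffer
       without any bounds check. */
    static void handle_request(const char *input) {
        char buffer[16];
        strcpy(buffer, input);   /* overruns buffer if input > 15 bytes */
        printf("received: %s\n", buffer);
    }

    int main(void) {
        /* 40 bytes of attacker-controlled data overflow the 16-byte
           buffer and corrupt adjacent stack memory (saved registers,
           return address), giving an attacker a foothold. */
        handle_request("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
        return 0;
    }

    /* The conventional fix is to bound the copy, e.g.
       snprintf(buffer, sizeof(buffer), "%s", input);
       memory-safe languages perform that check automatically. */

The bug is invisible at a glance and compiles without complaint, which is exactly why this class of vulnerability has proven so persistent in human-written code.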

In fact, it’s not hard to imagine a future where anyone could verbally describe what they want an application to do and then wait for a machine to generate the code by scouring the Internet for similar examples. There would obviously be some copyright issues to work out, but the volume of software created is likely to be several orders of magnitude greater than it is today.

Of course, all that software will still need somewhere to run, and that environment will still need to be secured. Cybersecurity professionals may soon find themselves spending a lot more time locking down hardware to make sure cybercriminals don’t, for example, use a set of stolen credentials to access an application environment. It’s also highly probable that cybercriminals will one day use an AI platform to create a malicious application with a backdoor built into it.

By then, the hope is that organizations will be much further along the path toward embracing zero-trust IT than they are today, so the opportunity for mischief will be much smaller. However, complete zero trust is unlikely ever to be achieved. There will always be dependencies, assumptions and biases that could be exploited. The challenge now is finding a way to arm cybersecurity teams with the AI tools required to thwart attacks crafted by machines. In effect, AI platforms will be battling one another for cybersecurity supremacy.

In the meantime, cybersecurity teams may want to assume things are likely to get worse before they get better. The most important thing to do now is to reduce, as much as possible, the size of the attack surface that needs to be defended. After all, the best cybersecurity strategy always starts with reducing the number of potential targets.

By Mike Vizard

This article originally appeared on Journey Notes, the Barracuda blog.
