Phishing attacks that convince end users to give up their credentials are, of course, the bane of cybersecurity, but with generative pre-trained transformer (GPT) language models, things may soon go from bad to even worse.
Researchers have already shown how machine learning can be used to generate text capable of driving phishing and business email compromise (BEC) campaigns at unprecedented levels of scale. While language models are typically trained using a specific corpus of data, researchers have been able to expose these models to additional content that could be used to prompt a specific outcome. For example, an article that made false claims could be used to generate an entire stream of similar harmful or misleading content.
More challenging still, it’s been shown that these models can even mimic styles of writing in a way that makes it difficult for the average person to distinguish between an email sent by their boss versus a machine.
This potential to wreak havoc is already sparking concern among cybersecurity professionals as interest in artificial intelligence (AI) platforms among hackers has sharply increased. There are currently restrictions on how platforms such as ChatGPT can be employed, but in time there are likely to be other platforms operated by entities or even nation-states with fewer scruples about how AI is applied. Misinformation is at the core of any propaganda campaign, so any tool that makes it easier to generate content at scale will be used for both good and ill. Providing cybercriminals with access to such a platform is one way to pay for its ongoing development.
One way or another, cybersecurity professionals should assume that phishing attacks will soon be increasing in terms of both volume and sophistication. Many digital processes will no doubt have to be reevaluated simply because the current level of trust that is assumed may no longer be reasonable in an era where email can be easily compromised.
Fortunately, there are tools for detecting whether a piece of text was created by a machine. It may only be a matter of time before these types of tools are incorporated into anti-phishing platforms.
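The article doesn't specify how such detection tools work, but many approaches score statistical properties of the text, since machine-generated prose can differ measurably from human writing. The sketch below is a deliberately simplified toy illustration of that idea, using a type-token ratio as the signal; the function names and the threshold are hypothetical, and production detectors rely on far stronger signals such as language-model perplexity.

```python
from collections import Counter

def type_token_ratio(text: str) -> float:
    """Toy statistical signal: unique words divided by total words.

    Human writing often repeats topical words more heavily than some
    machine-generated text does. This is only an illustration of the
    general idea of scoring text statistics; real detectors use model
    perplexity and other learned features, not this simple ratio.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def looks_machine_generated(text: str, threshold: float = 0.9) -> bool:
    """Flag text whose vocabulary is unusually non-repetitive.

    The 0.9 threshold is purely illustrative, not an empirical value.
    """
    return type_token_ratio(text) > threshold
```

In practice, any such score would be one input among many to an anti-phishing platform, combined with sender reputation, link analysis, and other signals.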
In the meantime, cybersecurity professionals should start tracking how generative pre-trained transformer language models are evolving. The pace at which these AI platforms are infused with additional capabilities is only going to accelerate as they move beyond text to include images and video. It’s only a matter of time before senior leaders of organizations start asking cybersecurity professionals to assess the level of business risk these platforms actually represent. Cybersecurity teams need to focus not just on what these platforms can do today: these models will soon be able to work with smaller sets of data that not only require less IT infrastructure to train but can also be optimized for specific use cases within vertical industry segments.
As William Gibson once famously noted, ‘The future is already here – it’s just not evenly distributed.’ Nevertheless, the likely trajectory of these platforms, and its eventual outcome, can already be surmised.
By Mike Vizard
This article originally appeared on Journey Notes, the Barracuda blog.