With GPT-4 released this week, security teams have been left to speculate about the impact that generative AI will have on the threat landscape. While many now know that GPT-3 can be used to generate malware and ransomware code, GPT-4 is reportedly far more capable, creating the potential for a significant uptick in threats.
However, while the long-term implications of generative AI remain to be seen, new research released today by cybersecurity vendor Sophos suggests that security teams can use GPT-3 to help defend against cyberattacks.
Sophos researchers — including Sophos AI’s principal data scientist Younghoo Lee — used GPT-3’s large language models to develop a natural language query interface for searching for malicious activity across XDR security tool telemetry, detect spam emails and analyze potential covert “living off the land” binary command lines.
More broadly, Sophos' research indicates that generative AI has an important role to play in processing security events in the SOC, so that defenders can better manage their workloads and detect threats faster.
Identifying malicious activity
The announcement comes as more and more security teams are struggling to keep up with the volume of alerts generated by tools across the network, with 70% of SOC teams reporting that their home lives are being emotionally impacted by their work managing IT threat alerts.
“One of the growing concerns within security operation centers is the sheer amount of ‘noise’ coming in,” said Sean Gallagher, senior threat researcher at Sophos. “There are just too many notifications and detections to sort through, and many companies are dealing with limited resources. We’ve proved that, with something like GPT-3, we can simplify certain labor-intensive processes and give back valuable time to defenders.”
Sophos’ pilot demonstrates that security teams can use “few-shot learning” to train the GPT-3 language model with just a handful of data samples, without the need to collect and process large volumes of pre-classified data.
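To illustrate the idea, here is a minimal sketch of what a few-shot prompt for spam detection could look like. The example emails, labels and prompt wording are invented for illustration; they are not Sophos' actual data or prompts, and sending the prompt to a model is left as a commented-out step.

```python
# Hypothetical few-shot spam classification prompt for a GPT-3-style
# completions model. The labeled examples below are illustrative only.

FEW_SHOT_EXAMPLES = [
    ("Your invoice #4821 is attached for last month's services.", "ham"),
    ("URGENT: verify your account now or it will be suspended!", "spam"),
    ("Meeting moved to 3pm, see updated calendar invite.", "ham"),
    ("You have won a $1,000 gift card, click here to claim.", "spam"),
]

def build_spam_prompt(email_text: str) -> str:
    """Assemble a few-shot prompt: a short instruction, a handful of
    labeled examples, then the new email awaiting a label."""
    lines = ["Classify each email as spam or ham.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Email: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Email: {email_text}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_spam_prompt("Claim your free prize before midnight!")
# The assembled prompt would then be sent to a completions endpoint,
# which completes the final "Label:" line with "spam" or "ham".
print(prompt)
```

The point of the few-shot approach is visible in the structure: four labeled samples stand in for a full training set, so no large pre-classified corpus is needed.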
Using ChatGPT as a cybersecurity co-pilot
In the study, researchers deployed a natural language query interface where a security analyst could filter the data collected by security tools for malicious activity by entering queries in plain text English.
For instance, the user could enter a command such as “show me all processes that were named powershell.exe and executed by the root user” and generate XDR-SQL queries from it without needing to understand the underlying database structure.
This approach provides defenders with the ability to filter for data without needing to use programming languages like SQL, while offering a “co-pilot” to help reduce the burden of searching for threat data manually.
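The natural-language-to-SQL idea can be sketched with the same few-shot pattern: pair plain-English requests with example SQL queries, then append the analyst's new request. The table and column names below (`process_events`, `process_name`, `username`) are hypothetical stand-ins, not Sophos' actual telemetry schema.

```python
# Hypothetical few-shot prompt for translating plain-English analyst
# requests into SQL. Schema names are invented for illustration.

NL_SQL_EXAMPLES = [
    ("show me all processes named powershell.exe",
     "SELECT * FROM process_events WHERE process_name = 'powershell.exe';"),
    ("list processes executed by the root user",
     "SELECT * FROM process_events WHERE username = 'root';"),
]

def build_sql_prompt(question: str) -> str:
    """Few-shot prompt: English/SQL pairs, then the analyst's new request."""
    lines = ["Translate each request into SQL.", ""]
    for q, sql in NL_SQL_EXAMPLES:
        lines.append(f"Request: {q}")
        lines.append(f"SQL: {sql}")
        lines.append("")
    lines.append(f"Request: {question}")
    lines.append("SQL:")
    return "\n".join(lines)

prompt = build_sql_prompt(
    "show me all processes named powershell.exe executed by the root user"
)
# A completion model fed this prompt would be expected to combine the two
# example filters into a single WHERE clause.
print(prompt)
```

Because the examples carry the schema knowledge, the analyst never has to write SQL or know the table layout, which is the “co-pilot” effect the research describes.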
“We are already working on incorporating some of the prototypes into our products, and we’ve made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analysis environments,” said Gallagher. “In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts.”
It’s worth noting that researchers also found that using GPT-3 to filter threat data was much more efficient than using alternative machine learning models. Given the release of GPT-4 and its superior processing capabilities, this would likely be even faster with the next iteration of generative AI.
While these pilots remain in their infancy, Sophos has released the results of the spam filtering and command line analysis tests on SophosAI’s GitHub page for other organizations to adapt.