Gen AI is transforming the cyber threat landscape by democratizing vulnerability hunting

Niță said that he uses LLMs to research certain topics or to generate payloads for brute-forcing, but in his experience the models have not been a game changer when it comes to identifying certain types of flaws.

“With the current state of AI, we can sometimes produce effective and useful exploits or payload variations to bypass detection rules,” he said. “However, due to the potential for hallucinations and inaccuracies, it is not as reliable as one would hope. While this may improve over time, for now manual work remains more reliable and effective for most people, especially for complex tasks where precision is important.”

Despite the clear limitations, many vulnerability researchers find LLMs valuable, using them to accelerate vulnerability discovery, to help write exploits, to rewrite malicious payloads to evade detection, and to propose new attack methods and tactics, with varying degrees of success. They can even automate the creation of vulnerability disclosure reports – a time-consuming task that researchers often dislike.

Of course, malicious actors can use these tools as well. It is difficult to determine whether an exploit or payload found in the wild was written with an LLM, but researchers have noted cases of attackers who clearly put LLMs to work.

In February, Microsoft and OpenAI released a report highlighting how some well-known APT groups were using LLMs. Among the TTPs identified were LLM-informed reconnaissance, LLM-enhanced scripting techniques, LLM-enhanced anomaly detection evasion, and LLM-assisted vulnerability research. It is safe to assume that the adoption of LLMs and generative AI among threat actors has grown since then, and organizations and security teams should strive to keep pace by using these tools themselves.