Rich Stroffolino


White Hats Love Them Some Chatbots

An interesting study on how generative AI tools are being used in security research:

Many of the respondents are already using generative AI in their work, including in automating tasks (50%), analyzing data (48%), identifying vulnerabilities (36%), validating findings (35%) and conducting reconnaissance (35%). The report noted a trend of hackers using AI chatbots to help write reports, with the initial text generated by AI “a good jumping off point.”

Interestingly, researchers are doing this with off-the-shelf chatbots, overwhelmingly ChatGPT, even though Microsoft offers its own Security Copilot model for these kinds of tasks. Clearly, interest in using these tools is outpacing the speed at which organizations can validate them (or at least the more specialized ones). Justin Robert Young made this point on Daily Tech News Show this week: even if chatbots aren't "cleared" for a given use case, people are already using them.