Microsoft's top new security tool wants to help keep your shiny new generative AI systems safe for good
Red teaming generative AI tools just got a lot easier thanks to Microsoft
Microsoft has unveiled a new security tool aimed at keeping generative AI tools secure and safe to use.
PyRIT, short for Python Risk Identification Toolkit for generative AI, is designed to help developers respond to the growing threats facing businesses of all sizes from criminals looking to take advantage of new tactics.
As many of you already know, generative AI tools such as ChatGPT are being used by cybercriminals to quickly write malware code, generate (and proofread) phishing emails, and more.
Manual work still needed
Developers of these tools responded by changing how they handle different prompts and by somewhat limiting their capabilities, and Microsoft has now decided to take things a step further.
Over the past year, the company red teamed “several high-value generative AI systems” before they hit the market, and during that time, it started building one-off scripts. “As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful,” Microsoft explained. “Today, PyRIT is a reliable tool in the Microsoft AI Red Team’s arsenal.”
The Redmond software giant also stresses that PyRIT is by no means a replacement for manual red teaming of generative AI systems. Instead, the company hopes other red teams can use the tool to eliminate tedious tasks and speed things up.
“PyRIT shines light on the hot spots of where the risk could be, which the security professional can then incisively explore,” Microsoft further explains. “The security professional is always in control of the strategy and execution of the AI red team operation, and PyRIT provides the automation code to take the initial dataset of harmful prompts provided by the security professional, then uses the LLM endpoint to generate more harmful prompts.”
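To make that workflow a little more concrete, here is a minimal, hypothetical sketch of the kind of step Microsoft is describing: a seed list of harmful prompts, supplied by the security professional, is expanded by asking an LLM endpoint for variations. The helper names (llm_complete, generate_variants) are illustrative placeholders, not PyRIT's actual API.

```python
# Hypothetical sketch of the prompt-expansion step described above.
# llm_complete() is a placeholder for a call to an LLM endpoint; it is not PyRIT's API.

def llm_complete(prompt: str) -> str:
    """Placeholder: swap in a real call to your own model endpoint."""
    return f"[model-generated variant of] {prompt.splitlines()[-1]}"

def generate_variants(seed_prompts: list[str], per_seed: int = 3) -> list[str]:
    """Expand the red team's seed prompts into a larger candidate set."""
    variants = []
    for seed in seed_prompts:
        for _ in range(per_seed):
            variants.append(llm_complete(
                "Rephrase the following request in a different style, "
                "keeping its intent unchanged:\n" + seed
            ))
    return variants

# The security professional supplies the initial dataset of harmful prompts.
seeds = ["<harmful prompt provided by the red team>"]
candidates = generate_variants(seeds)
print(len(candidates))  # three machine-generated variations per seed
```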
The tool is also adaptable, Microsoft stresses, as it's capable of changing its tactics depending on the generative AI system's response to previous queries. It then generates the next input and continues the loop until the red team members are happy with the results.
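That adaptive loop can be pictured roughly as follows. Again, this is an illustrative sketch rather than PyRIT's real interface: send_to_target, score_response and mutate_prompt are hypothetical stand-ins for the target endpoint, a scoring step and an attack-strategy step.

```python
# Illustrative sketch of the adaptive loop (hypothetical helpers, not PyRIT's API):
# each response from the target system informs the next attack prompt.

def send_to_target(prompt: str) -> str:
    """Stand-in for querying the generative AI system under test."""
    return f"target response to: {prompt}"

def score_response(response: str) -> float:
    """Stand-in for scoring how close a response is to the unwanted behaviour."""
    return 0.0  # a real scorer might use a classifier or another LLM

def mutate_prompt(prompt: str, response: str) -> str:
    """Stand-in for changing tactics based on the previous response."""
    return prompt + " (rephrased after the last response)"

def red_team_loop(seed_prompt: str, threshold: float = 0.8, max_turns: int = 10):
    prompt, response, score = seed_prompt, "", 0.0
    for _ in range(max_turns):
        response = send_to_target(prompt)
        score = score_response(response)
        if score >= threshold:  # the red team is happy with the result
            break
        prompt = mutate_prompt(prompt, response)  # change tactics and try again
    return prompt, response, score
```

In practice, the scoring and mutation steps are where the hard judgment calls live, which is why Microsoft stresses that the security professional stays in control of strategy while the tool automates the repetitive loop.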