Cloud 101CircleEventsBlog
Master CSA’s Security, Trust, Assurance, and Risk program—download the STAR Prep Kit for essential tools to enhance your assurance!

How Pentesting Fits into AI’s ‘Secure By Design’ Inflection Point

Published 03/18/2024

How Pentesting Fits into AI’s ‘Secure By Design’ Inflection Point

Originally published by Synack.

Written by James Duggan, Solutions Architect, U.K. and Ireland, Synack.


The gamechanging potential of generative AI technology has caught the eye of attackers and defenders in the cybersecurity arena. While it’s unclear how the threat landscape will evolve with the growing slate of machine learning tools and large language models, a few AI security best practices have already snapped into focus.

It’s clear rigorous pentesting will play a major role in securing our AI-enabled digital future.

Most recently, the U.K. government joined the U.S. and several international partners to highlight how “secure by design” principles should be adopted to avoid AI-specific threats.

“For the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way,” the U.K. National Cyber Security Centre (NCSC) and the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said in new guidelines for secure AI system development released Sunday.

The guidelines cite “red teaming” as a prerequisite for releasing AI tools responsibly. If AI technology isn’t vetted for potential security flaws, the thinking goes, it shouldn’t be publicly released, where it could wreak havoc if abused by attackers.

“Where system compromise could lead to tangible or widespread physical or reputational damage, significant loss of business operations, leakage of sensitive or confidential information and/or legal implications, AI cyber security risks should be treated as critical,” warned the NCSC, CISA and organizations from 16 other countries from Chile to Israel. The joint document goes on to urge AI system providers to “protect your model continuously” lest flaws creep in at some later date.

As generative AI branches off in new directions, seeing adoption in areas like healthcare and national defense, the stakes are too high to allow cracks to appear in its foundations. But the work can’t stop with a “secure by design” approach. Nor does it rest solely with the tech companies building the most critical AI systems.

Any organization deploying AI technology to help drive its mission should take stock of its security practices. Are you clearing the way for security researchers to report vulnerabilities? Are you ready to “take action to mitigate and remediate issues quickly and appropriately,” as the new U.K.-led guidelines recommend? The security of your AI deployment could count on it.


Pentesting for Safer AI Systems

The AI security guidelines build on a series of government actions aimed at addressing emerging AI-specific cyber challenges. The U.K. is playing a leading role, having hosted the world’s first AI safety summit earlier this month at the historic Bletchley Park, which housed GCHQ and its famous codebreakers during WWII.

The resulting AI declaration signed by 29 countries drew special attention to cybersecurity concerns and called for “evaluations” of certain high-risk, frontier AI systems. And it came on the heels of a “sweeping” AI executive order from the White House that similarly urged “extensive red-team testing” for AI systems.

“Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent,” the Bletchley Declaration said.

An effective, AI-aware security testing program can pick up on many of these potential risks and help teams patch vulnerabilities before they become breaches. Testing should support OWASP Top 10 security vulnerabilities for large language model applications. And cutting-edge AI tools should be integrated to drive efficiencies and improve security outcomes for our customers.

AI will have a profound impact on the field of pentesting. But for now, it’s essential for human pentesters – to give AI technology the support it needs to head off attackers and continue to grow.

Share this content on your favorite social network today!