
Biden’s “Sweeping” AI Executive Order is Here. Is the Cybersecurity Industry Ready?

Blog Article Published: 12/08/2023

Originally published by Synack on October 31, 2023.

Written by Katie Bowen, Vice President, Public Sector, Synack.

President Biden made his biggest move yet on artificial intelligence this week, issuing an executive order that trains the full scope of the administration’s authority on emerging risks posed by the technology.

The White House has billed the order as “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.” That may be true for the Oval Office, but the private sector – including the defense industrial base – will need to take similarly sweeping action to successfully head off AI-related security breaches.

That means companies with a stake in our AI-driven future must kick their own cybersecurity efforts into high gear to keep pace with shifting security requirements.

Human-led, continuous security testing of AI technology is a great (and necessary) place to start. The White House recognizes this: the Biden administration is directing the National Institute of Standards and Technology to set “rigorous standards for extensive red-team testing” to ensure AI systems can be trusted before and after they are released. The Department of Homeland Security will apply those testing standards to critical infrastructure sectors like energy and financial services, according to a fact sheet accompanying the order. This directive underscores the Joint Cyber Defense Collaborative’s 2023 Planning Agenda, which aims to deepen relationships with critical infrastructure sectors such as energy.

Additionally, the order establishes an “advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.” As the technology industry shifts to a “Secure By Design, Secure By Default” stance, AI will aid in uncovering and triaging vulnerabilities and will enable organizations to develop more comprehensive vulnerability management programs.

The need for human-led testing reflects a paradox of AI security: To clear the way for the next generation of AI technologies – including tools capable of automatically finding critical security vulnerabilities – human pentesters have to help. No AI tool has replaced the creativity and ingenuity of expert security researchers. In fact, AI technology has so far introduced whole new categories of security flaws for many organizations, as underscored by the first version of the OWASP Top 10 for Large Language Model (LLM) Applications. At a symposium held for the cybersecurity community in Washington, D.C. earlier this month, we heard loud and clear about both the excitement and the challenges this area presents for industry and government.

That’s not to say AI has no place in security testing. Red team members are already leveraging AI systems to detect certain types of vulnerabilities, like basic SQL injection (SQLi) flaws, allowing strong pentesters to become even stronger.
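To make that concrete, below is a minimal sketch of the kind of routine check such tooling can automate: error-based SQLi probing of a single query parameter. The target URL, parameter name, payloads, and error signatures are illustrative assumptions, not details from any real engagement, and a real test requires explicit authorization and far broader coverage.

```python
# Minimal sketch: error-based SQL injection probing of one query
# parameter. Target and parameter are hypothetical placeholders.
import requests

TARGET = "https://example.com/search"  # hypothetical, authorized target
PARAM = "q"                            # hypothetical query parameter

# Classic error-provoking payloads a basic SQLi check might try.
PAYLOADS = ["'", "\"", "' OR '1'='1", "1; --"]

# Error strings (lowercased) that commonly leak from unhandled SQL errors.
ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # SQL Server
    "pg_query():",                            # PostgreSQL via PHP
    "sqlite3.operationalerror",               # SQLite
]

def probe(url: str, param: str) -> list[str]:
    """Return the payloads whose responses contain SQL error signatures."""
    hits = []
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=10)
        body = resp.text.lower()
        if any(sig in body for sig in ERROR_SIGNATURES):
            hits.append(payload)
    return hits

if __name__ == "__main__":
    for payload in probe(TARGET, PARAM):
        print(f"Possible SQLi: parameter '{PARAM}' with payload {payload!r}")
```

An AI-assisted scanner generalizes this pattern across many parameters and payload families at machine speed, which is exactly why pairing it with human researchers, who chase the logic flaws automation misses, is so effective.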


Maybe Just the Beginning for AI

Still, we may be years away from fully harnessing what the White House has described as “AI’s potentially game-changing cyber capabilities to make software and networks more secure.”

The Biden administration faces the unenviable task of putting guardrails on AI while not stifling that kind of “game-changing” innovation. The order covers a wide range of AI topics that extend well beyond the security testing space, and it remains to be seen how it will impact everything from civil rights to government agencies’ AI procurement. It’s clear that responsible AI initiatives, including those spearheaded by the Department of Defense’s Chief Digital and Artificial Intelligence Office, played a foundational role in informing this presidential action.

In the meantime, this week’s order is a welcome step toward strengthening the privacy and security safeguards surrounding AI. It builds on the administration’s AI Cyber Challenge, a DARPA-led initiative unveiled earlier this year to drive automated and scalable AI software security solutions with some $20 million in prizes. It also comes on the heels of voluntary commitments from 15 leading AI companies to improve the security of their software products before releasing them publicly, among other steps.
