
An Explanation of the Guidelines for Secure AI System Development

Published 02/28/2024


Originally published by Schellman & Co.


Recently, the UK National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), along with other agencies from around the world, released the Guidelines for Secure AI System Development.

Released in November 2023, the document is meant to aid in the development of secure artificial intelligence (AI) systems. AI use continues to grow within both the public and private sectors, and these guidelines will help the developers of existing and new AI systems reduce risk before security issues arise.

As cybersecurity experts, we have familiarized ourselves with the document, which breaks its suggested process into four stages. In this blog post, we will detail each of these four stages of secure AI system development explained within the Guidelines so that those considering implementing them can more easily do so.


The Four Stages of Secure AI System Development

The new Guidelines focus on security within four key areas of the AI system development life cycle:

  • Stage 1 – Secure Design
  • Stage 2 – Secure Development
  • Stage 3 – Secure Deployment
  • Stage 4 – Secure Operation and Maintenance


Stage 1 – Secure Design

First things first, secure design must begin with employee security training and an AI risk assessment.

You may already perform general risk assessments regularly, but if you’re developing AI, you must ensure that risks specific to your AI system are added to and considered within those evaluations. Ultimately, all the subsequent decisions you make during AI system development should be based on that AI risk assessment, as well as on regularly performed AI threat modeling.

One of those key decisions you’ll be able to make once you’ve completed that risk assessment and threat modeling is whether to develop the AI system in-house or outsource the project:

Should You Design AI In-House or Outsource?

  • In-house: The benefit of developing an AI system in-house is that you control all aspects of the project, which means less hassle. However, you’ll need to ensure that you and your staff have the proper breadth of experience to develop secure AI systems.
  • Outsourced: If you lack that experience with AI, it may be more beneficial to outsource to someone who has it, as that will help reduce the risk of vulnerabilities being introduced through human error. However, it is essential that you perform proper due diligence before choosing a third party, and anything your chosen third party provides should be properly tested before it is implemented.

The Guidelines also ask you to consider the following during the design stage:

  • What type of data will be used to tune your model; and
  • The quantity of data that will be used to tune your model.
    • (You reduce risk when you use a large amount of quality data, but of course, that will take more time.)


Stage 2 – Secure Development

Once you’ve worked out who will design your AI system, the next stage is putting your ideas into action.

(Again, if you choose to work with an outsourced developer, you must complete proper due diligence before moving into this stage, ensuring that your vendors are trusted and will adhere to your organization’s security standards.)

When embarking on AI development, the Guidelines recommend implementing technical controls within the AI system at this stage, including:

  • Configuring system logs and controls to protect the confidentiality, integrity, and availability of the logs.
  • Documenting any data that was used to train and tune the AI model for future reference in case the system does not perform as expected.
  • Maintaining a baseline version so the system can be rolled back in the future if a compromise occurs (both practices are sketched below).
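
As a concrete illustration of those last two controls, here is a minimal Python sketch for a file-based model artifact. It is not taken from the Guidelines themselves; the function names (sha256_of, snapshot_baseline) and the manifest format are our own illustrative assumptions.

    import hashlib
    import json
    import shutil
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream a file through SHA-256 so large model artifacts fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def snapshot_baseline(model_path: Path, dataset_paths: list[Path],
                          baseline_dir: Path) -> Path:
        """Copy the model into a baseline area and write a manifest documenting
        the tuning data plus a hash for later integrity checks and rollback."""
        baseline_dir.mkdir(parents=True, exist_ok=True)
        baseline_model = baseline_dir / model_path.name
        shutil.copy2(model_path, baseline_model)

        manifest = {
            "created_at": datetime.now(timezone.utc).isoformat(),
            "model_file": baseline_model.name,
            "model_sha256": sha256_of(baseline_model),
            # Record which data trained/tuned the model so unexpected
            # behavior can be traced back later.
            "datasets": [
                {"path": str(p), "sha256": sha256_of(p), "bytes": p.stat().st_size}
                for p in dataset_paths
            ],
        }
        manifest_path = baseline_dir / "baseline_manifest.json"
        manifest_path.write_text(json.dumps(manifest, indent=2))
        return manifest_path

A manifest like this gives you both a documentation trail for your training and tuning data and a cryptographic fingerprint of the baseline model to verify against, or roll back to, later.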


Stage 3 – Secure Deployment

Once you’ve developed your AI system, you then must take the proper steps before implementing it for use, and that begins with segregating the environments within your organization to ensure that—if your AI is compromised—your other systems will not be affected.

Proper access controls should be put in place, including a few specific controls for AI systems that should be incorporated at this stage:

  • Controls to prevent users from exfiltrating confidential information while interfacing with the system; and
  • Model hashes to maintain the integrity of the AI model (a brief verification sketch follows this list).
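
To illustrate the model-hash control, the sketch below checks a model file against the baseline manifest from the Stage 2 example before it is served. The manifest format and the function name verify_model_integrity are illustrative assumptions, not prescriptions from the Guidelines.

    import hashlib
    import json
    from pathlib import Path

    def verify_model_integrity(model_path: Path, manifest_path: Path) -> None:
        """Refuse to serve a model whose hash no longer matches the baseline."""
        manifest = json.loads(manifest_path.read_text())
        digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
        if digest != manifest["model_sha256"]:
            raise RuntimeError(
                f"Hash mismatch for {model_path}: possible tampering, do not serve."
            )

Running a check like this at deployment time, and on a schedule afterward, means a swapped or corrupted model file fails loudly instead of quietly serving users.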

This is also the point in the process where your organization’s incident response team should be trained on AI-specific incidents and the appropriate response process. Both your team and your process should also be tested regularly using different incident scenarios.


Stage 4 – Secure Operation and Maintenance

Once the AI system is implemented and begins operating, you must continue to monitor it, both to evaluate system performance and to quickly identify any compromise that may arise.

(Should you discover that your system is not performing as expected or has been compromised, you should roll it back to the baseline established in Stage 2, Secure Development; a minimal rollback sketch follows.)
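
As a rough sketch of that rollback, the example below restores the baseline captured in the Stage 2 sketch, verifying the baseline copy against its recorded hash before trusting it. The names and manifest format are assumptions carried over from the earlier sketches, not anything prescribed by the Guidelines.

    import hashlib
    import json
    import shutil
    from pathlib import Path

    def roll_back_to_baseline(live_model: Path, baseline_dir: Path) -> None:
        """Restore the known-good baseline model over the live artifact."""
        manifest = json.loads((baseline_dir / "baseline_manifest.json").read_text())
        baseline_model = baseline_dir / manifest["model_file"]
        # Verify the baseline itself before trusting it for recovery.
        digest = hashlib.sha256(baseline_model.read_bytes()).hexdigest()
        if digest != manifest["model_sha256"]:
            raise RuntimeError("Baseline fails verification; escalate to incident response.")
        shutil.copy2(baseline_model, live_model)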

Regarding the ongoing operations and evolution of your AI system, the guidance also recommends that you:

  • Test any system software updates before their deployment (a simple gating sketch follows this list);
  • Configure all AI systems within your organization to automatically install updates;
  • Document any lessons learned throughout the process so as to improve future projects.
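
For the first of those recommendations, one lightweight pattern is to gate a staged update behind an automated test run before promoting it. The sketch below assumes a pytest-based test suite and simple directory-based staging; both are illustrative choices rather than requirements of the Guidelines.

    import shutil
    import subprocess
    from pathlib import Path

    def promote_update_if_tests_pass(candidate: Path, production: Path) -> bool:
        """Run the test suite against a staged update; promote only on success."""
        tests = subprocess.run(["pytest", "--quiet"], cwd=candidate)
        if tests.returncode != 0:
            return False  # tests failed: leave the current production version alone
        shutil.copytree(candidate, production, dirs_exist_ok=True)
        return True

If the tests fail, the production directory is left untouched, so the currently deployed version keeps running while you investigate.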


Guidelines for Secure AI System Development and Your Compliance

At this point, you may have noted that several of the security practices recommended throughout these four stages of AI system development are ones your organization may have already implemented. Employee security awareness training, maintaining baseline versions of systems, and conducting tabletop exercises of incident response programs are all requirements for compliance with security standards such as SOC or HITRUST.

So, if you’re regularly undergoing those or related compliance assessments, it may not be as daunting a task as it seems to secure the AI system development process using this new guidance.


Moving Forward with AI

That being said, whether you follow these guidelines or not, you must take the proper precautions to secure every step of AI system development, since the consequences of vulnerabilities could include:

  1. Breaches of personal information being used or stored by the system
  2. Unexpected outputs to users of the AI system
  3. AI system bias created through the use of inaccurate data

Following the Guidelines for Secure AI System Development can help, but you also have other options to help secure this new technology. To learn more, check out our other content regarding AI cybersecurity.
