AI & Software Security: How to Implement AI Responsibly and Successfully

Published 02/21/2024

Originally published by ArmorCode.

Generative AI (GenAI) dominated the technology landscape in 2023, prompting many technology companies to formulate an AI strategy – from adopting AI-enabled tools for performance and productivity gains to developing and building upon large language models (LLMs) to compete. Gartner's Predicts 2024: AI & Cybersecurity - Turning Disruption Into an Opportunity provides compelling insights and recommendations for organizations to consider, particularly around the security implications—positive and negative—of AI.

On one hand, GenAI has emerged as a beacon of hope to address the persistent skills shortage and widening gap between security challenges and the resources and capacity to address them. On the other hand, GenAI introduces additional complexity and risks into environments where security teams already struggle to keep pace. However, organizations that successfully navigate AI adoption—and stay ahead of new AI threats—have the opportunity to reduce business risk, lower costs, and increase productivity.

In this article, we'll explore causes for caution, cases for optimism, and recommendations for building a strategy for successful and responsible AI adoption.


3 Causes for Caution: AI & Software Security Risks and Costs

Software flaws and vulnerabilities are created, discovered, and disseminated faster than they can be fixed. The ability to create and distribute software - including insecure software - has increased exponentially, while the capacity to remediate and secure software has only improved incrementally. While GenAI has the potential to help security teams increase remediation capacity, the immediate reality is that GenAI tools introduce new risks and layers of complexity that require additional security effort and spending.

At least initially, it seems new costs will outweigh security efficiencies. According to Gartner, “Through 2025, generative AI will cause a spike of cybersecurity resources required to secure it, causing more than a 15% incremental spend on application and data security.” Amid the hype and rush to adopt AI, here are three cybersecurity cautions to temper that excitement and understand the realities.

  1. AI capabilities are immature - especially from a security perspective. In 2023, cybersecurity vendors primarily used AI to add natural language interactions, particularly on the SecOps front. These fall short of the hype and promise of automation and can prove a distraction. For software development, most of the GenAI conversation has revolved around coding assistants, and here too there is cause for caution. Code generated by or in collaboration with generalist LLMs (i.e., models trained on code broadly rather than tuned for security tasks) is prone to the same errors and insecurities as the code those models were trained on; the sketch after this list shows a typical example. While tools may emerge, evolve, and improve to generate secure code, in the immediate term GenAI can accelerate the production of insecure code and further widen the security gap.
  2. AI introduces new threats and layers of risk. Every technological leap introduces new challenges; we saw this pattern with the proliferation of open-source software and cloud-native development, and we are seeing it again with AI. The OWASP Top 10 for Large Language Model Applications captures many of these threats and the additional AI attack surfaces security teams need to manage. Beyond adding risk to application development and management, AI also gives adversaries a new weapon to develop malicious scripts, escalate attacks, and exploit novel attack vectors like prompt injection and model poisoning.
  3. Organizations face additional costs to address AI risk. As noted above, adopters of AI face new risks that require additional security coverage and practices. These range from validating the output of GenAI tools to avoid inaccurate or illegal results (for example, copyright infringement or references to fabricated data), to preventing data leaks and breaches of confidentiality when information is exposed to external LLMs, to weaving AI Trust, Risk, and Security Management (TRiSM) into AppSec practices. For most early adopters of GenAI, the expenditures required to manage these risks will likely outweigh the productivity and cost-saving gains.
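
To make the first caution concrete, below is a minimal sketch of the kind of flaw a generalist coding assistant can happily reproduce from its training data: string-built SQL that is vulnerable to injection, shown next to the parameterized version a security review would require. The function and table names are hypothetical, chosen only for illustration.

```python
import sqlite3

# INSECURE: the string-built SQL pattern a generalist assistant often
# reproduces. User input is interpolated directly into the query, so
# input like "' OR '1'='1" changes the query's logic (SQL injection).
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# SECURE: the parameterized equivalent. The driver binds the value
# separately from the SQL text, so the input is treated as data,
# not as query syntax.
def find_user_secure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both versions compile and return correct results for benign input, which is exactly why generated code can sail through a quick review while still embedding a classic vulnerability.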

When assessing your AI strategy for 2024 and beyond, it is important to sift through the hype and weigh the value and benefits against the risks and costs of AI adoption. Awareness of these potential pitfalls can help you navigate to more cost-effective and secure implementations of AI. So where are those opportunities, and where are we seeing early signs that AI can deliver on its promises?


3 Cases for Optimism: Securing AI Applications and Looking to the Next Generation of AI

Amidst the cautions, there are glimpses of success. Specialized AI applications can augment employees with security skills, and emerging solutions for securing AI applications will help manage new risks. There are also promising signs that multimodal, multiagent, and composite AI could deliver powerful capabilities. Here are three cases for optimism about the near future and the next generation of AI capabilities:

  1. AI can augment humans to improve security outcomes. According to Gartner, “Cybersecurity leaders focusing on human augmentation will achieve better results than those jumping too quickly on solutions promising full automation.” AI-assisted remediation for insecure code is a good example. In isolation, “auto-remediation” capabilities do not account for performance, reliability, and other code quality factors; meanwhile, developers may lack secure coding knowledge and struggle to adapt generic remediation examples to their specific code. In tandem, developers using AI-assisted remediation - particularly tools that present multiple coding options, as sketched after this list - retain control of their code while being relieved of the burden of writing the secure fix from scratch.
  2. Specialized solutions help manage risk. Two categories of solutions are emerging to help tackle AI risk. The first is AI TRiSM solutions, which extend security and risk management coverage to the AI attack surface. The second is specialized AI security solutions: in contrast to generalist AI tools trained on massive quantities of data, these are trained on highly curated datasets to excel at a narrower, more focused task. Both can help manage risk; however, they add further layers of complexity to the security ecosystem. The proliferation of tools covering everything from code security and dependencies to cloud infrastructure and containers already leaves organizations struggling to manage complexity, so it is important to consider how these tools fit into your broader ecosystem to deliver practical benefits.
  3. Multiagent AI improves security and risk management. Between AI-augmented human enablement and specialized solutions, there is evidence the next generation of AI solutions will revolve around facilitating collaboration among people and intelligent solutions in what Gartner calls Multiagent Systems (MAS) - a “type of AI systems composed of multiple, independent but interactive agents.” This could have a big impact on security and risk management: Gartner predicts the use of multiagent AI in threat detection and incident response will reach 70% by 2028, primarily to augment staff.
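
As a hedged illustration of the human-augmentation point above, the sketch below shows one plausible shape for an AI-assisted remediation workflow: the tool proposes several candidate fixes, and the developer, who knows the codebase, makes the final call. Here suggest_fixes is a hypothetical stand-in for a security-tuned model or remediation service, not a real API.

```python
from dataclasses import dataclass

@dataclass
class FixCandidate:
    description: str  # what the fix changes and why
    patch: str        # the proposed replacement code

# Hypothetical stand-in: a real tool would call a security-tuned model
# or remediation service here and return its candidate patches.
def suggest_fixes(finding_id: str, snippet: str) -> list[FixCandidate]:
    return [
        FixCandidate("Parameterize the SQL query",
                     "query = 'SELECT ... WHERE username = ?'"),
        FixCandidate("Validate input against an allowlist first",
                     "if not USERNAME_RE.match(username): raise ValueError"),
    ]

def remediate_interactively(finding_id: str, snippet: str) -> FixCandidate:
    """Present multiple options; the developer, not the tool, decides."""
    candidates = suggest_fixes(finding_id, snippet)
    for i, candidate in enumerate(candidates):
        print(f"[{i}] {candidate.description}")
    choice = int(input("Apply which fix? "))  # human stays in the loop
    return candidates[choice]
```

The design choice worth noting is the explicit selection step: full automation would apply a patch unreviewed, while this shape keeps accountability for code quality with the developer.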

Near-term decisions and plans should account for immediate AI risk management needs while also considering future capabilities. Among these are strategies that incorporate security across the software development ecosystem and facilitate collaboration between employees and MAS solutions. Next, we will cover recommendations you can start implementing now to position your organization for successful AI adoption.


3 Recommendations: How to Lead Responsible and Successful AI Adoption

Organizations—and particularly security leaders—face difficult decisions around AI adoption. Moving too fast can introduce significant risks and costs without commensurate benefits; however, laggards may struggle to integrate and implement future generations of AI capabilities and find themselves at a disadvantage. Ultimately, applying risk management principles will guide leaders to responsible and successful implementations of AI. Here are three recommendations:

  1. Learn from the history of open-source and cloud-native software. While GenAI seems novel, it follows a familiar pattern: excitement and adoption initially outpace security concerns, and awareness of risk and disillusionment quickly follow. Anticipate threats and proceed responsibly. Understand the GenAI attack surface and implement AI policies and practices like TRiSM to prevent the proliferation of unmanaged risk within the organization.
  2. Prepare your ecosystem and take a multi-year approach to manage AI complexity and leverage next-gen solutions. GenAI tools and TRiSM further complicate the security landscape. Application Security Posture Management (ASPM) and Risk-Based Vulnerability Management (RBVM) solutions streamline the adoption and risk-based management of additional security tools, following Gartner’s advice to start AI security initiatives with application security. They also create an architecture that can support MAS solutions that augment teams with AI-enabled capabilities.
  3. Leverage data to prioritize AI initiatives and measure performance. AI adoption will come with additional costs, so organizations need relevant metrics to assess GenAI costs - both direct and indirect - against the benefits. If you have not already done so, invest in baseline visibility to prioritize where to direct your AI investments, and then measure and track outcomes over time; the sketch after this list shows one simple baseline metric.
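
As one hedged example of the baseline measurement recommended above, the sketch below computes mean time to remediate (MTTR) from finding records. The records and field names are hypothetical; in practice this data would come from your ASPM, RBVM, or ticketing system.

```python
from datetime import datetime
from statistics import mean

# Hypothetical finding records; real data would come from your ASPM,
# RBVM, or ticketing system.
findings = [
    {"opened": datetime(2024, 1, 3), "closed": datetime(2024, 1, 18), "severity": "high"},
    {"opened": datetime(2024, 1, 5), "closed": datetime(2024, 2, 9), "severity": "high"},
    {"opened": datetime(2024, 1, 10), "closed": None, "severity": "medium"},  # still open
]

def mttr_days(records: list[dict], severity: str) -> float:
    """Mean days from open to close for resolved findings of a given severity."""
    durations = [
        (r["closed"] - r["opened"]).days
        for r in records
        if r["severity"] == severity and r["closed"] is not None
    ]
    return mean(durations) if durations else float("nan")

# Capture this baseline before rolling out GenAI tooling, then re-measure
# to see whether remediation capacity actually improved.
print(f"High-severity MTTR: {mttr_days(findings, 'high'):.1f} days")
```

Tracked over time and alongside direct tooling costs, even a simple metric like this shows whether GenAI investments are closing the remediation gap or merely adding spend.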

The most important advice is to educate yourself and stay informed. The AI technology and threat landscape is evolving rapidly, but collecting insights and predictions from reliable sources will help you make better decisions today and plan for the future.
