AI and Compliance for the Mid-Market
Published 01/17/2025
Originally published by Scrut Automation.
Written by Jayesh Gadewar.
Over the past year and a half, artificial intelligence (AI) has been impossible to ignore—and with good reason. Beyond the broader business implications, AI has the potential to accelerate cybersecurity and compliance efforts across organizations of all sizes.
However, small and medium-sized businesses (SMBs) must approach this new technology with caution. Deploying AI securely and responsibly requires a structured approach that incorporates cybersecurity best practices, complies with existing privacy regulations, and addresses emerging AI-specific requirements.
In this post, we’ll cover the top considerations for SMBs when implementing AI-powered tools and technologies.
Cybersecurity standards
AI can improve cybersecurity while also introducing potentially novel risks. Foremost among these is unintended training. Because tools like ChatGPT train on inputs by default, it is possible to accidentally expose sensitive information to other users.
Similarly, prompt injection attacks against Large Language Models (LLMs) have the potential to cause serious damage if not mitigated properly.
That is why SMBs looking to leverage AI should consider some emerging standards and resources in the space such as:
- OWASP Top 10 for LLMs: Put together by the Open Web Application Security Project (OWASP), this lists the top 10 identified vulnerabilities of certain AI systems and provides recommendations to remediate them. The accompanying security and governance checklist is another resource that security and business teams can use when developing their approach.
- MITRE ATLAS: The MITRE Corporation is a non-profit funded by the United States government to help develop techniques and technologies that solve national-level challenges. They developed the Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) as part of their cybersecurity efforts. This is a structured catalog of potential AI-related risks along with a wealth of case studies.
- U.S. CISA and UK NCSC guidance: In late 2023, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom National Cyber Security Centre (NCSC) released joint guidelines for secure AI system development. The guidelines provide best practices for designing, developing, deploying, operating, and maintaining AI applications and systems.
Frameworks like these can be quite helpful in enumerating risks and prioritizing development efforts. However, SMBs generally benefit from customizing these approaches to their specific needs, use cases, and business operations.
Regulatory requirements
AI can create novel privacy challenges because it processes large volumes of data. Some systems can even generate personal data through inference, making it harder to ensure privacy. This type of sensitive data generation can make compliance with existing frameworks and regulations challenging, demanding new approaches. We’ll look at some of the key challenges below.
- European Union (EU) General Data Protection Regulation (GDPR): Although the GDPR has been in effect for almost six years, its interpretation and application continue to evolve. Complying with its requirements to track and regulate sub-processors will become increasingly challenging as AI tools proliferate through software supply chains. Similarly, tackling sensitive data generation may require techniques such as machine unlearning. This evolving technique can potentially prevent AI models from reproducing personally identifiable information.
- EU AI Act: Passed in early 2024, the EU AI Act will greatly impact the types of tools and techniques SMBs can use when they operate in the EU or interact with its citizens.
- California Consumer Privacy Act (CCPA): Much like the GDPR that inspired it, the CCPA and subsequent California Privacy Rights Act (CPRA) are also shifting the compliance landscape. As enforcement actions continue, SMBs would be well-advised to monitor developments related to AI. California has also proposed rules on automated decision-making technologies, which are likely to impact many businesses if and when they come into force.
- New York City Local Law 144: Passed at the individual metropolitan level, this statute prohibits employers from using automated employment decision tools unless they conduct a bias audit and provide required notices. While implementation is still ongoing, New York’s status as the global financial capital means this rule is likely to have significant reach.
External certifications
As organizations become increasingly aware of the potential impacts of AI, understanding and auditing its use throughout supply chains is becoming vitally important. Whether from a cybersecurity perspective or a broader operational one, both existing and new standards are addressing these concerns.
- ISO/IEC 42001: Released at the end of 2023, ISO/IEC 42001 lays out how to develop an Artificial Intelligence Management System (AIMS). In addition to cybersecurity issues, it also touches on effective governance, explainability, and data integrity. While not many companies have achieved this standard yet, it is bound to grow in popularity as organizations seek external attestation regarding their practices and procedures related to AI.
- ISO/IEC 5338: Also released in 2023, this standard is focused more on the development and lifecycle management of AI systems. For organizations developing artificial intelligence products themselves, this might be an interesting standard to look at.
- ISO/IEC 27001: Updated in 2022, the global standard for information security will be
highly applicable to those using AI. Companies pursuing or maintaining this standard
will need to carefully consider the implications of such systems on:
- Incident response
- Decommissioning procedures
- Third-party risk management
- ISO/IEC CD 27090 and WD 27091: Building upon the ISO 27001 standard, these documents (still under review as of early 2024) will provide specific guidance for organizations seeking to enhance their information security and privacy programs, respectively, while leveraging AI.
- SOC 2: The “gold standard” for business-to-business security for companies
operating in North America, SOC 2 does not have any AI-specific provisions as of the
standard’s 2022 update. That being said, there are several areas where AI intersects
with SOC 2 requirements, including the need to:
- Protect confidentiality against threats like prompt injection
- Prevent data poisoning and corrupted model seeding
- Manage risks across the software supply chain
Conclusion
As AI becomes more integrated into business operations, mid-market companies must adopt
a strategic approach to harness its potential while mitigating the associated risks. From
cybersecurity vulnerabilities to evolving regulatory requirements, SMBs have a lot to
consider when deploying AI solutions.
By leveraging existing frameworks and certifications, businesses can stay ahead of the curve, ensuring both compliance and innovation as they navigate this rapidly evolving landscape. With the right precautions in place, AI can become a powerful tool for growth and security.
Unlock Cloud Security Insights
Subscribe to our newsletter for the latest expert trends and updates
Related Articles:
Mitigating GenAI Risks in SaaS Applications
Published: 02/28/2025
How is AI Strengthening Zero Trust?
Published: 02/27/2025
The ISAC Advantage for Collective Threat Intelligence
Published: 02/27/2025