
Practical Ways to Combat Generative AI Security Risks

Blog Article Published: 01/05/2024

Originally published by Astrix.

Written by Idan Gour.

As many have come to realize in the cyber world, all that glitters is not gold. Generative AI, and its ability to automate work processes and boost productivity, is increasingly being used across all business environments. While it’s easy to get wrapped up in the excitement of these tools, like Otter.ai being able to recap a Zoom meeting with a click of a button, we can’t push security and rational thinking to the back burner – the risks to the enterprise are too high. Knowing how to combat the risks these AI tools pose will keep your organization gleaming.


Supply chain risks are around the corner with Generative AI

When it comes to generative AI apps, such as ChatGPT and Jasper.ai, there are two main risks for security leaders to be aware of. The first (and more obvious one) is data sharing. A general good practice here is to be aware of the app’s data retention policies when using a third-party application.

How is your data being used and retained by the solution? Where is your data being moved to and from, and how? During this data sharing process, it’s important to monitor the actual data transfer between your environment and a third party’s environment in order to detect potential security issues. Otherwise, you could end up in a situation like Samsung’s: recently, Samsung employees interacting with ChatGPT shared source code with the tool, leaking the electronics conglomerate’s sensitive data on three separate occasions.

The second area of risk is unverified AI tools – not the ChatGPTs of the world – but the subsection of AI tools that might lead to supply chain exploits. For example, an employee may connect a new AI tool to their Slack environment without considering the security issues that come with such an action, or the permissions the tool receives to a sensitive corporate environment. Unverified AI tools are being created at such a rapid pace that controlling and monitoring them once they are within your business environment is virtually impossible. That’s why employing a policy around connecting these tools to core business environments is crucial.
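Such a connection policy can be partially automated. The sketch below is a minimal, hypothetical gatekeeper for integration requests: the scope names, the approved-vendor flag, and the allowlist are all illustrative assumptions, not tied to Slack’s or any other platform’s actual permission model.

```python
from dataclasses import dataclass

# Illustrative scope allowlists -- names are assumptions, not a real platform's scopes.
ALLOWED_SCOPES = {"channels:read", "chat:write"}
HIGH_RISK_SCOPES = {"files:read", "admin", "users:read.email"}

@dataclass
class ConnectionRequest:
    tool_name: str
    vendor_verified: bool   # e.g., listed in an internal approved-vendor registry
    requested_scopes: set

def review_connection(req: ConnectionRequest) -> tuple[bool, list]:
    """Return (approved, reasons) for a requested AI tool integration."""
    reasons = []
    if not req.vendor_verified:
        reasons.append("vendor is not on the approved list")
    risky = req.requested_scopes & HIGH_RISK_SCOPES
    if risky:
        reasons.append(f"requests high-risk scopes: {sorted(risky)}")
    excess = req.requested_scopes - ALLOWED_SCOPES
    if excess:
        reasons.append(f"requests scopes beyond the baseline: {sorted(excess)}")
    return (not reasons, reasons)
```

A denied request would be routed to security review rather than connected automatically; the point is that "connect first, think later" becomes impossible by default.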


How to stay ahead of attackers when embracing innovation

Since the name of the game is staying ahead, here are five practical ways to prepare for such risks:

  1. Get a full inventory of all of the AI tools within your organization and implement both onboarding and offboarding processes. During onboarding, record each tool in the inventory along with the systems and data it has access to. If a tool isn’t valuable or is dangerous, make sure during the offboarding process that it’s not only disconnected from all parts of the environment but that the vendor also deletes your data.
  2. Internalize to lower risks. Companies should be proactive in deciding on tech providers and if possible, develop internal AI solutions to lower the risk of data breaches. If you can’t internalize these solutions, evaluate the level of sensitivity of the data used in these tools before deciding the type of service to use – publicly available models, local deployment, cloud, etc.
  3. Minimize your attack surface. Ensure that all AI-based tools accessing your core systems have least-privileged access and are monitored daily, and remove unused connections.
  4. Continuously monitor your data. Companies should consider how their data is being used and retained by the solution they are implementing. Track solutions’ behavior so you can detect anomalies as your data is being shared over time.
  5. Practice makes perfect. Test these tools first within your organization before utilizing them for outside consumption. Using an AI tool that you’re not familiar with for a customer-facing initiative could backfire and negatively impact the business.
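Steps 1 and 3 above lend themselves to a simple periodic audit. This is a minimal sketch under assumed field names (`scopes`, `last_used`); real inventories would pull this data from each platform’s integration or OAuth-token APIs.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical inventory record for a connected AI tool.
@dataclass
class AIToolConnection:
    name: str
    scopes: set
    last_used: date

def flag_for_offboarding(inventory, today, stale_after_days=90):
    """Connections unused past the threshold: candidates for disconnection
    and a data-deletion request (step 1)."""
    cutoff = today - timedelta(days=stale_after_days)
    return [c.name for c in inventory if c.last_used < cutoff]

def flag_over_privileged(inventory, baseline_scopes):
    """Connections holding scopes beyond the least-privilege baseline (step 3)."""
    return [c.name for c in inventory if c.scopes - baseline_scopes]
```

Run on a schedule, these two checks turn the inventory from a static list into the continuous monitoring the steps above call for.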


Start securing now, not later

Gartner predicts that by 2025, 30% of enterprises will have implemented an AI-augmented development and testing strategy. Security is always top of mind for any organization. Adding AI tools into the mix certainly raises the bar on maintaining a secure business environment. Setting up precautionary measures today can lay a great foundation to combat the risks posed to your company.
