Why the Cloud Security Alliance Needs to Help Secure AI (And You Do, Too)
Blog Article Published: 04/24/2023
When I frame a very big technology trend, I have a somewhat annoying habit of paraphrasing a quote that revolutionary Leon Trotsky may or may not have ever said. In this case it goes:
You may not be interested in artificial intelligence, but artificial intelligence is interested in you.
Artificial intelligence (AI) is one of the world’s hottest topics right now, due to one specific generative AI offering, ChatGPT. The way it is becoming pervasive brings to mind Ernest Hemingway’s line about how bankruptcy happens:
Gradually, then suddenly.
There is a tremendous number of use cases for ChatGPT and its APIs, which are spreading like wildfire: taking fast food orders, passing the bar exam, and starring in an episode of South Park. There are also examples of it being flat wrong. For example, its retelling of my biography has a few embellishments. All of the above is creating excitement, fear, curiosity, and more fear. We even have a group of experts, including Elon Musk, calling for a pause in AI development. A great many corporate and national prohibitions are being put in place as well.
My view is that generative AI is a powerful technology full of possibilities. I don’t agree with the idea of a development pause, for the simple reason that someone is going to cheat and attempt to gain a competitive edge or at least catch up. Also, while I think it is impressive, it is not “Matrix” take-over-the-world impressive. My solution? Let’s work very hard on developing a body of best practices to govern the usage of AI today and let the developers of AI use those experiences to improve the next generation. Maybe raising a good AI child will create a responsible AI adult.
At the CSA Summit at this year’s RSA Conference, we released our first whitepaper to get a handle on this topic. Security Implications of ChatGPT provides analysis across four dimensions: how ChatGPT can benefit cybersecurity, how it can benefit malicious attackers, how it might be attacked directly, and guidelines for responsible usage.
I have a sense of déjà vu related to my early experiences with cloud computing. At the time, there were factions: some pursued aggressive adoption, some urged cautious analysis, some were antagonistic to cloud entirely, and some investigated cloud as the foundation to address cybersecurity problems writ large. We launched the Cloud Security Alliance in 2009 to start a whole-of-industry effort to quickly create a portfolio of best practices, education, certification, and chapters in order to promote responsible adoption of cloud. I feel the same formula works today: stay calm, be neither an AI cheerleader nor a naysayer, and develop the best practices and tools to help stakeholders manage the risk.
The CSA community must take a leading role in one aspect of AI: AI as a Service. It is CSA’s expectation that market adoption of AI will parallel cloud adoption trends and primarily use the cloud delivery model. From the standpoint of a typical enterprise today, they must perform security assurance over a handful of cloud infrastructure providers and thousands of SaaS providers. One can imagine iterations of several of our works, such as the Cloud Controls Matrix, Shared Security Responsibility Model, STAR, and Top Threats, helping AI consumers manage risks and helping AI providers demonstrate the transparency of their assurance programs.

We are making an appeal to our community of volunteers to help build the AI roadmap the industry needs. I would like our CxO Trust community to particularly take note. We want leaders with a variety of risk appetites around AI to weigh in on what is needed. Bear in mind that in our initial paper we documented over a dozen very cool applications of ChatGPT to improve cybersecurity. I have no doubt that a concerted industry effort now will put us in a very good position when GPT 5 and 6 arrive.