Around the Horn with a Cybersecurity Summer
Blog Article Published: 07/25/2023
Like many of you, I presume, summer is my favorite time of the year. Where I live, the weather is perfect and life seems to be an endless parade of beaches, cookouts, and baseball. The challenge is to stay focused on work. For this update, I decided to cheat a bit, leverage some baseball nomenclature, go “around the horn,” and provide some “quick hit” commentary on a few items.
SolarWinds CISO’s Wells Notice or reason #1,387 why being a CISO is a thankless job.
Last month, SolarWinds reported that some of its executives, including its CISO, were served with an SEC Wells Notice. A Wells Notice is a formal communication issued by the U.S. Securities and Exchange Commission (SEC) to inform individuals or companies that SEC staff intend to recommend enforcement action against them. As you may remember, in 2020 it was found that more than 18,000 SolarWinds Orion customers had installed compromised updates containing a command-and-control trojan capable of exfiltrating sensitive information to a malicious nation-state actor. This is the main reason why SBOM (software bill of materials) is a term cybersecurity professionals must be familiar with. Without knowing the specifics of the Wells Notice, the obvious conclusion is that the SEC seeks to punish the SolarWinds CISO for not sufficiently protecting the corporation and its customers from this attack.
As a 30+ year cybersecurity professional sitting in my man cave, I happily throw Styrofoam bricks at my computer monitor and chant “Lock them up!” when a company is forced to be accountable for a failure to prevent a cyber attack that hurts so many of its customers. At the same time, it feels like a gut punch when the CISO is held personally accountable for the attack. Yes, the CISO is the #1 cybersecurity professional in any corporation. However, there is still no formalized legal framework that supports CISOs in doing the right thing. Like most of you, I have read through the technical analyses of how the Orion update system was compromised in this innovative supply chain attack. What I don’t know is how well-funded and empowered the SolarWinds CISO was to manage risks and respond to incidents. Perhaps the SEC has done its due diligence and is taking a reasonable path. I just wish the SEC had had jurisdiction over the US Office of Personnel Management back in the day.
Generative AI, LLMs, and data security.
I continue to hear that generative AI is at or near the top of boardroom agendas as directors seek to better understand the trend and whether their executive team has a solid plan. Almost uniformly, large enterprises tell me that they are restricting most uses of generative AI services while at the same time having innovation teams aggressively looking for game-changing applications of it. There are many areas of potential concern: bad PR associated with the automation of jobs; a rogue chatbot that spews nonsense or worse; potential legal issues arising from queries involving copyrighted material; and a host of other questions. As I have mentioned previously, CSA is moving aggressively to address these issues and provide guidance for generative AI – we see this as a cloud service at its core and we can apply new best practices to the same leverage points.
One of the biggest concerns today is the question of data security. How do we protect ourselves from sensitive corporate information being sent to ChatGPT, for example, and will that information become part of the Large Language Model’s training data and potentially be exposed? A few months ago I had no idea how to respond to this concern, and I am sure I still do not have the authoritative answer. What I do understand is that LLMs are essentially statistical models for predicting the next word (actually a token) in a sequence. The process of breaking words down into tokens and folding them into the model’s weights during training arguably performs a de facto data obfuscation. In other words, if the LLM were somehow able to respond with sensitive information, it would be the result of many, many people submitting that sensitive information.
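To give a rough intuition for “statistical model for predicting the next token,” here is a toy sketch – a simple bigram counter, nothing remotely like a production LLM, with a made-up training corpus – that predicts the next token as the most frequent follower seen in training data:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" that predicts the next token
# as the most frequently observed follower in the training text.
# Real LLMs are vastly more complex, but the statistical intuition holds.
corpus = (
    "the quick brown fox jumps over the lazy dog . "
    "the quick brown fox jumps again ."
).split()

# Count, for each token, how often each candidate next token follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, or None if unseen."""
    counts = followers.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("fox"))   # "jumps" -- observed twice in training
print(predict_next("quick")) # "brown" -- the only follower ever observed
```

Note how a sequence that appears only once carries little statistical weight against more frequent patterns – which is the intuition behind the point above that leaked output would generally require many, many submissions of the same sensitive information.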
I think the more relevant issues are how an AI service provider protects (and hopefully deletes) the submitted raw data, and how much ability the provider’s staff has to override or preempt the algorithm to manage the output of the LLM. All in all, standard best practices for preventing data exfiltration seem to apply. If you are really concerned about sensitive data being queried inside of an LLM, let me tell you a scary bedtime story about search engines and what they are capable of!
I hope you will be following CSA on our AI journey and will attend the CSA AI Summit on August 2-3.
Shadow Access.
We have had Shadow IT and Shadow Data - I suppose CSA should create some generic best practices to encourage practitioners to simply stay out of the shadows for their own protection. In all seriousness, this fairly new term “Shadow Access” makes a lot of sense to me, as I have observed many abuses of access control over the years. I would define Shadow Access as the tendency to grant too many permissions to too many identities in IT systems. In the cloud world, this is a consequence of highly accelerated DevOps implementations that value speed and minimize examination of privileges for the sake of expediency. Several available studies indicate that cloud microservices and containers tend to have too many unpatched vulnerabilities and unused privileges attached to them.
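To make the idea concrete, here is a minimal, hypothetical audit sketch – every identity name and permission string below is invented for illustration – showing one way to surface shadow access: diff the permissions granted to each identity against the permissions it has actually exercised (e.g., per access logs):

```python
# Hypothetical shadow-access audit sketch. All identities and permission
# strings are illustrative, not taken from any real system.
granted = {
    "ci-pipeline":  {"s3:GetObject", "s3:PutObject", "iam:PassRole", "ec2:*"},
    "report-batch": {"s3:GetObject"},
}
# Permissions actually observed in use (e.g., derived from access logs).
used = {
    "ci-pipeline":  {"s3:GetObject", "s3:PutObject"},
    "report-batch": {"s3:GetObject"},
}

def unused_permissions(identity):
    """Permissions granted but never exercised -- candidates for revocation."""
    return granted.get(identity, set()) - used.get(identity, set())

for who in sorted(granted):
    extra = unused_permissions(who)
    if extra:
        print(f"{who}: consider revoking {sorted(extra)}")
```

The design choice here mirrors the least-privilege principle: permissions that are granted but never used are exactly the “shadow” an attacker can exploit without anyone noticing.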
I am excited that CSA has research in progress that addresses this topic, looks at the root causes, and provides guidance. In the long run, I believe Shadow Access is something we want to address by applying Zero Trust principles to it. Zero Trust is making great strides within organizations. Essentially, Zero Trust encourages us to define a protect surface, minimize access to it, and monitor the system continuously. We have known for a long time that identity is crucial to the fast-moving, virtual cloud world. Using Zero Trust in concert with DevOps and microservices to provide very granular cloud security may not make the headlines, but it is very exciting to me. Stay tuned for this paper; peer review completes on July 26, after which we expect a quick publication.
Wherever you are, I hope your summer is great, your systems are secure, and your board of directors is supporting your mission!