Driving AI Value in Security and Governance
Published 08/21/2024
Originally published by CXO REvolutionaries.
Written by Rob Sloan, VP, Cybersecurity Advocacy, Zscaler.
At Zscaler’s latest Women in Technology and Security CXO event at the RSA Conference in San Francisco, EVP of Customer Experience and Transformation, Kavitha Mariappan, hosted tech leaders to discuss AI’s potential for achieving more in the areas of security and governance. The following captures the key points and portions of the discussion.
The digital age has ushered in an era of unprecedented data creation. According to some estimates, we are generating roughly 328 million terabytes of new data every single day, and the rate of data creation is only going to increase.
This ‘data deluge’ isn’t just a problem of data volume, but also of variety. Structured data, such as customer records and financial transactions, creates one set of challenges, but unstructured data, such as social media posts, emails, and sensor data from internet-connected (IoT) devices, creates a larger set of problems.
Organizations will be wrestling with the issue of capturing, storing, analyzing, and using data effectively for years to come. More data also means a greater risk of becoming a target for hackers, meaning governance and security must go hand-in-hand.
Incredible opportunity
Despite the scale of the task, we must keep in mind that this is an incredible opportunity to optimize business processes, personalize customer experiences, unlock insights, and gain a competitive advantage. The key to extracting that value lies in effective data governance.
Traditional methods of data governance have relied heavily on manual processes, which can be resource-intensive, time-consuming, and prone to error, thereby presenting an ideal use case for AI.
Panelist Trish Gonzalez-Clark, vice president of IT at energy industry manufacturer NOV, shared details of a project her team is working on that demonstrates how AI enables work that was previously impossible. Gonzalez-Clark’s colleagues are uploading large amounts of NOV data into an AI model, cataloging it, and running queries across that dataset to surface patterns and trends that human analysts would not be able to identify.
Gonzalez-Clark said the project had generated “a ton of excitement” and highlighted what an exciting time it is to be a technologist.
One theme that emerged throughout the discussion was the need for collaboration between IT, the cybersecurity team, and business leaders. A clear understanding of the needs of each group will ensure that AI-powered solutions are effective and aligned with overall data governance goals.
Gonzalez-Clark described another project NOV’s cloud team is conducting that integrates OpenAI into the company’s security operations center. NOV executes hourly searches over the SOC’s five million daily log events for signs of malicious activity, such as known-bad domains or IP addresses. Matching activity is scored to narrow the alerts created, and OpenAI translates the machine logs into a human-readable format, adding context for analyst review. She described the project as “immature” in that the team is using foundation models out of the box and not fine-tuning them: “it’s low effort and brings immense value.”
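A pipeline like the one Gonzalez-Clark describes can be sketched in a few steps: match raw log events against known-bad indicators, score the matches to narrow the alert set, and hand only qualifying events to a language model for plain-English summarization. The indicator lists, scoring weights, and `summarize_for_analyst` stub below are illustrative assumptions, not NOV’s implementation:

```python
# Sketch of an indicator-match-and-score triage step, assuming a simple
# dict per log event; indicators, weights, and thresholds are illustrative.
KNOWN_BAD = {
    "dns": {"evil.example.net"},
    "ip": {"203.0.113.66"},
}

def score_event(event: dict) -> int:
    """Score a log event; higher means more likely malicious."""
    score = 0
    if event.get("dns") in KNOWN_BAD["dns"]:
        score += 50
    if event.get("dst_ip") in KNOWN_BAD["ip"]:
        score += 50
    if event.get("failed_logins", 0) > 5:  # illustrative heuristic
        score += 20
    return score

def triage(events: list[dict], threshold: int = 50) -> list[dict]:
    """Keep only events scoring at or above the alert threshold."""
    return [e for e in events if score_event(e) >= threshold]

def summarize_for_analyst(event: dict) -> str:
    # Placeholder: in the setup described, this step is where OpenAI turns
    # machine logs into human-readable context; here we only format fields.
    indicator = event.get("dns") or event.get("dst_ip", "?")
    return f"Host {event.get('src_ip', '?')} contacted {indicator}"
```

The point of the scoring step is volume reduction: only the small fraction of events that clear the threshold ever reach the (comparatively expensive) summarization stage.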
Not a 'silver bullet'
AI is not a silver bullet, but it is certainly a powerful tool that can significantly enhance data governance by automating tasks, improving data quality, and enabling deeper insights. However, experts advise remembering that AI is a tool, and one that needs a lot of nurturing to perform at its best. It is in no way a replacement for strong data governance policies and human oversight.
Panelist Laura Kohl, chief information officer at investment research firm Morningstar, said the company is taking a proactive approach with AI to structure and classify documents. However, AI is not ready to act alone just yet.
Kohl said Morningstar has to make sure that “there's still that accountability on the human side to ensure that what we're producing and sending out to our clients is actually good data.” She said the company has security tools, builds in validations and checks, and has ‘playgrounds’ where developers can test different models, but it is still early days.
Gonzalez-Clark echoed the sentiment: “We're advising people that this technology is not a perfect technology. It needs human oversight, so don't make any big decisions on it.”
Crucial considerations
Human expertise remains vital in areas like setting guidelines and policies, which require regular updating given the speed of technological change and the evolving ways AI is being used by organizations. There are, of course, also ethical considerations.
Data that models are trained on must be collected with privacy in mind and used in an ethical and privacy-compliant manner. Fairness and non-discrimination are also important, but there was wide recognition that many of the datasets AI is trained on likely do contain biased data. Testing and validating results is essential before AI can be deployed to customers.
While it’s still early days for many companies, leaders are recognizing that the cases where AI can add value may be narrower than they previously thought. This is raising questions that all leaders seeking to understand AI’s impact must ask: Where is AI adding value, and how much? Have proof-of-concept projects justified the resources invested in them? Will AI displace other technologies, and how can those savings best be reinvested?
One thing is certain: no leader will make the right decisions on their own. Collaboration with internal and external peers, sharing ideas, listening to case studies, and constant learning will be required to drive innovation and business value.