Charting the Future of AI in Cybersecurity

Blog Article Published: 10/24/2023

Written by Sean Heide, Technical Research Director, CSA.

Upon the conclusion of this year’s SECtember event, CSA hosted an AI Think Tank Day to bring interested attendees together to discuss the current and future state of AI in relation to cybersecurity. We wanted an event where everyone in attendance could hear from industry leaders while also voicing their questions and concerns about this technology. The live discussions and questions from the community exceeded our expectations, so a big thank you to all who attended.

We would especially like to thank our speakers for hosting four interactive sessions that helped build the outline for potential future content. The sessions covered a wide range of topics and were full of helpful insights and opinions from the audience. This truly was a stepping stone for future research at CSA.

Key topics discussed during this event were AI and GenAI’s impact on the overall market, AI foundational knowledge, AI governance, enterprise readiness and shared responsibility, and transforming cybersecurity with GenAI. A major takeaway from these sessions was the need for experts and contributors to come together, take the questions that were raised, and begin approaching them as research initiatives. We are taking this very seriously at CSA, and it remains top of mind, since these are questions we know the community will need answered for the foreseeable future.

As such, CSA is pleased to formally announce a call for contributors to our artificial intelligence-focused working groups:

Each of these working groups addresses specific key topics that together form a well-structured research portfolio, one that lends credibility and a consistent approach to the overall AI landscape and helps ensure safe usage across the entire community. With a strategic approach and key leadership in place, we anticipate these communities will grow just as fast as the technology driving them has over the last year.

Drawing on the Think Tank Day, CSA Research compiled and analyzed every question that was raised and distilled suitable topics from them, giving us a solid foundation for where to begin. The following list is not exhaustive, but highlights a few of the topics that will be addressed in the coming months through working group initiatives.

  • Methodology for Risk Assessment: Develop a comprehensive methodology for assessing the risks, threats, and vulnerabilities associated with new and existing AI technologies. This methodology will be used for both internal evaluations and external communications.
  • Educational Materials on Latest Threats: Produce and disseminate educational content that outlines the latest risks, threats, and vulnerabilities in AI. These materials aim to inform both technical and non-technical stakeholders.
  • Benchmark Creation: Establish a set of governance and compliance benchmarks that serve as key performance indicators for AI responsibility.
  • Ethical and Responsible Use of AI: Explore the ethical implications of AI and establish guidelines for responsible use.
  • AI Accountability and Transparency (see the brief sketch after this list):
    • Investigate methods and models for making AI decisions explainable and accountable.
    • Assign GenAI systems their own human-like user attributes within a system when the model’s duty is to provide access or change management.
    • Ensure decisions are logged and auditable to support accountability.
  • Securing AI Deployments: Explore security aspects and best practices during the deployment of AI models, especially in cloud environments.
  • Future Interaction Modalities with AI: Explore the evolution of human-AI interaction, particularly through voice, image, and video.
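To make the accountability and transparency items a little more concrete, here is a minimal, hypothetical sketch of what logging an AI-mediated access decision as an auditable event might look like. All names (the agent identity, resource, and change ticket) are illustrative assumptions, not part of any CSA specification; the point is simply that a GenAI agent acting as a "user" in a system should leave a reviewable trail.

```python
# Minimal sketch (hypothetical names throughout): an audit trail around an
# AI-mediated access decision, so every grant/deny can be traced back to the
# acting model identity and reviewed later.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_access_audit")

@dataclass
class AccessDecision:
    actor: str          # the AI agent's own "user" identity in the system
    subject: str        # the human or service requesting access
    resource: str       # what is being accessed or changed
    decision: str       # "grant" or "deny"
    rationale: str      # model-provided explanation, retained for review
    timestamp: str      # UTC time of the decision

def record_decision(actor: str, subject: str, resource: str,
                    decision: str, rationale: str) -> AccessDecision:
    """Log an AI access decision as a structured, auditable event."""
    event = AccessDecision(
        actor=actor,
        subject=subject,
        resource=resource,
        decision=decision,
        rationale=rationale,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.info(json.dumps(asdict(event)))
    return event

# Example: a GenAI change-management agent approving a request
record_decision(
    actor="genai-change-agent-01",
    subject="jane.doe",
    resource="prod-firewall-ruleset",
    decision="grant",
    rationale="Request matches approved change window CHG-1234.",
)
```

Structured events like this are one way a working group could frame the "accountability to log and audit" question: the model's identity, the affected resource, and its stated rationale are all captured in a form that auditors and downstream tooling can query.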

Sign up to stay informed about CSA's AI Safety Initiative activities.
