Perspectives on AI: A Conversation with Torq's CTO
Published 06/22/2023
This interview with Leonid Belkind, Co-Founder & CTO of Torq, is the first in a series of conversations with experts operating at the nexus of artificial intelligence and cybersecurity.
According to my network, AI seems to be the top boardroom topic today. Heavily hyped topics often confuse professionals. Where do you think the hype most accurately reflects the importance of AI at this moment in time?
A very hyped topic indeed, which makes distinguishing the hype from the real potential value more challenging than usual. While everyone is fascinated by what could be accomplished with AI, there are already a few fields where generative AI has significantly upleveled capabilities for the masses, allowing people to achieve things that were previously out of their reach. The most significant "proven" outcomes are:
- Content Generation: The ability to generate professional-looking/sounding blogs, articles, presentations, messages, etc. This is one of the fields where the hype that "the world is not the same anymore" is probably the most accurate. Indeed, the quality and speed with which people who previously needed significant help can now generate content assets is nothing short of revolutionary.
- Code Generation (and Assistance): Bridging the gap between security subject matter experts and the technical means of querying data sources, transforming data, and more. Generating code snippets in any programming, scripting, or querying language is a relatively safe and proven value that is seeing growing adoption (see the sketch after this list).
- Conversational Interfaces: Bots providing either self-service operations or access to information, or involving humans in a "human-in-the-loop" fashion in various automated processes, have received a significant boost in sophistication and capability with the public availability of OpenAI's interfaces and similar offerings.
- Information Extraction: Humans, especially in stressful situations, are better able to comprehend concise information. Processing larger bodies of data and summarizing them into concise, actionable points with a good degree of accuracy is something that is widely needed and is being adopted.
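To make the code-generation point concrete, here is a minimal sketch of turning an analyst's natural-language question into a query. It assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment; the model name, prompt wording, and the toy logins table schema are illustrative assumptions, not a description of any product mentioned in this interview.

```python
# Minimal sketch: using an LLM to turn an analyst's question into a query.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; model, prompt, and schema are illustrative.
from openai import OpenAI

client = OpenAI()

def generate_query(question: str, schema_hint: str) -> str:
    """Ask the model for a single read-only SQL query answering the question."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write read-only SQL for a security data lake. "
                        "Return only the SQL, no commentary.\n" + schema_hint},
            {"role": "user", "content": question},
        ],
        temperature=0,  # deterministic output is preferable for generated code
    )
    return resp.choices[0].message.content.strip()

# Hypothetical table layout, used purely for the example.
schema = "Table logins(user TEXT, src_ip TEXT, ts TIMESTAMP, success BOOL)"
print(generate_query("Which users failed to log in more than 5 times today?", schema))
```

Generated queries should, of course, be reviewed or executed with read-only permissions before being trusted, which is part of why this use case reads as "relatively safe" above.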
The Cloud Security Alliance made its initial foray into generative AI research. We are thinking about this in terms of pillars: 1) Responsible corporate usage guidance and policies; 2) Understanding how generative AI platforms can be directly attacked; 3) How malicious actors will leverage generative AI; and 4) How generative AI can improve cybersecurity. I would love to get your viewpoint on that final pillar and how that is driving you.
Our strategy at Torq distinguishes between applications of generative AI to cybersecurity during "Design Time" (i.e., when building security architectures and operational practices) and "Real Time" (i.e., when operationally handling security findings, alerts, and incidents).
The most prominent "Design Time" areas already emerging are:
- Validation/verification of security architecture documents/plans against industry-accepted frameworks, blueprints, and best practices, uncovering gaps and suggesting resolutions.
- Validation/verification of actual configurations, similar to the point above (a minimal sketch follows this list).
- Assisting security personnel in building automated processes with various technological tools (either generating snippets/components or co-piloting the authoring process, speeding it up and making it more efficient).
- Generating documentation and training materials based on existing configurations and operational processes to assist collaboration between teams and the ramp-up of new employees.
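As a concrete illustration of the configuration-validation idea, here is a minimal sketch that asks an LLM to compare a configuration against a best-practice checklist and report gaps. The checklist, the toy bucket configuration, and the prompts are invented for illustration and do not correspond to any specific framework or product.

```python
# Minimal sketch: ask an LLM to compare a configuration against a best-practice
# checklist and report gaps with suggested fixes. The checklist and the toy
# bucket configuration are invented for illustration.
from openai import OpenAI

client = OpenAI()

CHECKLIST = """\
1. Block all public access to storage buckets.
2. Enable server-side encryption at rest.
3. Enable access logging.
"""

def find_gaps(config_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You audit configurations against this checklist. "
                        "For each unmet item, report the gap and a suggested "
                        "resolution:\n" + CHECKLIST},
            {"role": "user", "content": config_text},
        ],
    )
    return resp.choices[0].message.content

print(find_gaps('{"public_access": true, "encryption": "none", "logging": false}'))
```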
Areas in "Real Time" that are already being released to the market are:
- An automated agent that interviews the employees involved in a potential security event (Suspicious Behavior, Credentials Reset, Non-Compliance, etc.) and enriches the security incident with their feedback.
- Security Case Co-Pilot functionality that explains to security analysts the essence of the security event being handled, enriches it with additional data discovered during the investigation, and suggests blueprints for remediation.
- Extraction and summarization of relevant information from large amounts of threat intelligence and threat hunting data to generate concise, actionable points for security analysts (sketched below).
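The last point can be sketched in a few lines. As before, this assumes the official OpenAI Python client; the model, prompt wording, and the toy report snippets are illustrative assumptions only.

```python
# Minimal sketch: condense a pile of threat-intelligence text into a few
# actionable points for an analyst. Assumes the official OpenAI Python client
# and an OPENAI_API_KEY in the environment; prompts and model are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_intel(reports: list[str], max_points: int = 5) -> str:
    corpus = "\n\n---\n\n".join(reports)
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Summarize the following threat reports into at most "
                        f"{max_points} concise, actionable points for a SOC "
                        f"analyst. Cite which report each point came from."},
            {"role": "user", "content": corpus},
        ],
    )
    return resp.choices[0].message.content

print(summarize_intel([
    "Report A: Phishing campaign delivering malware via ISO attachments...",
    "Report B: Same infrastructure observed scanning for exposed RDP...",
]))
```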
When you think about needing to deliver cybersecurity at scale, is it realistic for organizations to defer AI adoption for much longer?
In recent months, it seems that the race to adopt generative AI for various business needs has become a mainstay in executive discussions across industries. We constantly hear about executives in different departments being explicitly asked to provide a roadmap and a plan for AI adoption. As exciting as the newly available AI capabilities are, we need to remember that no technology adopted "just" because it is available will end up making a significant impact. AI can solve real issues, and solve them better than the available alternatives; that is what should drive AI adoption.
So, I don't think that any organization should "defer" AI adoption. Instead, organizations should look at their largest unsolved problems (things that really "move the needle" for their respective businesses) and systematically analyze to what extent these problems can be solved with AI.
How do you see AI serving as connective tissue between disparate cybersecurity systems and platforms?
There are a number of ways this may happen. The two main "planes" where AI can serve as connective tissue between disparate systems are:
- Data Plane - Extracting information from a combined body of data collected from multiple disjoint systems. Correlating and summarizing that data acts as connective tissue, allowing the information provided by disjoint sources to be harvested more efficiently.
- Control Plane - Driving the operation of a cybersecurity event investigation with autonomous or semi-autonomous agents that choose the path and the next steps from a set of available operations across different systems (sketched below).
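A minimal sketch of the control-plane idea follows: a bounded loop in which an agent picks the next investigation step from a fixed menu of operations. Every operation, the event shape, and the step-selection policy here are hypothetical placeholders; a real system would back choose_next_step() with a model and back the operations with actual integrations.

```python
# Minimal sketch: a bounded agent loop choosing the next investigation step
# from a fixed menu of operations. All operations and the selection policy
# are hypothetical placeholders.
from typing import Callable

def lookup_ip_reputation(event: dict) -> dict:
    return {"ip_reputation": "unknown"}        # placeholder for a TI lookup

def pull_edr_timeline(event: dict) -> dict:
    return {"edr_timeline": []}                # placeholder for an EDR query

def escalate_to_analyst(event: dict) -> dict:
    return {"escalated": True}                 # placeholder for a ticket/page

OPERATIONS: dict[str, Callable[[dict], dict]] = {
    "lookup_ip_reputation": lookup_ip_reputation,
    "pull_edr_timeline": pull_edr_timeline,
    "escalate_to_analyst": escalate_to_analyst,
}

def choose_next_step(event: dict, history: list[str]) -> str:
    # Stand-in policy: run each operation once, in order, then stop.
    # A real agent would ask a model to pick, given the event and history.
    for name in OPERATIONS:
        if name not in history:
            return name
    return "done"

def investigate(event: dict, max_steps: int = 10) -> dict:
    history: list[str] = []
    for _ in range(max_steps):                 # hard cap keeps the agent bounded
        step = choose_next_step(event, history)
        if step == "done" or step not in OPERATIONS:
            break
        event.update(OPERATIONS[step](event))  # enrich the event with the result
        history.append(step)
    return event

print(investigate({"alert": "suspicious login", "src_ip": "203.0.113.7"}))
```

The hard step cap and the closed menu of operations are the design choices that keep such an agent semi-autonomous rather than fully open-ended.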
What would you say are key things that cybersecurity professionals should know about generative AI today?
I'd suggest the following framework:
- As with any major technological development: make sure to educate yourself about it, try it hands-on, and get "comfortable" with it.
- If you don't innovate and think about how to adopt this technology for the benefit of your goals, someone else will, and you'll be on the "losing" side.
- People across the organization's departments will find creative ways to adopt AI. Trying to block them with corporate policies will cause frustration and forfeit the efficiency that could be gained. Developing a strategy for responsible AI use in the organization will go a long way toward creating a strategic platform.
What do you think the impact of AI will be on cybersecurity in the next 3-5 years?
Considering the pace of development in this field, this is truly very difficult to predict.