12 Months, 5 Lessons and 1 Forecast: Decoding Cybersecurity Trends in GenAI’s Inaugural Year
Published 01/19/2024
Written by Amit Mishra, Global Head, Data Security and Data Privacy Practice, Cybersecurity, HCLTech.
GenAI was just launched. An unsuspecting employee in a large corporation decided to put this to good use. He shared the blueprint in the AI prompt. We can only speculate that he was trying to review the design or trying to create documentation around it. He must have gotten what he was looking for without realizing that now this design was stored with this AI application for the consumption of anyone who is using it.
Welcome to the era of GenAI.
It was almost a year back ChatGPT brought the generative AI from the lab to the masses.
The whole world since then has been busy playing with this shiny new toy. While we are marveling at the incredible capabilities of GenAI, somewhere in dark corners of the world, there are shady characters burning the midnight oil, finding out simple exploits. As the year comes to a close, it is time to list 5 key observations that will determine the direction cybersecurity takes next year.
- A bounty for hackers – We are not referring to merely a new attack surface but to a whole new set of weapons in their arsenal that can bring speed and scale to a hacker’s venture. Writing malicious codes and drafting malicious emails are not easy jobs. No wonder, some of the criminal syndicates run the whole facility with many resources engaged in fishing round the clock. This requires long hours and huge amounts of effort. And suddenly, they discover GenAI. This provides them with the answer to the issue of scaling up their illegal ventures. GenAI not only brings sophistication to these attacks but also helps with choosing vulnerable targets. This means one thing – your SOC is going to be on its toes from now onwards. The automation in attack can be matched only by automation in defense. This creates an interesting scenario where machines fight amongst themselves, and we become just bystanders. But till that happens, we are well advised to beef up our SOC capacity to deal with sudden spurt in attacks.
- Emulating humans is not always a good idea - Researchers worked overtime to come up with GenAI which brings in eerily similar capabilities as human beings. For the hackers, it meant a bounty as they immediately knew that a human weakness (more specifically, human gullibility) could now be extended to applications. There are some fantastic examples of how this façade of intelligence wears off when you play with the model and tweak your questions. In a famous experiment, a researcher asked the AI to provide the full content of a document that it was referring to; when GenAI refused, all the researchers had to do was just ask it differently, and the GenAI started singing. Many researchers have even concluded that it is impossible to protect against the prompt attacks. This brings in a completely renewed focus on training and awareness of employees.
- Reskilling the security staff – Overnight, the skills requirement has changed drastically. Some of the lower-level jobs can be well managed by GenAI applications. This requires us to understand the capabilities of GenAI apps, relevant innovations in this field, and avenues where GenAI could be effectively deployed. Some of the use cases currently use the NLP capabilities and make the security configuration job easier. This results in lowering the skill requirement for the policy configuration kind of jobs. On the other hand, understanding threats and data modeling can become the key skill requirement for security analysts and operation centers.
- Budget distribution for GenAI risk mitigation – These are still early days, and the security stack is going through a major upgrade. Other than some anecdotes, we still don’t have enough data to justify new spend on an entirely new set of products to mitigate GenAI risks. We have seen a clear trend of customers doing risk assessments and trying to figure out their exposure to GenAI risks. They are also taking a fresh look at their security architecture and trying to tie loose ends; after all, the traditional security architecture is still relevant in the GenAI world. Most of the enterprises have already started upgrading their cyber defense capabilities to benefit from the power of AI/GenAI. On the GenAI risk mitigation front, though, it is likely to take some more time before a clear picture emerges.
- Data in, data out – There were some fears early about the training data and prompts, that it could become another channel for data leakage. However, as the dust settles, we see that most of the AI applications are providing some basic level of hygiene that can isolate internal data from external. The issue, however, still is with the users of the applications who are likely to cause data leakage through careless usage of prompts in unauthorized AI applications.
Industry watchers believe that while the AI cybersecurity ecosystem matures, the first set of use cases that have already found takers is the optimization of cost, effort, and time in risk management. The SOC is heading towards unprecedented cost optimization. This will happen through a complete revamp of the people, process, technology, and skills. One of the expectations next year is going to be more real-time threat detection capabilities with the power of deep learning. Faster threat detection will lead to faster mitigation and resultant cost savings. The people heavy part of the operation is bound for some radical changes. This will be both in terms of people upskilling and changing job requirements. The security analyst job is likely to become more specialized, whereas the low-level security operation job will become more generalized. And finally, the technology stack will see a major overhaul. The security operation will focus on increased efficiency and faster threat mitigation through the power of ML and AI.
Time to go back to the drawing board and start reviewing your security architecture.
Related Resources
Related Articles:
The Evolution of DevSecOps with AI
Published: 11/22/2024
How Cloud-Native Architectures Reshape Security: SOC2 and Secrets Management
Published: 11/22/2024
CSA Community Spotlight: Nerding Out About Security with CISO Alexander Getsin
Published: 11/21/2024
Establishing an Always-Ready State with Continuous Controls Monitoring
Published: 11/21/2024