Cloud 101CircleEventsBlog
Call for Presentations: Share your expertise at SECtember.ai 2024! Submit your proposals by June 28th.

AI: Both a Help and a Hindrance for the Public Sector

AI: Both a Help and a Hindrance for the Public Sector

Blog Article Published: 12/15/2023

Originally published by Synack on October 27, 2023.

Written by Luke Luckett.

Last week, we hosted the Synack Security Symposium in Washington, D.C. In an open forum, Wade Lance, Synack’s Global Field CISO, facilitated a lively discussion on cybersecurity in the age of AI.

Several themes came up throughout the conversation: advantages of AI, interoperability, advantages of human-led testing, among others. These themes are worth consideration by government-focused CIOs and CISOs as the industry stance shifts from AI being a novel thing to test to being the brilliant tech that has massive risks as well as mission rewards.


AI Poses Multiple Attacker & Defender Advantages

AI will enable vulnerabilities to be found more quickly, both by blue teams and by adversaries. By 2025, AI may help create “faster-paced, more effective and larger scale” cyberattacks, according to a UK government study sourced by the BBC. Cyber criminals will become exponentially faster in their research processes with AI, increasing the speed and accuracy needed to exploit vulnerabilities and misconfigurations.

At the same time, defenders across the organization – in IT and cybersecurity and beyond – will need extensive training on when (and when not) to use public / private AI engines, as the risk for data leakage is real. Data leakage can result in inaccurate or outdated information being generated as fact or results that include sensitive data release (such as PII) can lead to identity theft.

Defenders (and their management) will also need sanitization scripts and other processes to ensure that the use of public AI engines pose minimal risk of sharing PII, CUI, or FOUO material for later use by platform users or training models. Consistent training around this should be required for involved internal team members, with special focus paid to sensitive, mission-specific datasets that may not be appropriate for AI use.

Finally, AI deployments, such as chat bots or enhanced search engines, will add their own risks to attack surfaces via common vulnerabilities like those listed in the OWASP LLM Top 10. Organizations will need to consider these tools in planning their security testing programs, lest they provide new attack vectors to bad actors.


Sanitization of Scripts is Critical

Generative AI prompt data should be sanitized for accuracy, clarity, and to ensure engagement with text-to-speech programs are not vulnerable to undesirable outputs.

Prompt vulnerabilities may be kept at bay from the get-go with presence penalties, max tokens and appropriate sampling. Additionally, testing for prompt injection can reduce possible user incidents.


Interoperability is a key consideration for purple teams leveraging AI for speed

The teamwork between red and blue teams is breaking down historical silos across cyber and IT. Offensive (red) teams have historically tested, identified and triaged vulns, while defensive (blue) team members patch vulns, while also hunting and investigating incidents. AI-powered technology, from scanning tools to SIEM and SOAR technology have made correlation of vulnerability reports with logging data and threat intel sources a more common practice in modern purple teams.

For collaboration to continue, data source analysis must be consistent, which is a big ask for government agencies. Legacy IT, with its related workflows and policies, can become rapidly connected through modernization. AI will be critical to achieving the speed necessary to model and visualize outputs from such data; continuously testing the configuration of such systems is critical to minimizing vulnerabilities throughout such workflows.


AI can aid the Secure by Design, Secure by Default mission

The call to “shift left” has expanded this year through Jen Easterly and CISA’s efforts to encourage technology companies to ship products that are Secure by Design, Secure by Default. As the cadre of specialists devoted to DevSecOps gets larger across critical industries, product and engineering teams are being pushed to identify vulnerabilities and patch code earlier in the development process, especially when doing small batch development. Such practices will free security leaders to invest in testing legacy IT and the “connection points” between older systems and modern cloud- and SaaS-based systems.


Humans remain the most important part of offensive and defensive solutions

While keeping government cyber talent engaged and motivated is critical to long-term success, empowering all agency employees to remain vigilant in their cyber practices is key. As attackers find their formerly easy success hampered by the increased usage of MFA, social engineering, as a practice, will continue to grow. A major tactic here is Vishing, or voice phishing, where hackers gain control of a system through a very convincing phone call utilizing AI.

This has proven to be a real concern for organizations of late (just ask MGM!) Business email compromise, while by no means a new practice, has also become easier for criminals to attempt with speed and AI-enabled decision-making. Staffers should be encouraged to flag suspicious emails and avoid sharing any detail via email with unknown parties. Human led security testing of AI attack surfaces and Large Language Models (LLM) is also crucial.

Taken together, the variety of talent investigating AI for business use across government is impressive and the outcomes are expected to be significant for productivity in and out of the software development cycle. But the potential impacts seem to be ever-changing and enormous.

Share this content on your favorite social network today!