How ChatGPT Can Be Used in Cybersecurity
Published 06/16/2023
ChatGPT is a large language model trained by OpenAI. Due to the massive amount of data it was trained on, it can understand natural language and generate human-like responses to questions and prompts at a truly impressive level. New use cases for ChatGPT are developed every day. In this blog, we’ll take a brief look at several ways defenders can use it within their cybersecurity programs. Make sure to check out CSA’s Security Implications of ChatGPT publication to dive deeper into each of these use cases.
Filter Out Security Vulnerabilities
The recent update to GitHub Copilot (an AI tool developed by GitHub and OpenAI) introduces an AI-driven vulnerability filtering system that enhances the security of its code suggestions. By detecting and blocking insecure code patterns in real time, this feature helps programmers avoid common coding mistakes. As a result, more secure applications can be developed, and vulnerabilities are prevented from propagating through the DevSecOps toolchain.
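To illustrate the kind of insecure pattern such filtering targets, consider SQL built by string concatenation versus a parameterized query. This is a hypothetical example of the category of mistake, not a description of Copilot's actual detection logic:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Insecure: user input is concatenated directly into the SQL statement,
    # creating a classic SQL injection risk. This is the kind of pattern an
    # AI-based vulnerability filter aims to avoid suggesting.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Safer alternative: a parameterized query keeps data separate from the SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```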
Generate Security Code
ChatGPT can generate queries for the Microsoft 365 Defender Advanced Hunting tool, helping you investigate security incidents and reducing time to action. For example, it can draft a query that identifies which employees executed the malicious code delivered in a phishing email.
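As a minimal sketch of how this might look in practice (assuming the pre-1.0 openai Python SDK, an API key in the environment, and an illustrative prompt), the generated KQL is a draft that should always be reviewed before use:

```python
import openai  # pre-1.0 openai SDK; reads OPENAI_API_KEY from the environment

prompt = (
    "Write a Microsoft 365 Defender Advanced Hunting (KQL) query that lists which "
    "users executed an attachment named 'invoice.xlsm' delivered by email in the last 7 days."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# Validate the returned KQL in a test workspace before running it in production.
print(response["choices"][0]["message"]["content"])
```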
Transfer Security Code
The Codex models underlying ChatGPT can take source code and configuration files and translate them into other programming languages. ChatGPT also simplifies the process for the end user by including key details in its answer and explaining the methodology behind the translated code. This allows you, for example, to use Microsoft 365 Defender Advanced Hunting even if you do not normally work with the KQL query language.
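A similar sketch for translation asks the model to convert an existing KQL query into Splunk's SPL and explain its approach. The query, target language, and SDK usage here are illustrative:

```python
import openai  # pre-1.0 openai SDK assumed, as above

kql_query = """
EmailEvents
| where Timestamp > ago(7d)
| where Subject has "invoice"
| project Timestamp, SenderFromAddress, RecipientEmailAddress
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Translate this KQL query into Splunk SPL and explain "
                   "each step of the translation:\n" + kql_query,
    }],
)

print(response["choices"][0]["message"]["content"])
```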
Vulnerability Scanner
OpenAI's Codex API can serve as an effective vulnerability scanner for programming languages like C, C#, Java, and JavaScript. Additionally, scanners built on this capability are already being developed to detect and flag insecure code patterns in various languages, helping developers address potential vulnerabilities before they become critical security risks.
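A simple sketch of that idea is to loop over source files and ask a model to flag risky patterns. The prompt, model name, and directory below are illustrative, and the model's findings should be treated as hints rather than a replacement for a dedicated static analysis tool:

```python
import pathlib
import openai  # pre-1.0 openai SDK assumed

def review_file(path: pathlib.Path) -> str:
    """Ask the model to flag potential vulnerabilities in one source file."""
    code = path.read_text(errors="ignore")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "List any potential security vulnerabilities in this code, "
                       "with the affected line and a short explanation:\n\n" + code,
        }],
    )
    return response["choices"][0]["message"]["content"]

# Walk a project directory and print the model's review of each Python file.
for source_file in pathlib.Path("src").rglob("*.py"):
    print(f"--- {source_file} ---")
    print(review_file(source_file))
```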
Find the Solution to Cybersecurity Problems
Users can describe to ChatGPT a cybersecurity scenario they're struggling to solve, such as preventing the upload of classified documents to OneDrive. After some refinement of the query, ChatGPT usually comes up with a useful solution. And solutions that traditionally would have required a lot of human involvement, like scanning content for sensitive information, can now be automated with the help of ChatGPT.
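As a rough illustration of that last point, the sketch below uses a model as a first-pass sensitive-content check before an upload is allowed. The prompt wording and file name are hypothetical, and a real DLP workflow would add human review:

```python
import openai  # pre-1.0 openai SDK assumed

def looks_sensitive(text: str) -> bool:
    """First-pass check: ask the model whether the text appears to contain sensitive data."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Answer only YES or NO. Does the following text contain classified, "
                       "confidential, or personally identifiable information?\n\n" + text,
        }],
    )
    return response["choices"][0]["message"]["content"].strip().upper().startswith("YES")

document = open("quarterly_report.txt").read()  # illustrative file name
if looks_sensitive(document):
    print("Blocked: document flagged for manual DLP review before upload.")
else:
    print("No sensitive content flagged; upload can proceed.")
```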
Integration with SIEM/SOAR
On March 8, 2023, Microsoft announced the integration of the Azure OpenAI Service with a built-in connector, enabling the automation of playbooks through Azure Logic Apps. This development accelerates incident management by leveraging the completion capabilities of OpenAI models. This type of automation is essential when teams are dealing with millions of events each day.
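Inside such a playbook, the model call itself is simple. The following is a hedged sketch against an Azure OpenAI resource using the pre-1.0 openai SDK; the resource name, deployment name, and incident text are placeholders:

```python
import os
import openai

# Point the SDK at an Azure OpenAI resource (placeholder values).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

incident_text = (
    "Multiple failed sign-ins for user jdoe followed by a successful login from a new country."
)

response = openai.ChatCompletion.create(
    engine="YOUR-GPT-DEPLOYMENT",  # the deployment name configured in Azure OpenAI
    messages=[{
        "role": "user",
        "content": "Summarize this security incident and suggest the next triage step:\n"
                   + incident_text,
    }],
)

# In a Logic Apps playbook, this summary would typically be written back to the incident as a comment.
print(response["choices"][0]["message"]["content"])
```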
Convert Technical Code/Files Into Straightforward Language
A prominent feature of ChatGPT is its ability to explain its reasoning, which allows it to examine and interpret the functionality of various technical files, including source code and configuration files, in clear and straightforward language. This enables users, even those without deep technical expertise, to understand a file's purpose, structure, and potential implications.
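For example, a short sketch that asks the model to explain a configuration file for a non-technical reader; the file name and prompt wording are illustrative:

```python
import openai  # pre-1.0 openai SDK assumed

config_text = open("nginx.conf").read()  # any source or configuration file

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Explain technical files in plain language for a non-technical reader."},
        {"role": "user",
         "content": "What does this configuration file do, and are there any notable "
                    "security implications?\n\n" + config_text},
    ],
)

print(response["choices"][0]["message"]["content"])
```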
Explaining Security Patches and Changelogs
Operational staff spend a lot of time reading changelogs and other sources of information to see whether any security-related information is present and needs to be handled. ChatGPT can easily summarize web pages and, more importantly, extract contextual meaning, answering questions such as “Are there any computer security related issues listed in [URL]?” and identifying what those issues are.
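Because the base model cannot browse on its own, one hedged pattern is to fetch the page first and then ask the question about its text. The URL, prompt, and truncation limit below are illustrative:

```python
import requests
import openai  # pre-1.0 openai SDK assumed

url = "https://example.com/product/changelog"
changelog = requests.get(url, timeout=30).text

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Are there any computer security related issues listed in the following "
                   "changelog? If so, list each issue and whether it requires action:\n\n"
                   + changelog[:12000],  # truncate to stay within the model's context window
    }],
)

print(response["choices"][0]["message"]["content"])
```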
Fuzzing and Testing Code
Currently, ChatGPT is unable to perform fuzz testing directly on your code. However, it can help you understand how you might fuzz test the code and what the steps should be, as well as help you quickly build test cases. This insight is especially useful for someone in an entry-level role to quickly learn the necessary steps and procedures.
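For instance, ChatGPT might help an analyst draft a harness like the following, which feeds random byte strings into a parser and reports any unhandled exception. The parse_record function is a hypothetical target, standing in for the code you actually want to exercise:

```python
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical target function; replace with the parser you want to fuzz."""
    text = data.decode("utf-8", errors="replace")
    key, _, value = text.partition("=")
    return {"key": key.strip(), "value": value.strip()}

def fuzz(iterations: int = 1000, max_len: int = 256) -> None:
    """Very simple fuzz loop: random inputs, assert the parser never raises."""
    for i in range(iterations):
        data = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            parse_record(data)
        except Exception as exc:  # any unhandled exception is a finding worth triaging
            print(f"Crash on iteration {i}: {exc!r} with input {data!r}")
            raise

if __name__ == "__main__":
    fuzz()
    print("No crashes found in this run (which is not proof of correctness).")
```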
Dive deeper into the world of ChatGPT with CSA’s Security Implications of ChatGPT. This 54-page publication explores:
- Further details and examples of how defenders can use ChatGPT
- The background of ChatGPT
- The current limitations of ChatGPT
- How malicious actors can use ChatGPT
- How to use ChatGPT securely
Also check out CSA’s AI Safety Initiative to learn more about safe AI usage and adoption.