Dumping a Database with an AI Chatbot
Published 06/27/2024
Originally published by Synack.
Written by Kuldeep Pandya.
We’re seeing AI chatbots a lot these days. They’re everywhere from Notion to AWS Docs. Many companies have started implementing their own AI chatbots, either using the OpenAI API or a custom AI model.
While building these chatbots is easy, securing them requires the utmost care, because a bad configuration can give rise to critical vulnerabilities.
For example, I’ll detail a vulnerability I found that allowed me full access to the database as well as the underlying filesystem.
In this post, I’ll cover:
- Discovery and authentication bypass
- Enumerating permissions
- Dumping the database
This is my first LLM-related vulnerability, and I’m excited to share it with the hacker community.
Discovery and Authentication Bypass
I was onboarded to a Synack Red Team host target. Even on host targets, I mostly probe for HTTP services on common HTTP ports and hunt on those.
I enumerated the HTTP services and ran aquatone to take screenshots, then went through each host one by one.
Doing this, I discovered a host that presented a login page with a username field.
Following my muscle memory, I entered “admin” into the username field, and to my surprise, I was logged into the application! There was an AI chatbot that I could access.
After doing a little more testing, I discovered that I could enter any random string into the username field and still get a valid authentication cookie. It seemed like there was no real authentication at all.
Enumerating Permissions
I had no idea what the chatbot was used for, so to learn more, I simply asked it about its capabilities. Security wasn’t even on my mind at this point.
When the AI chatbot replied, my eyes sparkled. It told me that I could query employee data through it, and, as if that weren’t sensitive enough, that it could also run SQL queries.
Intrigued by the bot’s response, I asked how many departments existed in the organization. It ran an SQL query and gave me the output “73”.
I noticed that it executed an SQL query to answer the question, meaning it had access to the database.
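Under the hood, the bot was presumably generating and running something along these lines (the exact query is my guess, though the Department table shows up again below):
SELECT COUNT(*) FROM Department;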
I wanted to see if the bot would allow me to execute arbitrary queries. So I asked it to execute the following query and provide the output:
SELECT * FROM Department LIMIT 10;
And the bot happily returned the top 10 departments.
Dumping the Database
At this point, I could have reported the issue as is, but I wanted to back it up with a few more pieces of evidence.
To confirm, I asked it to show the current database user. This could be found with a query like this:
SELECT CURRENT_USER;
The current user turned out to be postgres.
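On a default PostgreSQL installation, postgres is the built-in superuser role, which already hinted at unrestricted access. A query along these lines (shown here purely as an illustration) would confirm it:
-- returns true if the current role has superuser privileges
SELECT rolsuper FROM pg_roles WHERE rolname = CURRENT_USER;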
To further enumerate the database, I asked the bot to list the tables that existed in the database. It listed several tables, but the one that stood out was the person table.
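I don’t know the exact query the bot generated for this, but listing tables in PostgreSQL usually boils down to something like:
-- list user tables in the default schema
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';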
As a proof of concept, I dumped 10 rows from the person table and reported the issue to Synack. This issue falls under OWASP LLM02 (Insecure Output Handling). The Vulnerability Operations team was quick to accept the vulnerability and provide a reward.
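For reference, the proof-of-concept dump amounts to nothing more exotic than a query of this shape (the exact statement may have differed):
SELECT * FROM person LIMIT 10;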
After reporting the issue, I discovered that it was also possible for me to list and read local files using a query like this:
SELECT pg_ls_dir('./');
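The same superuser access also extends to reading file contents via PostgreSQL’s pg_read_file(); a minimal, purely illustrative example would be:
-- read the PG_VERSION file from the server's data directory
SELECT pg_read_file('PG_VERSION');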
The main takeaway: stay curious. If you have any questions or doubts, feel free to reach out to me on Twitter, Instagram or LinkedIn. Happy hacking!