The Dangers of Exposed Elasticsearch Instances
This blog was originally published by Open Raven here.
Written by Michael Ness, Open Raven.
Elasticsearch is a widely used text-search and analytics engine. It provides a simple way to store and search large volumes of data quickly and efficiently. Elasticsearch serves a wide range of use cases in applications, from logging request data all the way through to storing sensitive data used within applications.
Despite its usefulness, Elasticsearch instances often pose a security risk due to poorly configured security settings. The most common issue is failing to enable authentication on port 9200. This typically happens during the initial testing phase, when an engineer sets up Elasticsearch across one or many EC2 instances. To simplify local testing, the engineer often leaves authentication disabled for the service listening on port 9200, which introduces several risks.
Common Issues & Risk
The primary risk from a data security perspective is, of course, the data itself. The risk posed by an exposed instance is directly proportional to the sensitivity of the data stored within it. Non-sensitive data stored for testing in a staging environment poses less risk than personally identifiable information stored in production. However, a pattern frequently observed in Elasticsearch security research is that companies begin by populating instances with staging data, forget they are exposed, and then populate them with production data. Another common mistake is repeating the same misconfigurations when setting up a production Elasticsearch environment.
The risks extend beyond data security. By leaving an instance exposed, attackers can use its functionality not only for data exfiltration but also to leverage published exploits. The Log4j vulnerability is a perfect example: attackers could potentially achieve remote code execution on exposed, unauthenticated Elasticsearch instances by exploiting Log4j.
Peekaboo Moments suffered a data exposure when thousands of unsecured baby videos and images were made available online. Peekaboo’s app developer, Bithouse, left the Elasticsearch database open and without password protection. The database contained more than 70 million log files comprising nearly 100 GB of data stored from March 2019. The exposed data included detailed device data, links to photos and videos, and around 800,000 email addresses.
Online marketing company Mailfire exposed the data of over 320 million people due to an unsecured Elasticsearch server. A hacker gained access to the notifications being pushed to Mailfire clients. The disclosed information included private conversations between users of an adult dating site in addition to PII.
Identification & Remediation
Identifying open Elasticsearch instances within an organization can be tricky. On the surface, you can scan all EC2 instances and check whether port 9200 is open.
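As a rough sketch of that first pass, the check can be as simple as attempting a TCP connection to port 9200 on each instance's address. The function below is a minimal, hypothetical example (the host list and defaults are assumptions, not part of any Open Raven tooling):

```python
import socket

def port_open(host: str, port: int = 9200, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable: treat the port as closed.
        return False
```

In practice you would run this against the private or public IPs collected from your EC2 inventory and flag any host where it returns True for follow-up inspection.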
Magpie data collection
An EC2 instance listening on port 9200 may indicate the presence of Elasticsearch, but further manual inspection of the service running on that port is needed to confirm it.
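One way to confirm is to request the service root: an Elasticsearch node answers `GET /` with cluster metadata, a `version` object, and the well-known tagline "You Know, for Search". Notably, only an unauthenticated node will answer this request without credentials. A minimal sketch (the probe helper and its defaults are assumptions for illustration):

```python
import json
import urllib.request

ES_TAGLINE = "You Know, for Search"  # returned by the Elasticsearch root endpoint

def is_es_banner(body: dict) -> bool:
    """Check whether a parsed JSON response looks like the Elasticsearch banner."""
    return body.get("tagline") == ES_TAGLINE and "version" in body

def probe(host: str, port: int = 9200) -> bool:
    """Fetch http://host:port/ and test the response against the banner check.

    Assumes plain HTTP; returns False on any connection or parse error.
    """
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=3) as resp:
            return is_es_banner(json.load(resp))
    except Exception:
        return False
```

A True result here means the node is both Elasticsearch and answering anonymously, which is exactly the misconfiguration described above.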
The remediation for open Elasticsearch instances is simple. Once detected, make sure port 9200 is accessible only to the applications and employees that need it. For example, restrict AWS security groups to allow access only for company applications in the VPC and for employees via an authenticated VPN. Keeping machines that host sensitive data inside your VPC, reachable only by other applications and by users over VPN, is always a good idea.
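The security-group side of that remediation can be sketched as a single ingress rule scoped to the VPC CIDR. The helper below builds the `IpPermissions` entry that boto3's `authorize_security_group_ingress` expects; the group ID and CIDR shown are placeholders, not real values:

```python
def es_ingress_rule(vpc_cidr: str) -> dict:
    """Build an IpPermissions entry allowing TCP 9200 only from vpc_cidr."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 9200,
        "ToPort": 9200,
        "IpRanges": [{"CidrIp": vpc_cidr, "Description": "Elasticsearch, VPC only"}],
    }

# Applying it with boto3 (not executed here; the group ID is a placeholder):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",
#     IpPermissions=[es_ingress_rule("10.0.0.0/16")],
# )
```

Remember to also remove any existing `0.0.0.0/0` rule for port 9200; adding a narrow rule does not revoke a broad one.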