Key Steps to Follow Before Embarking on Specific DLP Policies
Written by Amit Kandpal, Director - Customer Success at Netskope.
As discussed briefly in the first part of this blog series, it is very important to reduce the risk surface area before jumping into configuring and tuning specific DLP rules. Applying DLP rules to a large risk surface area produces a very high number of violations that, even if they are true positives, can overwhelm security teams and create a risk of high-criticality violations being missed. In certain domains like healthcare, compliance requirements oblige a company to investigate every policy violation (e.g. PHI) above a certain threshold, so generating hundreds of thousands of violations can lead to some very challenging scenarios.
A simple framework for reducing the risk surface area and then protecting the data is outlined below:
Reduction in Risk Surface Area:
- Start with getting visibility into all user-led, IT-led, or business-led SaaS, IaaS, and web usage
- Understand the risk profile of these cloud services. Most CASB/SSE solutions can provide granular visibility of high and medium-risk users and applications. If not already done, identify sanctioned applications for key categories.
- Block the high-risk cloud services immediately and coach users to use the sanctioned services for different cloud categories like Storage, Collaboration, etc.
- Define and apply appropriate policies for corporate vs. personal instances of sanctioned applications (e.g. allowing view-only or no access to a personal instance of a sanctioned application like OneDrive). This is important to prevent personal instances of sanctioned applications from becoming an easy way to exfiltrate data.
- Where there are clear use cases and policies to do so, restrict access based on users/groups (e.g. limiting access to the CRM application to certain functions or levels of users), locations (e.g. blocking access to certain applications from known high-risk locations/countries), and devices (e.g. full access from corporate devices but view-only access to certain applications from personal devices).
Data Protection:
- Identify sensitive data levels by incorporating data classification. Leverage machine learning-based classification opportunistically for use cases that do not lend themselves well to traditional classification methods (e.g. passports)
- Zero in on specific data, like databases or specific intellectual property, using techniques like exact data matching or fingerprinting
- Find remaining sensitive data via entity, proximity, count, and other traditional methods, and tune policies based on analysis of each rule's efficacy and the bandwidth available to the security teams.
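The instance-, device-, and location-aware restrictions described above can be sketched as a simple decision function. This is an illustrative sketch only, not any specific CASB/SSE product's policy engine; the rule ordering, the placeholder country codes, and the `decide_access` name are all assumptions for the example.

```python
# Hypothetical contextual-access sketch. Rules and names are illustrative,
# not a real product API; real policies are far more granular.

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder high-risk country codes

def decide_access(app_sanctioned: bool, instance: str,
                  device: str, country: str) -> str:
    """Return 'block', 'view_only', or 'allow' for a cloud app request."""
    if not app_sanctioned:
        return "block"          # unsanctioned services are blocked outright
    if country in HIGH_RISK_COUNTRIES:
        return "block"          # requests from known high-risk locations
    if instance == "personal":
        return "view_only"      # personal instance of a sanctioned app
    if device == "personal":
        return "view_only"      # BYOD gets read-only access
    return "allow"              # corporate instance on a corporate device
```

For example, `decide_access(True, "corporate", "corporate", "US")` returns `"allow"`, while the same request from a personal instance is downgraded to view-only.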
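The exact match technique mentioned above can be illustrated with a minimal sketch: sensitive values from a structured source (e.g. a customer database) are stored only as salted hashes, and scanned text is tokenized and checked against that index. The salt value, function names, and tokenization here are assumptions for the example; production EDM implementations use much more elaborate normalization and indexing.

```python
import hashlib

SALT = b"example-salt"  # assumption: a per-index secret salt

def _h(value: str) -> str:
    # Hash a normalized (lowercased) value with the index salt
    return hashlib.sha256(SALT + value.lower().encode()).hexdigest()

def build_index(sensitive_values):
    """Build a set of salted hashes from known sensitive values."""
    return {_h(v) for v in sensitive_values}

def scan(text, index):
    """Return the tokens in `text` whose hashes appear in the index."""
    return [tok for tok in text.split() if _h(tok.strip(".,;")) in index]
```

Hashing the index means the DLP engine can match exact records without holding the raw sensitive values itself, which is one reason exact match produces very few false positives.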
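The traditional entity, proximity, and count methods in the last step can be sketched as follows: flag content only when an entity pattern (an SSN-like number here) appears a minimum number of times and a supporting keyword sits within a character window of a match. The patterns, threshold, and window size are illustrative assumptions and would be tuned against real traffic, as the step above describes.

```python
import re

# Illustrative entity/proximity/count detector; patterns and thresholds
# are assumptions, not production-ready rules.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
KEYWORDS = re.compile(r"\b(ssn|social security)\b", re.IGNORECASE)

def violates(text: str, min_count: int = 2, window: int = 50) -> bool:
    """True if enough entity matches occur near a supporting keyword."""
    hits = [m.start() for m in SSN.finditer(text)]
    if len(hits) < min_count:
        return False  # count threshold filters out stray single matches
    # proximity: require a keyword within `window` chars of some match
    return any(
        KEYWORDS.search(text[max(0, pos - window): pos + window])
        for pos in hits
    )
```

Raising `min_count` or shrinking `window` trades recall for fewer false positives, which is exactly the tuning loop the step above ties to the security team's available bandwidth.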
We will dive deeper into the trade-offs of some of these technologies in the next blog in this series.