
Insider Threat: An Enemy in the Ranks

Published 08/21/2023


Originally published by NCC Group.

Written by Sourya Biswas, Technical Director and Jared Snyder, Security Consultant, NCC Group.

Recently, an attempt by a Russian crime syndicate to subvert a Tesla employee to plant ransomware in the company’s systems made the news. Thankfully, the employee was not tempted by the half a million dollars offered and instead reported the approach to the authorities. However, companies cannot always rely on their employees doing the right thing. They should act to prevent, rather than react to, the realization of this risk within their ranks.


A historical look at the malicious insider

We have all seen it or heard of it at one time or another: a company is breached and stuck reacting, rather than having proactively identified key indicators in the months leading up to the attack. This has become all too common, as organizations tend to focus significant funding on securing the perimeter against external threats while implicitly trusting internal employees.

We experienced a perfect example of this trust when a client was asked what they were doing to reduce the threat of a malicious insider. The client (who will remain anonymous) answered, “We only hire the best.” While this is a praiseworthy sentiment, even the “best” can be compromised.

The problem of the malicious insider is not new. If we take a very long step back to roughly 480 B.C., we can see that malicious insiders have always been around. During the Battle of Thermopylae, even as Leonidas and his 300 Spartans held a much larger Persian force at bay, the Greek Ephialtes betrayed his homeland and revealed the secret path around the mountain to the Persians.

Other examples of malicious insiders throughout history include:

  • The Cambridge Five, distinguished gentlemen with excellent credentials who were vetted before joining the British bureaucracy but ended up turning into Russian assets during the Cold War.
  • A hacking unit within the Central Intelligence Agency (CIA) failed to secure its internal hacking tools, resulting in their theft by a rogue employee. To take it a step further, the CIA wouldn’t have noticed the leak had WikiLeaks not disclosed the breach.


The accidental insider can be equally detrimental

It’s not always the malicious insider who puts the organization at risk; the accidental insider, someone whose actions have an adverse impact unintentionally rather than by nefarious design, can cause equally serious harm.

  • The most common example of this is phishing: an employee inadvertently clicks on a link, resulting in a ransomware attack on internal resources. In fact, unlike the aforementioned attempt on Tesla, which tried to subvert an employee into becoming a malicious insider, ransomware gangs have typically relied on users erroneously clicking phishing links to infiltrate target networks.
  • Another related use case is Business Email Compromise (BEC), where a criminal impersonates a company executive and convinces the Accounts department to issue fraudulent payments, relying on the recipient not confirming the authenticity of the request through a phone call. This happens because the recipient is usually hesitant to question an executive several layers of seniority higher in the company organization.
  • A discussion of user error resulting in an accidental insider is not complete without a mention of cloud misconfigurations. In a veritable Groundhog Day scenario where the same issue is repeated over and over again, data in the cloud is continuously being found exposed due to misconfigurations. Verizon is well known in the cybersecurity universe as the company that publishes the authoritative Data Breach Investigations Report (DBIR). Despite publishing trends regarding cloud misconfigurations, the company experienced its own data breach, which saw 6 million customer records exposed on a misconfigured AWS S3 bucket.
  • For-profit companies are not the only ones guilty of leaving the doors open; even the US Department of Defense, which would be expected to maintain the highest standards of security in guarding the nation’s secrets, is guilty of oversight. Like the CIA hack mentioned earlier, even government agencies that are paranoid about security are not immune to the insider threat, malicious and accidental.
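The cloud-misconfiguration failures described above often come down to a handful of boolean settings being left disabled. As a minimal sketch (the bucket inventory and guard names are illustrative assumptions, modeled loosely on AWS S3's "Block Public Access" settings, not a real API call), an automated exposure check might look like this:

```python
# Hypothetical sketch: flag storage buckets whose public-access guards
# are missing or disabled. Bucket names and settings are illustrative.

def find_exposed_buckets(buckets):
    """Return names of buckets where any public-access guard is off."""
    required_guards = (
        "BlockPublicAcls",
        "IgnorePublicAcls",
        "BlockPublicPolicy",
        "RestrictPublicBuckets",
    )
    exposed = []
    for name, config in buckets.items():
        # A bucket is considered exposed if any guard is missing or False.
        if not all(config.get(g, False) for g in required_guards):
            exposed.append(name)
    return exposed

inventory = {
    "customer-records": {"BlockPublicAcls": True, "IgnorePublicAcls": True,
                         "BlockPublicPolicy": True, "RestrictPublicBuckets": True},
    "marketing-assets": {"BlockPublicAcls": True, "IgnorePublicAcls": False,
                         "BlockPublicPolicy": True, "RestrictPublicBuckets": True},
}

print(find_exposed_buckets(inventory))  # ['marketing-assets']
```

Running a check like this continuously, rather than once at deployment, is what turns a one-time audit into a guard against the "Groundhog Day" recurrence of the same misconfiguration.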

As cyber security consultants, we often help clients determine their current state of cyber security maturity and provide recommendations on navigating to a desired target state. This target is driven by the ability of cybersecurity controls to meet applicable threats as identified during a threat modeling exercise. Interestingly, malicious insider and user error consistently rank as the top threat actors during these exercises, supporting the importance of the insider threat, malicious and accidental.


How to mitigate against the insider threat

For organizations to catch user error before it causes damage and to prevent malicious insiders from doing harm, we recommend the following:

1. Controlled access to information. The organization should restrict any type of shared access and employ the concepts of least privilege and separation of duties. In addition, a defined process for provisioning and adding/removing access should be in place. Access held by administrators, as well as by employees with access to sensitive data, should be reviewed periodically.

In our experience, we have seen organizations where “everybody has access to everything”, a clear violation of least privilege, under which only the minimum access required to do a job should be granted. Least privilege can be implemented by granting access based on roles (role-based access control, or RBAC) instead of to individuals. Another issue we’ve encountered is entitlement creep, where individuals change roles but retain their older access privileges. This problem too can be addressed through strict RBAC and reinforced through access and entitlement reviews. With respect to separation of duties, we’ve encountered violations where the individual requesting access is also the individual approving that request. As you can imagine, this is a clear conflict of interest.
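The least-privilege and separation-of-duties principles above can be sketched in a few lines. This is a minimal illustration under assumed role and permission names, not a production access-control system:

```python
# Minimal RBAC sketch: permissions attach to roles, users hold only
# roles, and an access request can never be self-approved.
# Role and permission names are illustrative assumptions.

ROLE_PERMISSIONS = {
    "engineer": {"read_code", "write_code"},
    "finance": {"read_ledger", "issue_payment"},
    "auditor": {"read_ledger", "read_code"},
}

def permissions_for(user_roles):
    """Union of permissions granted by a user's roles. Nothing is
    granted directly to individuals, which curbs entitlement creep:
    change the user's roles and the old privileges fall away."""
    perms = set()
    for role in user_roles:
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

def approve_request(requester, approver):
    """Separation of duties: the requester may never self-approve."""
    if requester == approver:
        raise ValueError("conflict of interest: requester cannot approve own request")
    return True

print(sorted(permissions_for(["auditor"])))  # ['read_code', 'read_ledger']
```

When an individual moves from "engineer" to "auditor", revoking the old role is a one-line change, and the periodic access reviews described above reduce to checking that each user's role list still matches their job.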

2. Training and awareness. Employees should be provided training on key indicators of potential compromise, as well as anonymous reporting procedures to management. On-boarding and annual training should include job-specific security training as well as incorporate updates to the program based upon current threats. Furthermore, providing hands-on learning such as phishing campaigns can increase an employee’s ability to detect and report such things as a spoofed or otherwise malicious e-mail. As the saying goes, “the proof is in the pudding”, and phishing simulations provide a real-world test of security awareness. Providing comprehensive training greatly increases an organization’s chances of identifying a problem before any actual damage occurs.

In our experience, we have seen organizations often relying on generic security awareness materials to train their employees. Not only was such training not customized with respect to the organization’s environment, it was often repeated annually without changes. In a fast-evolving domain like cybersecurity where new threats emerge daily, a static approach towards awareness is not helpful. Also, we often found organizations lacked role-specific training; ideally, an individual responsible for cybersecurity incident response should receive additional security awareness training compared to a vanilla user. The same is true for employees with escalated privileges, like system administrators and executives.

3. User activity monitoring. Activities of ALL users should be monitored, and alerting parameters set to notify requisite individuals or teams when anomalous activity takes place. To take it a step further, a baseline of network operations should be established based upon the collected information to aid in the identification of potential threats. Lastly, all collected logs and user activity metadata should be retained indefinitely, or until an employee’s contract is terminated.

In our experience, organizations either do not monitor user activity at all or, when they do, focus only on privileged users. That can be a recipe for disaster, since even minor issues can result in cascading failures.
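The baselining idea in step 3 can be illustrated with simple per-user statistics. This is a sketch, not a recommended production detector; the event counts and the three-sigma threshold are illustrative assumptions:

```python
# Sketch of baselining user activity: learn each user's normal daily
# event count from history, then flag days that deviate sharply.
from statistics import mean, stdev

def build_baseline(history):
    """Map user -> (mean, stdev) of historical daily event counts."""
    return {user: (mean(counts), stdev(counts))
            for user, counts in history.items()}

def is_anomalous(baseline, user, todays_count, sigma=3.0):
    """Flag counts more than `sigma` standard deviations above the
    user's own mean (illustrative threshold)."""
    mu, sd = baseline[user]
    return todays_count > mu + sigma * sd

history = {
    "alice": [40, 42, 38, 41, 39, 40, 43],  # fabricated sample data
    "bob": [10, 12, 11, 9, 10, 13, 11],
}
baseline = build_baseline(history)
print(is_anomalous(baseline, "bob", 60))    # True: far above bob's norm
print(is_anomalous(baseline, "alice", 44))  # False: within normal range
```

Note that the baseline is per user: 60 events would be unremarkable for a heavy user but is a strong signal for "bob", which is exactly why monitoring ALL users, not just privileged ones, matters.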

4. Information analysis and correlation. The information and metadata surrounding a user’s activity should be analyzed for activity outside of “the norm”. If anomalous activity is found, further research should be conducted to determine whether a threat exists; once one is identified, a response plan should be established and put into effect. All documented findings and related research should be retained indefinitely, or until termination of the employee, to aid future investigations or research.

In our experience, organizations that do collect user activity data rarely correlate and analyze it to obtain insights. Data collection is cheap; data analysis is not. However, to prevent an even more expensive security incident due to an insider threat, organizations must make a risk-informed decision on user activity data analysis.
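One lightweight way to perform the correlation step 4 describes, sketched here under assumed signal names and weights, is a weighted risk score that triggers review only when several individually weak indicators co-occur:

```python
# Sketch of correlating weak signals about a user's activity into one
# risk score, so anomalies are not analyzed in isolation.
# Signal names, weights, and threshold are illustrative assumptions.

SIGNAL_WEIGHTS = {
    "off_hours_login": 2,
    "bulk_download": 3,
    "new_device": 1,
    "failed_mfa": 2,
}

def risk_score(signals):
    """Sum the weights of the observed signals for one user."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def triage(events_by_user, threshold=4):
    """Return users whose correlated signals cross the review threshold."""
    return [user for user, signals in events_by_user.items()
            if risk_score(signals) >= threshold]

observed = {
    "carol": ["off_hours_login", "bulk_download"],  # 2 + 3 = 5: review
    "dave": ["new_device"],                         # 1: below threshold
}
print(triage(observed))  # ['carol']
```

Neither an off-hours login nor a large download alone crosses the threshold; their combination does, which is the point of correlating signals rather than alerting on each in isolation.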

5. Reaction/De-escalation. This step can go one of two ways:

  • Reaction. The threat has come to fruition and you have begun to react, most likely without a formal response plan, as you are still attempting to identify the extent of the breach.
  • De-escalation. When insider indicators are recognized through the steps listed above, an organization can step in before it’s too late. This can be accomplished through employee assistance or training programs. It is worth noting that almost all insider threat cases have been found to have noticeable identifiers (such as financial hardship) as a motivating factor in the breach. In our experience, organizations often focus on reacting to insider threats instead of pre-empting, and consequently de-escalating, them.

6. Address privacy issues. Any monitoring process is fraught with privacy implications, and insider threat monitoring is no different. Organizations should be careful to follow all applicable laws. This can include obtaining employee consent to collect and process monitoring information, providing employees the right to appeal a decision, retaining information only as allowed by law, etc. Suitable legal advice should be sought when implementing such a program.


Conclusion

As is evident, protecting an organization from insider threats requires the basics of security to be in place. However, the basics have to be treated as critical and subject to continual improvement. If you consider many of the organizations that have actually been affected by insider threats, they did have the basics in place, or at least a facade of them. Crucially, what they didn’t have was sufficient process maturity for those controls and the ability to address vulnerabilities where the process was not operating as intended. For example, an organization can have excellent access controls for its employees but leave the door open for its vendors, thereby risking a third-party malicious insider wreaking havoc. While it’s good to start with the basics, the journey shouldn’t end there.

Ensure log reviews are performed, and not just on an ad hoc basis. Audit privileged access and establish patterns to define a baseline for normal operations. This gives organizations the foundation to identify and remediate internal threats before they become the next news headline. Build process maturity into these activities with a focus on continual improvement. Finally, consider bringing them under the umbrella of a dedicated Insider Threat Monitoring Program. Considering that an employee managing such a program will also technically be an “insider”, third-party validation is recommended.