The Deception Game: Negative Trust in Cybersecurity
Blog Article Published: 09/05/2023
Originally published by CXO REvolutionaries.
Written by Sam Curry, VP & CISO, Zscaler.
Cybersecurity is an unfair, asymmetric race. For years, we have studied the opponent, from the Kill Chain™ to MITRE ATT&CK, and have inadvertently lionized the attacker’s course and journey from sniffing at the front door to pwning our systems and networks.
This is only half of the race, though, because there’s a Heroes’ Journey (apologies to Joseph Campbell) happening at the same time, with men and women going through states and phases, from monitoring at rest and triage to higher states of alert and readiness. We talk in this world of heroes and heroines in terms of time to detection, time to understanding, time to remediation, and eventually the recovery and learning stages. If we think of this as a race between two parties (and sometimes more), the paths we run are not the same. I recently presented this at Zenith Live 2023 and focused on how to change the game using deception technology.
I started my presentation with Operation Mincemeat, a fantastic film about the Allies' complex operation to convince Axis forces that the invasion of Southern Europe in 1943 would land in Greece and not Sicily. Well worth watching, it involves, among other things, keeping a corpse fresh, faking military communications, using neutral nations diplomatically, espionage, and more. I also mentioned Operation Fortitude, which was an even more complex operation to convince Nazi Germany that D-Day would occur at Calais and not Normandy. Both are great examples of uses of deception in conflict.
As you might expect, much of Zenith Live was about zero trust; but after some thought, it occurred to me that what we really want is a world of "negative trust." What do I mean by that? If zero trust means moving away from broad network connectivity and into a world of least privilege and least trust, where we allow only the right connections among the right parties, and only those connections the business requires, then negative trust means minimizing the trust that attackers can exploit. In a world where we employ deception, lures, decoys, and other tools to mislead, trap, and slow down opponents as they run their race, we introduce doubt and opportunities for failure. That is a world where attackers cannot trust what is in front of them and get tripped up.
Honeypots and tar pits are not new, but applying them in a world with a secure service edge is. The idea is to ring-fence real assets with deception at every layer: false credentials on endpoints, entirely fake users in Active Directory, and decoy applications, files, and data structures around genuine entitlements and targets. At every possible step, attackers should face opportunities for mistakes, preferably in ways that do not give away that a decoy or lure is in place.
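To make the false-credential idea concrete, here is a minimal honeytoken sketch in Python. The decoy account names and the `alert` sink are hypothetical stand-ins; in a real deployment the decoy accounts would be planted in Active Directory and any authentication attempt against them would be forwarded to a SIEM.

```python
# Minimal honeytoken sketch: decoy accounts that should never authenticate.
# Any login attempt against one of them is a high-confidence, early signal.

# Hypothetical decoy account names planted alongside real credentials.
DECOY_ACCOUNTS = {"svc_backup_legacy", "admin_dr_test", "jsmith_old"}

def alert(message: str) -> None:
    """Placeholder alert sink; in practice, forward to a SIEM or IR pipeline."""
    print(f"[ALERT] {message}")

def check_login_event(username: str, source_ip: str) -> bool:
    """Return True (and raise an alert) if a decoy credential was used."""
    if username in DECOY_ACCOUNTS:
        alert(f"Honeytoken tripped: {username!r} used from {source_ip}")
        return True
    return False

# A login attempt with a planted credential trips the wire;
# a legitimate user does not.
check_login_event("svc_backup_legacy", "203.0.113.7")
```

The value of the decoy is precisely that no legitimate workflow ever touches it, so a single hit carries almost no false-positive cost.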
Looking at the MITRE ATT&CK framework, of which there’s a simple abstraction below, we tend to think of deception as something used up front, a thickening of the perimeter. However, there are opportunities throughout the framework to lay traps, to trigger early and clear signals, and to frustrate the opponent’s aims.
For the full framework, see the excellent MITRE ATT&CK website and the MITRE Engage framework website which, among other things, matches defensive technologies to the tactics and techniques above (i.e., the columns and boxes).
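To illustrate "deception throughout the framework," the sketch below pairs a few ATT&CK tactics with example deception controls. The tactic names follow MITRE ATT&CK, but the pairings are illustrative assumptions of mine, not an official MITRE Engage mapping.

```python
# Illustrative pairing of MITRE ATT&CK tactics with deception opportunities.
# The controls listed are examples, not an official MITRE Engage mapping.

DECEPTION_BY_TACTIC = {
    "Reconnaissance":    ["decoy external services", "fake employee profiles"],
    "Initial Access":    ["honeypot endpoints", "lure email addresses"],
    "Credential Access": ["honeytoken credentials", "fake password vaults"],
    "Lateral Movement":  ["decoy hosts and shares", "tar-pit network segments"],
    "Collection":        ["decoy documents", "fake data stores"],
    "Exfiltration":      ["beaconed files that signal when opened"],
}

def deception_options(tactic: str) -> list[str]:
    """Look up example deception controls for a given ATT&CK tactic."""
    return DECEPTION_BY_TACTIC.get(tactic, [])
```

The point of a table like this is that every column of the framework, not just the leftmost, can carry a tripwire that fires early and cleanly.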
Why does all this matter?
Let’s go back to the notion of an unfair and asymmetric race, because another interesting thing was said at one of the Zenith Live keynotes by DJ Goldsworthy, Aflac’s VP of Security Operations & Threat Management: eventually, this will be “AI vs. AI.” Before we go there, though, I should relate something I had the chance to discuss with Garry Kasparov at DEEP 2018.
Mr. Kasparov related, on stage and in conversation afterward, what it was like to play against AI like Deep Blue in the early days of computer chess. At first, he said, he would always win. Then he was beaten, and it was demoralizing. He rallied and played again, winning more than he lost until he didn’t, and then losing more than he won. Eventually, there was a long period in which the best games were played not by machines or by people alone but by assisted humans.
This is important because we might otherwise despair. Instead, we should take heart, because there are subtle differences between chess and cyber conflict: cyber is a much broader field of conflict; it is an infinite game, not a finite one; and its rules and topography evolve and change.
So it is perhaps better said that AI is coming to assist the games of offense and defense alike – and to change these games. We need to keep finding ways to run our respective races better and also to mess up our opponents’ races – what I call negative trust.
We, of course, need to invest in zero trust, which is changing the attack topography and reducing the opponent’s options. We need to do all of the things we’ve always done in cybersecurity to reduce risk. However, we also must put obstacles in our opponents’ path; present them with crossroads that slow them down and send us signals further to the left in our race (a massive advantage, since signals further to the right leave us less time to respond); confuse them; frighten them; and more. They should not be bold masters of IT stomping through our networks, but mice timidly testing networks that are hostile and frightening, where every turn can mean disaster. Because, in a world of AI-assisted humans attacking AI-assisted defenses, we have to change the game and, frankly, cheat to win.
It might have been controversial when Eddie Guerrero said it in wrestling (concerning steroids) or when Joe Montana said it in football, but in cyber, as in war, it is not controversial: if you ain’t cheating, you ain’t trying.