SOC Analyst Fatigue: What Our Data Says About Sustaining Investigation Speed and Quality
Published 10/10/2025
If you run or staff a SOC, you already know the story: the longer the shift, the sloppier the notes, the more steps get skipped. Cognitive fatigue hits hard. In CSA’s new benchmarking study, we looked at something teams rarely measure directly:
- Whether analysts can sustain thoroughness over time;
- And how an AI SOC analyst (in this case, Dropzone AI) changes that equation.
This blog post zooms in on two related signals of investigative rigor: investigation completeness and written depth. Together they help determine whether analysts are covering the right bases, even when alert queues don’t let up.
Why focus on completeness (and not just speed)?
Speed is table stakes in the SOC, but speed without quality is just a quicker way to miss something important.
Our study looked at 148 participants working two escalated alert scenarios (AWS S3 and Microsoft Entra). We scored each response against seven core investigative criteria derived from expert-modeled “ideal responses.”
We counted each criterion only if the participant clearly addressed it, with no partial credit given. That provided a concrete completeness score for every investigation, independent of how fast someone finished.
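To make the scoring rule concrete, here is a minimal sketch of that binary, no-partial-credit scoring. The criterion names are illustrative placeholders, not the study’s actual rubric:

```python
# Minimal sketch (not the study's actual scoring code): binary completeness
# scoring with no partial credit. Criterion names are illustrative placeholders.
RUBRIC = [
    "identified_affected_resource",
    "checked_access_logs",
    "established_timeline",
    "assessed_data_exposure",
    "correlated_related_alerts",
    "determined_root_cause",
    "recommended_next_action",
]

def completeness_score(criteria_addressed: set[str]) -> float:
    """Return the fraction of rubric criteria the analyst clearly addressed."""
    hits = sum(1 for criterion in RUBRIC if criterion in criteria_addressed)
    return hits / len(RUBRIC)

# Example: an analyst who clearly addressed 5 of 7 criteria scores ~0.71.
print(completeness_score({
    "identified_affected_resource", "checked_access_logs",
    "established_timeline", "assessed_data_exposure", "determined_root_cause",
}))
```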
What happened under pressure
Here’s the part that maps directly to analyst fatigue:
- Manual group: Completeness dropped sharply, falling 29% from the first scenario to the second.
- Dropzone AI group: Completeness dipped only 16% over the same sequence.
The direction is unsurprising; the magnitude is what matters. AI assistance roughly halved the loss of thoroughness across scenarios. Analysts using Dropzone were more resistant to fatigue, continuing to hit the expected steps even as cognitive load built up.
Writing depth is a window into rigor
Completeness tells you what steps the analyst covered. Written depth tells you how thoroughly they did them. We measured the average word count of investigation steps and conclusions across scenarios.
- Manual group: Average step length fell 27% and conclusion length fell 20% from scenario 1 to scenario 2.
- Dropzone AI group: Average step length increased 7%, while conclusion length dipped 14%.
On a busy day, short, thin notes usually correlate with skipped checks. In the manual cohort, that’s exactly what we observed. The steeper drop in written report detail lined up with the steeper decline in investigation completeness. With Dropzone AI, analysts kept more of the required steps and also preserved their written depth.
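As a rough illustration (not the study’s actual measurement pipeline), written depth can be approximated by averaging word counts over the free-text fields of each write-up. The field names below are assumptions; substitute whatever your case-management tool exports:

```python
# Rough sketch of a written-depth metric: average word count across the
# free-text fields of an investigation write-up. Field names are assumed
# for illustration only.
def word_count(text: str) -> int:
    return len(text.split())

def written_depth(investigation: dict) -> dict:
    steps = investigation.get("steps", [])        # list of step descriptions
    conclusion = investigation.get("conclusion", "")
    return {
        "avg_step_words": sum(word_count(s) for s in steps) / max(len(steps), 1),
        "conclusion_words": word_count(conclusion),
    }

# Example write-up with three short steps and a one-line conclusion.
example = {
    "steps": [
        "Reviewed CloudTrail for PutBucketAcl events on the flagged bucket.",
        "Confirmed the bucket policy change came from a known admin role.",
        "Checked access logs for anonymous reads after the change.",
    ],
    "conclusion": "No external access observed; change was authorized. Closing as benign.",
}
print(written_depth(example))
```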
Accuracy still matters, so did AI inflate confidence?
Accuracy stayed higher with AI. We measured whether participants correctly determined, for each scenario, if the investigation required further action. In the AWS S3 scenario, Dropzone users reached 97% accuracy in their conclusions vs. 68% for the manual group. In the Microsoft Entra scenario, it was 85% for Dropzone users vs. 63% for manual. The study’s overall takeaway puts it plainly: “Dropzone users were 22–29% more accurate” than the manual condition.
And yes, speed improved substantially. Investigations with Dropzone were 45% and 61% faster in the two scenarios, respectively.
The core finding here is that AI didn’t trade quality for speed; it helped analysts keep quality and speed.
Hands-on experience matters
Baseline attitudes toward AI in security started positive (overall 8.3/10). After using the platform, 94% of Dropzone users reported viewing AI more positively. That’s the change you get when analysts experience faster work with sustained quality in realistic scenarios.
What this means for SOC leaders
To reduce fatigue and raise baseline investigation quality, treat investigation completeness and report depth as first-class metrics. Then put an AI SOC analyst into the escalations loop to support your analysts.
Here are two practical moves:
1) Instrument investigation completeness and track it over time
Define your “expected investigative steps” per alert type (seven worked well in our rubric). Monitor the rate of steps completed and the drop-off across shifts or scenarios. Expect AI assistance to flatten that decline; a tracking sketch follows these two moves.
2) Use written depth as a quality signal (not just a paperwork chore)
Shorter notes aren’t always bad, but a consistent pattern of shrinking documentation during busy periods is a warning. In our study, the manual group’s 27% step-length drop mirrored the 29% completeness drop. On the other hand, with AI, the notes held up far better in both categories. Align your QA to reward clear, stepwise documentation, and let AI help scaffold it.
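To make both moves concrete, here is a minimal sketch, assuming a simple record shape (shift label, criteria addressed, free-text notes) rather than any particular vendor API, of how you might track completeness and note length per shift and flag sharp drop-offs:

```python
# Minimal sketch (assumed data shape, not a vendor API): track completeness
# and written depth per shift and flag drop-off. Each record is one closed
# investigation with the rubric criteria it addressed and its notes.
from collections import defaultdict
from statistics import mean

EXPECTED_STEPS = 7           # from the rubric for this alert type
DROP_ALERT_THRESHOLD = 0.20  # flag a >20% decline between shifts

def shift_metrics(investigations):
    """Group investigations by shift; compute average completeness and note length."""
    by_shift = defaultdict(list)
    for inv in investigations:
        by_shift[inv["shift"]].append(inv)
    metrics = {}
    for shift, invs in sorted(by_shift.items()):
        metrics[shift] = {
            "completeness": mean(len(i["criteria_addressed"]) / EXPECTED_STEPS for i in invs),
            "avg_note_words": mean(len(i["notes"].split()) for i in invs),
        }
    return metrics

def flag_dropoff(metrics):
    """Yield warnings when completeness or note length falls sharply shift over shift."""
    shifts = sorted(metrics)
    for prev, curr in zip(shifts, shifts[1:]):
        for key in ("completeness", "avg_note_words"):
            before, after = metrics[prev][key], metrics[curr][key]
            if before and (before - after) / before > DROP_ALERT_THRESHOLD:
                yield f"{key} fell {100 * (before - after) / before:.0f}% from {prev} to {curr}"

# Example with two shifts of toy data: a thorough day shift, a thin night shift.
history = [
    {"shift": "2025-10-06-day", "criteria_addressed": ["a", "b", "c", "d", "e", "f"],
     "notes": "Detailed walkthrough of each access-log query and finding. " * 10},
    {"shift": "2025-10-06-night", "criteria_addressed": ["a", "b", "c"],
     "notes": "Looked fine. Closed."},
]
for warning in flag_dropoff(shift_metrics(history)):
    print(warning)
```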
Bottom line
- Investigation completeness: AI support halved the drop-off across scenarios.
- Written report depth: AI support helped maintain detail when it normally erodes.
- Accuracy and speed: AI support improved both.
If you’re exploring AI SOC analyst capabilities, the evidence is compelling. For the full methodology, scoring criteria, scenario details, and more, download the study here. It includes the granular data and methodology that we used to measure analyst performance with and without AI assistance. It also includes the following results:
- Years of experience
- Accuracy of the investigation
- Speed of investigation
- Completeness of investigation
- Length of report responses
- Perceived difficulty of investigation
- Confidence in investigation findings
- Attitude toward AI in security
- Likelihood to recommend Dropzone AI
- Perceived change in investigation speed from the platform
- Experience with the platform
- Change in opinion about AI in security
- Themes from open-ended feedback about the platform
The report measures each scenario against seven core investigative criteria, with clear definitions of what counted as “complete.” This allows the report to act as a model you can adapt to evaluate your own SOC’s investigations.
The two scenarios used were an AWS S3 public access alert and a Microsoft Entra failed login attempt. Both are the kind of alerts real SOCs face daily. The report includes step-by-step scoring rubrics and example analyst responses. With these, you can benchmark your team’s investigations against a wider sample of peers.
Finally, the appendices in the report provide practical guardrails that SOC analysts flagged during their investigations. These recommendations tie directly to where analysts stumbled or succeeded in the scenarios.
If you’re evaluating the usefulness of an AI SOC analyst, this report gives you the evidence you need. Download the full study today and see how your SOC could benefit.