
What is a Risk Engineer?

Published 03/02/2026

Written by Tomer Roizman, CTO and Co-Founder of Lema.ai.

I've spent my career as an elite security researcher hunting vulnerabilities. My job has always been to think like an attacker: find the gaps and exploit the loopholes.

When I bring that same mindset to third-party risk, I find exactly what I expect: companies are treating their biggest attack surface with spreadsheets and self-reported questionnaires. The discipline that should be engineering risk is stuck doing compliance theater.

This post is about changing that. It's about what happens when you apply vulnerability research thinking to vendor risk.

 

Key Takeaways

  • Traditional TPRM catches compliance, not exposure—real risk lives in how tools are actually used.
  • AI and modern SaaS silently expand blast radius when defaults, usage, and behavior go unchecked.
  • Risk engineering applies attacker thinking to vendor risk to find what’s actually broken.

 

Why TPRM is Broken

Your vendor passed the assessment. SOC 2 Type II, privacy controls available, approved.

Then you discovered your developers have been sending production secrets to an AI-powered code editor for six months because privacy mode was off by default and nobody knew to turn it on.

Or you learned from a class action lawsuit that your business communications platform has been using customer call recordings to train AI without consent, and they added a Philippines-based transcription service that's processing customer SSNs spoken on support calls.

Or your customer engagement platform quietly removed "we do not sell your data" from their privacy policy after a breach and lawsuit, and you found out months later.

The assessment didn't catch it because it asked if controls exist, not if anyone's using them.

TPRM is an audit process applied to an engineering problem. And audits can't find what vendors don't tell you.

What is Risk Engineering?

A risk engineer finds what could actually go wrong, not what vendors say about their controls.

The output isn't a score. It's a finding: this is broken, this is what breaks if the vendor fails, and here's how to fix it.

It's the difference between an audit and a penetration test. One asks whether you're secure. The other tries to prove you aren't, and prepares you for the moment something breaks.

What Risk Engineering Looks Like in Practice

Risk engineering is required when risk isn't obvious from documentation alone. Sometimes risk emerges because usage changes. Other times, the risk exists from the start but only becomes visible once you understand how the relationship actually works.

Here's how an AI-powered code editor would be treated under TPRM vs. risk engineering.

The Vendor: Provides AI code completion and editing for developers.

Your Environment: 50 backend developers writing code with database credentials, API keys, and customer data queries.

 

Current TPRM vs. Risk Engineering

What you do
  • Current TPRM: Send a security questionnaire.
  • Risk engineering: Read the vendor documentation, analyze default settings, and correlate with how your developers actually use the tool.

Questions you ask
  • Current TPRM: Do you have data privacy controls? Can users control what data is shared? Is data encrypted? Do you have a SOC 2?
  • Risk engineering: What does the vendor collect by default? Is privacy mode on or off by default? What are our developers actually doing in this tool? What's in the code they're writing?

What you find
  • Current TPRM: ✅ Privacy controls available. ✅ Privacy mode available. ✅ TLS 1.3. ✅ SOC 2 Type II.
  • Risk engineering: Privacy mode is OFF by default. When it's off, the vendor collects all code written, all prompts, all edits, and all files opened. Your developers write backend code with database queries and API integrations, and they hardcode API keys during development. Production AWS credentials have likely been sent to the vendor.

Time spent
  • Current TPRM: 2-3 hours.
  • Risk engineering: 90 seconds.

Assessment result
  • Current TPRM: Low risk. Privacy controls available. Approved.
  • Risk engineering: High risk. Production secrets exposed.

What the vendor can honestly say
  • Current TPRM: "Yes, we have privacy controls. Users can enable privacy mode."
  • Risk engineering: The same thing. It just doesn't matter.

What actually matters to you
  • Current TPRM: Whether privacy controls exist.
  • Risk engineering: Whether anyone is actually using them.

Actions required
  • Current TPRM: None. Vendor approved.
  • Risk engineering:
      1. Audit all developer installations
      2. Enable privacy mode organization-wide
      3. Rotate any API keys that may have been exposed
      4. Add privacy mode to the developer onboarding checklist
 

Why Current TPRM Failed

The vendor answered every question honestly:

  • "Do you have privacy controls?" → Yes
  • "Can users control data sharing?" → Yes
  • "Is data encrypted?" → Yes

All true. All compliant. All useless.

Because the real question isn't "do privacy controls exist?"

The real question is "are your developers sending production secrets to a third party right now?"

Current TPRM can't answer that. Risk engineering can.

What Risk Engineering Offers

Current TPRM tools are built for auditors. Risk engineering is built for, well, risk engineers. It gives you three ways to find what vendors don't tell you:

Third-Party Artifacts — Analyzes SOC 2 reports, penetration tests, security policies.

Public Intelligence — Monitors breaches, lawsuits, policy changes, subprocessor additions.

Blast Radius Monitor — Connects to Okta, Wiz, Netskope. Shows who's using each vendor and what permissions they have.

These three sources work together to find actual exposure:

Not "vendor has controls available."

But "privacy mode OFF by default + 50 developers using it + none enabled privacy mode = production secrets exposed right now."
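That correlation is mechanical enough to write down. A sketch of the rule, with hypothetical field names standing in for whatever your vendor docs and usage telemetry actually expose:

```python
from dataclasses import dataclass

@dataclass
class VendorExposure:
    # Hypothetical fields, fed from vendor documentation (the first two)
    # and from identity/SaaS usage telemetry (the last two).
    collects_code_by_default: bool
    privacy_mode_default_on: bool
    active_users: int
    users_with_privacy_mode: int

def assess(v: VendorExposure) -> str:
    """Controls that merely exist, but aren't enabled, don't reduce exposure."""
    unprotected = v.active_users - v.users_with_privacy_mode
    if v.collects_code_by_default and not v.privacy_mode_default_on and unprotected > 0:
        return f"HIGH: {unprotected} users sending data with no privacy controls enabled"
    return "LOW: no unprotected usage detected"

# The code-editor scenario from the table above: 50 developers, none of
# whom turned on a privacy mode that ships disabled.
editor = VendorExposure(True, False, 50, 0)
```

The point of the sketch is the shape of the inputs: none of them come from a questionnaire answer, and the verdict flips entirely on the usage columns.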

Risk engineers can finally verify what's actually happening, understand their actual exposure, and take specific action.

FAQs

What is the core difference between Risk Engineering and traditional TPRM?

Traditional TPRM is a "check-the-box" audit process that relies on a vendor’s self-reported data (like SOC 2 reports). Risk Engineering is a proactive security discipline. It focuses on the live interface between a vendor and your organization, using forensic artifact analysis and real-time monitoring to identify actual production exposure, not just theoretical compliance.

 

How does Risk Engineering handle "Shadow AI" and vendor sprawl?

Static questionnaires can’t catch what your procurement team doesn’t know exists. Risk Engineering integrates with your security stack (e.g., Wiz, Netskope) to detect unsanctioned AI tools and integrations as they happen. By mapping the blast radius of these tools, it allows security teams to mitigate risks before they bypass governance.
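At its core, shadow-tool detection is a diff between what your security stack observes and what procurement approved. A toy sketch with made-up tool names (a real pipeline would normalize app identifiers from a CASB export before comparing):

```python
def find_shadow_tools(discovered: set[str], sanctioned: set[str]) -> set[str]:
    """Tools observed in the environment but never approved by procurement.

    `discovered` would come from a CASB/SSPM export; `sanctioned` from your
    vendor inventory. Both inputs here are illustrative assumptions.
    """
    return discovered - sanctioned

# Illustrative data only.
observed = {"ai-code-editor", "chat-assistant", "slack", "notion"}
approved = {"slack", "notion"}
shadow = find_shadow_tools(observed, approved)
```

Each item in the resulting set then gets the same blast-radius treatment as a sanctioned vendor: who is using it, and with what permissions.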

 

Can a vendor be compliant but still pose a high risk?

Absolutely. Compliance is a snapshot of a vendor's past; risk is a reality of your present. A vendor can meet all SOC 2 requirements while shipping a tool with "opt-out" privacy defaults that ingest your IP into their training models. Risk Engineering identifies these configuration drifts that traditional audits miss.

 

Does Risk Engineering replace my GRC tool?

No, it powers it. Risk Engineering replaces the manual, high-latency work inside your GRC. Risk Engineering integrates with your existing workflow to turn a static database into a live, automated defense platform that calculates real-time impact rather than just storing PDF files.


About the Author

With over a decade of experience in cybersecurity, Tomer has a distinguished background in the Israeli Intelligence Community, where he specialized in vulnerability research and led major security research projects. Prior to co-founding Lema, he served as the Research Lead at the API security unicorn Noname Security. Tomer holds an MBA from Tel Aviv University and is a recognized expert in building secure, scalable AI-driven architectures.
