SR 26-2 Replaced SR 11-7. It Also Left a Massive AI Governance Gap.
What changed, what stayed the same, and where your institution is now exposed.
By Dominick Campagna
If you're still operating like SR 11-7 is the standard, you're already behind.
On April 17, 2026, the Federal Reserve, OCC, and FDIC jointly issued SR 26-2 - the first overhaul of model risk management guidance in 15 years. Most institutions are treating it as a compliance update. It is not. SR 26-2 fundamentally changes what examiners will expect to see and how they will evaluate your program, and - critically - it leaves a significant gap around AI governance that most banks have no infrastructure to fill.
Here is what changed, what did not, and where your institution is now exposed.
What SR 26-2 Actually Is
SR 26-2 replaces SR 11-7 (2011) and SR 21-8 (2021), consolidating both under a single modernized framework. It was issued jointly by the Federal Reserve, OCC, and FDIC - three agencies instead of two - which signals broader regulatory alignment and greater staying power than prior guidance.
The stated goal is to clarify model risk management principles and establish a risk-based approach tailored to each institution's size, complexity, and model risk profile.
But the implications go well beyond a terminology refresh.
What Changed - And Why It Matters
The Supervisory Standard Shifted From Checklist to Judgment
This is the change most institutions will underestimate.
Under SR 11-7, compliance was largely structural. If you had annual reviews, a model inventory with the right fields, documented validation reports, and structurally separate validators, you were meeting the guidance. The checklist was the target.
SR 26-2 changes that. The guidance clarifies that supervisory criticism will not be issued for deviation from the guidance alone. Instead, findings will arise when insufficient risk management is linked to unsafe or unsound practices.
What this means in practice: examiners are no longer grading you on whether you followed a checklist. They are grading you on whether you can demonstrate that you understand, and are actively managing, your actual risk.
That is a harder bar for many institutions than it sounds.
The Definition of "Model" Was Narrowed
Simple arithmetic calculations and deterministic, rule-based processes are now explicitly excluded from the definition of a model. A model under SR 26-2 is a complex quantitative method that applies statistical, economic, or financial theories to process input data into quantitative estimates.
This is meaningful relief - but it also sharpens the scrutiny on what is in scope. If it is complex and consequential, there is no longer a gray area to hide in.
Materiality Now Drives Everything
SR 26-2 introduces a formal framework for assessing model risk based on four components: inherent risk, exposure, purpose, and materiality. Your governance effort is expected to be proportional to actual risk - not uniform across all models.
High-materiality models still require rigorous validation. Lower-materiality models can be governed more efficiently. But the institution is now responsible for making that determination and defending it to an examiner.
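To make that determination defensible, many institutions reduce the four components to a documented tiering rule. As a rough sketch only: the four component names come from SR 26-2, but the 1-5 scales, weights, and tier cutoffs below are illustrative assumptions, not regulatory values.

```python
# Hypothetical sketch of a risk-based model tiering rule.
# The four components (inherent risk, exposure, purpose, materiality)
# come from SR 26-2; the scoring scales, weights, and cutoffs are
# illustrative assumptions an institution would set and defend itself.

def model_tier(inherent_risk: int, exposure: int, purpose: int, materiality: int) -> str:
    """Combine the four components (each scored 1-5 here) into a
    governance tier that sets the rigor of validation and review."""
    for score in (inherent_risk, exposure, purpose, materiality):
        if not 1 <= score <= 5:
            raise ValueError("component scores are assumed to be 1-5")
    # Illustrative weighting: materiality drives the outcome most heavily.
    composite = 0.4 * materiality + 0.2 * (inherent_risk + exposure + purpose)
    if composite >= 4.0:
        return "high"    # full independent validation, annual review
    if composite >= 2.5:
        return "medium"  # targeted validation, periodic review
    return "low"         # streamlined review, documented rationale

# A high-materiality credit model vs. a low-stakes internal estimate:
print(model_tier(4, 5, 4, 5))  # high
print(model_tier(2, 1, 2, 1))  # low
```

Whatever the actual rule looks like, the point SR 26-2 makes is that the institution must be able to walk an examiner through it and defend each tier assignment.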
BSA/AML Is Consolidated
SR 21-8 is absorbed into SR 26-2. One framework now governs all model risk management, including BSA/AML compliance models. The table of what SR 26-2 replaces is longer than most people realize - it supersedes multiple OCC bulletins going back to 1997 in addition to the Fed and FDIC guidance.
Vendor Models Are Held to the Same Standard
This one does not get enough attention. SR 26-2 explicitly reaffirms that vendor and third-party models are subject to the same model risk management principles as internally developed models.
If a vendor enables an AI feature by default and your team starts using it, you own the governance. The model being external does not make the risk external.
Gen AI Is Out of Scope for MRM - But Not Out of Scope for Risk
Buried in footnote 3 of SR 26-2 is the most consequential sentence in the entire document:
"Generative AI and agentic AI models are novel and rapidly evolving. As such, they are not within the scope of this guidance."
At first glance this sounds like relief. It is not.
The guidance immediately follows with: banks are still expected to apply their existing risk management and governance practices to determine appropriate controls for any tools and systems outside this document's scope.
Translation: Gen AI and agentic AI are out of scope for formal MRM - but they are fully in scope for governance. And the agencies have explicitly signaled a forthcoming request for information on AI models, including generative AI and agentic AI. Formal guidance is coming.
The gap this creates is real. Right now, most banks have robust MRM infrastructure for traditional models and almost nothing purpose-built for AI governance. That is exactly where examiners will look.
What Examiners Will Actually Ask For
This is where institutions will get caught off guard.
Examiners are not going to walk in and ask whether you read SR 26-2. They are going to ask:
Show me your AI inventory. What systems are running, who approved them, and when?
Walk me through how a vendor AI feature gets evaluated before your team uses it.
You have Copilot agents built by individuals across business units. Where are they documented? What controls exist?
You have ML models developed in Python and Dataiku. Are they in your model inventory? Who validated them?
What happens when a model is used beyond its original intended purpose?
If the answer to any of those is "we are working on it" or "that lives in a spreadsheet somewhere," that is a finding waiting to happen. Not because you violated a rule, but because you cannot demonstrate control.
Real-World Scenarios Playing Out Right Now
These are not hypothetical. They are happening at institutions across the country:
Scenario 1: Copilot agents are developed by individuals across business units and used in day-to-day operations - but never formally approved, inventoried, or monitored. The business sees productivity. The examiner sees uncontrolled AI deployment.
Scenario 2: A vendor enables an AI feature by default as part of a software update. The team starts using it. No one evaluated the model, documented the approval, or assessed whether it falls under MRM. The institution owns the risk.
Scenario 3: An internal ML model built in Python sits outside the model inventory because the team that built it did not know they needed to register it. It is influencing real decisions. No one validated it.
Each of these is a governance gap. Each of them is exactly what SR 26-2 - and the AI guidance that follows - will hold institutions accountable for.
The Gap SR 26-2 Creates - And What Fills It
SR 26-2 does something important by carving out Gen AI: it acknowledges that the regulatory framework has not caught up to where institutions actually are. Banks are already deploying Gen AI tools, vendor AI features, and internally built agents. The technology is in production. The governance infrastructure is not.
That gap - between AI already in use and the governance needed to manage it - is exactly where most institutions have zero infrastructure today.
What examiners will expect to see, even without formal Gen AI MRM guidance:
A centralized inventory of AI systems, vendors, and use cases
Documented approvals and audit trails for AI tools entering the environment
Continuous monitoring across vendor and internally developed AI
A clear, documented approach to AI governance and oversight that you can walk an examiner through
How LucidTrust Is Built for This Moment
SR 26-2 leaves institutions responsible for governing AI without prescribing how to do it. That is the layer LucidTrust is built for.
We give institutions the infrastructure to govern AI across the full environment - vendor AI, internally developed models, and agentic tools moving toward production - with the visibility, documentation, and audit trail that examiners will ask for.
The institutions that build this now will demonstrate examiner-readiness today and be positioned ahead of the formal Gen AI guidance that is coming. The institutions that wait will be building under pressure, after the fact, with an examiner already in the room.
SR 26-2 created the gap. LucidTrust closes it.
If you are preparing for an exam or trying to understand your exposure, we are happy to compare notes.
Key Takeaways
SR 26-2 replaces SR 11-7 and SR 21-8 as of April 17, 2026
The standard shifted from checklist compliance to demonstrated risk judgment - a harder bar than most institutions realize
The definition of "model" is narrowed; simple calculations and rule-based processes are excluded
Materiality now drives governance rigor - institutions must assess and defend their own risk tier decisions
Vendor and third-party models are held to the same standards as internally developed models
Gen AI and agentic AI are excluded from formal MRM scope - but fully in scope for governance
A formal regulatory request for information on Gen AI governance is forthcoming
The governance gap between AI already in use and infrastructure to manage it is where most institutions are exposed today
Sources
Federal Reserve SR 26-2: https://www.federalreserve.gov/supervisionreg/srletters/SR2602.htm
SR 26-2 Full Guidance Document: https://www.federalreserve.gov/supervisionreg/srletters/SR2602a1.pdf
OCC Bulletin 2026-13: https://www.occ.treas.gov/news-issuances/bulletins/2026/bulletin-2026-13.html
OCC News Release 2026-29: https://www.occ.treas.gov/news-issuances/news-releases/2026/nr-occ-2026-29.html
LucidTrust helps banking organizations build audit-ready AI governance programs. To learn more, visit lucidtrust.ai or reach out to our team.

