A step-by-step guide to building an AI system register and scoring AI risk at banks, credit unions, and fintechs. Includes a practical scoring framework aligned to the EU AI Act and FS AI RMF.
How to Build Your AI Register: A Risk Scoring Framework for Financial Institutions
Dom Campagna
Every AI governance framework starts the same way: know what you have.
The EU AI Act requires it. The US Treasury's new Financial Services AI Risk Management Framework assumes it. ISO 42001 demands it. And yet, when most compliance leaders at banks, credit unions, lenders, and fintechs are asked to produce a complete inventory of the AI systems their institution uses, the answer is usually a spreadsheet with six rows and a lot of question marks.
This is not a documentation problem. It is a visibility problem. And until you solve it, every other compliance activity (obligation mapping, risk assessment, control implementation, audit preparation) is built on an incomplete foundation.
This post walks through how to build an AI register that is actually useful: what to include, how to score each system for risk, and how to turn the register into the operational backbone of your AI governance program.
What Is an AI Register and Why Does It Matter Now?
An AI register (also called an AI inventory or AI system inventory) is a centralized catalog of every AI system your institution develops, deploys, or procures from a third party. For each system, the register captures what it does, what data it uses, who it affects, and what risk it presents.
If that sounds like a Record of Processing Activities (RoPA) for AI, that is roughly the right mental model. Like the GDPR's RoPA requirement, an AI register is the foundational artifact that makes all downstream governance activities possible.
The urgency is real. The EU AI Act's high-risk obligations take effect on August 2, 2026. Before your institution can complete a conformity assessment, implement required controls, or register high-risk systems in the EU database, you need to know which systems are in scope. That starts with the register.
On the US side, the Treasury's FS AI RMF, released in February 2026 with 230 control objectives across governance, data, model development, monitoring, and third-party risk, assumes an institution has a working AI inventory as the starting point for its entire risk and control matrix. Without one, you cannot even complete the framework's AI adoption stage questionnaire, which is the first step in the process.
The Problem with Most AI Inventories
Most financial institutions that have started this work are tracking AI systems in a spreadsheet. Typically it looks something like this:
| AI System | Owner | Risk? | Status |
|---|---|---|---|
| Credit model | TBD | High? | No review |
| Chatbot (vendor) | ??? | Unknown | Ask IT |
| Fraud detection | Security | Not assessed | Outdated |
This approach has predictable failure modes.
Incomplete coverage. The spreadsheet captures the systems someone remembered to add. It misses the AI features embedded in your core banking platform, your CRM, your document management tools, and the generative AI tools your teams adopted without telling anyone.
No standardized risk classification. Everything is labeled "high risk" or "unknown," which tells leadership nothing actionable and does not hold up under regulatory scrutiny.
No connection to obligations. The inventory exists in isolation. It does not map to specific regulations, control objectives, or evidence requirements.
No ownership. Systems are listed, but nobody is accountable for keeping entries current or responding to incidents.
Static. The spreadsheet was accurate on the day it was created. It has not been updated since.
An AI register needs to solve all five of these problems to be worth the effort of building it.
What to Include in Your AI Register
For each AI system, capture the following. This is not an exhaustive list, but it covers the fields that matter most for regulatory compliance and internal governance.
System identification: System name, version, vendor (if third-party), internal system ID, deployment date.
Use case context: What business process does this system support? What decisions does it inform or automate? What is the intended purpose? This matters because the EU AI Act classifies risk based on intended use, not the underlying technology.
Data profile: What data does the system ingest? Does it process personal data? Customer financial data? Where does the training data come from? Is the data sourced internally, from a vendor, or from a public dataset?
Affected populations: Who is impacted by this system's outputs? Customers, employees, applicants? In which jurisdictions? This drives your fundamental rights impact assessment obligations under the EU AI Act.
Ownership: Who is the business owner? Who is the technical owner? Who is responsible for ongoing monitoring? The FS AI RMF is explicit that control objectives require named owners and documented accountability chains.
Regulatory applicability: Which regulations apply to this system? EU AI Act (and which tier?), FS AI RMF, ISO 42001, Colorado AI Act, GDPR, sector-specific guidance from your prudential regulator?
Human oversight: Is there a human-in-the-loop? What are the escalation and override mechanisms? For high-risk systems in credit and lending, this is a non-negotiable requirement.
Risk score: A composite score based on a standardized framework. More on this below.
A Practical AI Risk Scoring Framework
Here is a scoring framework designed for financial institutions. It is deliberately simple. Compliance teams need something they can apply consistently across dozens of systems without getting lost in methodology debates. You can refine it over time as your program matures.
The Five Risk Dimensions
Score each AI system on five dimensions, each rated 1 (low) to 5 (critical):
1. Decision Impact
How consequential are the decisions this system influences?
1: Internal operational support only, no customer impact
2: Informs internal decisions that indirectly affect customers
3: Directly influences customer-facing decisions with moderate impact
4: Automates or substantially drives decisions on credit, pricing, or access to financial products
5: Sole or primary decision-maker for outcomes that materially affect a customer's financial standing
2. Data Sensitivity
What type of data does the system process?
1: Non-personal, non-financial data only
2: Aggregated or anonymized data with low re-identification risk
3: Personal data (names, contact details, account information)
4: Sensitive financial data (income, credit history, transaction records)
5: Special category data or data that could enable discrimination (race, health, biometric)
3. Autonomy Level
How much human oversight exists in the decision path?
1: Fully manual process; AI provides informational support only
2: AI recommends, human always reviews and decides
3: AI decides by default, human reviews exceptions or samples
4: AI decides with limited human review (spot checks only)
5: Fully autonomous, no human in the loop
4. Regulatory Exposure
How many regulatory frameworks explicitly apply to this system's use case?
1: No specific AI regulation applies
2: One voluntary framework applies (e.g., NIST AI RMF)
3: One binding regulation applies (e.g., Colorado AI Act)
4: Multiple regulations apply (e.g., EU AI Act + GDPR + sector-specific)
5: Classified as high-risk under the EU AI Act Annex III
5. Vendor Dependency
How much control does your institution have over the AI system?
1: Fully developed and maintained in-house
2: In-house with open-source components
3: Third-party system with full configuration access and transparency
4: Third-party system with limited transparency into model behavior
5: Black-box vendor system with no visibility into training data, model updates, or decision logic
Calculating the Composite Score
Add the five dimension scores for a total between 5 and 25. Then classify:
| Score Range | Risk Tier | What It Means |
|---|---|---|
| 5-8 | Low | Minimal governance overhead. Document and monitor annually. |
| 9-13 | Moderate | Standard governance applies. Assign an owner, map obligations, review semi-annually. |
| 14-18 | High | Enhanced governance required. Full obligation mapping, bias testing, continuous monitoring, board-level reporting. |
| 19-25 | Critical | Maximum governance. Likely classified as high-risk under the EU AI Act. Requires conformity assessment, fundamental rights impact assessment, and documented human oversight controls. |
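The composite score and tier mapping above reduce to a few lines of code. A minimal sketch in Python; the function and dimension names are illustrative:

```python
# Minimal sketch of the composite scoring and tiering logic described above.
# Thresholds come from the tier table in this post; names are illustrative.

DIMENSIONS = (
    "decision_impact",
    "data_sensitivity",
    "autonomy_level",
    "regulatory_exposure",
    "vendor_dependency",
)

def composite_score(scores: dict[str, int]) -> int:
    """Sum the five 1-5 dimension scores into a 5-25 composite."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    for dim in DIMENSIONS:
        if not 1 <= scores[dim] <= 5:
            raise ValueError(f"{dim} must be between 1 and 5")
    return sum(scores[dim] for dim in DIMENSIONS)

def risk_tier(total: int) -> str:
    """Map a 5-25 composite score onto the four risk tiers."""
    if total <= 8:
        return "Low"
    if total <= 13:
        return "Moderate"
    if total <= 18:
        return "High"
    return "Critical"
```

For instance, a system scoring (5, 4, 3, 5, 4) sums to 21 and lands in the Critical tier; the validation guards catch a partially scored system before it silently distorts your prioritization.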
Applying the Framework: Two Examples
Example 1: Internal meeting summarization tool
A vendor-provided generative AI tool used by operations staff to summarize internal meeting notes. No customer data, no decision-making impact.
Decision Impact: 1
Data Sensitivity: 1
Autonomy Level: 1
Regulatory Exposure: 1
Vendor Dependency: 4
Total: 8 (Low). Document it, assign an owner, review annually. Move on.
Example 2: AI-powered credit decisioning engine
A vendor-provided model that evaluates creditworthiness for consumer loan applications, processing applicant financial data and producing approve/deny recommendations that loan officers typically follow.
Decision Impact: 5
Data Sensitivity: 4
Autonomy Level: 3
Regulatory Exposure: 5
Vendor Dependency: 4
Total: 21 (Critical). This system requires the full governance stack: conformity assessment, ongoing bias audits, documented human-in-the-loop controls, continuous performance monitoring, incident reporting workflows, and regular board-level reporting.
From Register to Operating System
Building the register and scoring your systems is not the end goal. It is the starting point. The register becomes valuable when it connects to the rest of your governance program.
Map control objectives to each system. The FS AI RMF's 230 control objectives span governance, data, model development, monitoring, and third-party risk. For each system in your register, identify which control objectives apply based on its risk tier and the regulations in scope. A low-risk internal tool might trigger 20 control objectives. A critical credit decisioning system might trigger 150 or more.
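Tier-driven control scoping can be expressed as a simple lookup. This is a hypothetical sketch: the control IDs below are placeholders, not actual FS AI RMF objectives. The idea is that each control declares the lowest risk tier at which it applies.

```python
# Hypothetical sketch of tier-driven control scoping. Control IDs are
# placeholders, not real FS AI RMF objectives: each control declares the
# lowest risk tier at which it becomes applicable.

TIER_ORDER = {"Low": 0, "Moderate": 1, "High": 2, "Critical": 3}

controls = [
    {"id": "GOV-01", "min_tier": "Low"},        # e.g. basic documentation
    {"id": "DATA-07", "min_tier": "Moderate"},  # e.g. data lineage review
    {"id": "MON-12", "min_tier": "High"},       # e.g. continuous bias monitoring
    {"id": "GOV-22", "min_tier": "Critical"},   # e.g. board-level reporting
]

def applicable_controls(system_tier: str) -> list[str]:
    """Return IDs of controls whose minimum tier is at or below the system's tier."""
    rank = TIER_ORDER[system_tier]
    return [c["id"] for c in controls if TIER_ORDER[c["min_tier"]] <= rank]

print(applicable_controls("Moderate"))  # ['GOV-01', 'DATA-07']
print(applicable_controls("Critical"))  # all four placeholder controls
```

This is why the low-risk tool triggers a few dozen objectives while the critical system triggers well over a hundred: higher tiers inherit every control from the tiers below them.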
Tie the register to your risk register. Each AI system should have corresponding entries in your enterprise risk register, with risks linked to specific controls, control owners, and evidence artifacts. When an examiner asks about your AI risk posture, the answer should trace from system to risk to control to evidence rather than rest on a narrative explanation.
Build monitoring workflows. For high and critical-tier systems, define what continuous monitoring looks like: performance metrics, bias audit frequency, vendor update tracking, incident escalation paths. The EU AI Act requires deployers to monitor system performance in line with provider instructions and report serious incidents without undue delay. Those workflows need to exist before an incident occurs.
Report to leadership. The register gives you the data to produce a governance dashboard that leadership and the board actually care about: how many AI systems are in production, how they are distributed across risk tiers, which have open control gaps, which vendors have changed their AI features recently, and where your regulatory exposure is concentrated.
Why Spreadsheets Break Down
If your institution has three AI systems, a spreadsheet works fine. Most institutions, once they do a thorough inventory, discover they have 15 to 50 or more, especially when they account for AI features embedded in vendor tools they already use.
At that scale, the spreadsheet fails. Version control becomes a nightmare. There is no way to link systems to obligations, controls, or risks without building an increasingly fragile web of cross-referenced tabs. Ownership and accountability live in someone's head rather than in the system. And when the vendor pushes an update that changes how an AI feature behaves, there is no mechanism to flag that your risk profile just shifted.
This is the gap that purpose-built AI governance platforms fill. LucidTrust was designed from the ground up for regulated financial institutions to manage exactly this problem: a centralized AI register connected to use cases, models, vendors, obligation mappings, risk scores, controls, and incident workflows in one platform. Instead of maintaining a static spreadsheet, your compliance team operates from a living system that evolves as your AI adoption scales and as regulations change.
[FREE AI Governance Committee Playbook]
Getting Started This Week
You do not need to build the perfect register on day one. You need to start.
Week 1: Send a cross-functional survey to technology, data science, operations, and business unit leads asking a simple question: what AI-powered tools, models, or features does your team use? Include vendor-provided tools. Include anything with "AI," "ML," "predictive," or "automated" in the description. Cast the net wide.
Week 2: Consolidate responses into your register format. For each system, fill in what you know and flag what you do not. Assign a preliminary owner to each system.
Week 3: Apply the risk scoring framework. Sort systems by composite score. Identify your critical and high-tier systems. These are your priority for full obligation mapping and control implementation.
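The Week 3 triage is a sort-and-filter over the register. A small sketch; the system names and scores here are made up for illustration, and the tier thresholds follow the framework earlier in the post:

```python
# Hypothetical Week 3 triage: sort register entries by composite score and
# surface the high and critical tiers first. Names and scores are examples.

def tier(total: int) -> str:
    """Map a 5-25 composite score to its risk tier (thresholds from the framework)."""
    if total <= 8:
        return "Low"
    if total <= 13:
        return "Moderate"
    if total <= 18:
        return "High"
    return "Critical"

register = [
    {"name": "Meeting summarizer", "score": 8},
    {"name": "Credit decisioning engine", "score": 21},
    {"name": "Fraud detection model", "score": 16},
]

# Highest-risk systems first
triage = sorted(register, key=lambda e: e["score"], reverse=True)
priority = [e["name"] for e in triage if tier(e["score"]) in ("High", "Critical")]
print(priority)  # ['Credit decisioning engine', 'Fraud detection model']
```

The `priority` list is your Week 3 output: the systems that move straight into full obligation mapping and control implementation.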
Week 4: Present the register to your AI governance committee (or equivalent oversight body). Use it to drive a conversation about where your gaps are, which systems need immediate attention, and what resources you need to close the gaps before August 2, 2026.
The institutions that build this foundation now will be the ones that move through the FS AI RMF's 230 control objectives methodically rather than scrambling. They will be the ones that complete EU AI Act conformity assessments with confidence. And they will be the ones that can tell their board, their examiners, and their customers exactly how AI is being governed across the enterprise.
That starts with the register.
[Book a demo to see how LucidTrust helps financial institutions build and manage their AI register]
LucidTrust is an AI governance and compliance platform built for regulated financial institutions. Learn more at lucidtrust.ai.


