Before the inspection letter arrives, every Chief Risk Officer must answer twelve questions with absolute clarity. Not in theory. Not in a steering-committee deck. In the evidence the inspector will actually request — and in the sequence supervisors use. This is the framework that separates banks that shape the inspection narrative from those that scramble through findings negotiation.
When the JST walks in, they are not verifying that you said the right things in your RAS. They are verifying that the institution behaves as the documents claim — under pressure, in real data, under the eye of an inspector who has seen how your peers actually operate. Failure is expensive and asymmetric.
A supervisory add-on of 25–75 bps on P2R is the typical price of structural findings. For a mid-sized SSM bank, €400m–€1.2bn of additional capital consumption.
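The arithmetic behind those figures can be made explicit. A minimal sketch, assuming risk-weighted assets of roughly €160bn for a mid-sized SSM bank (an illustrative figure, not one from the text):

```python
# Illustrative only: translate a P2R add-on in basis points into euro
# capital consumption. The €160bn RWA figure is a hypothetical
# assumption for a mid-sized SSM bank.

def p2r_addon_capital(rwa_eur: float, addon_bps: float) -> float:
    """Capital consumed by a Pillar 2 Requirement add-on."""
    return rwa_eur * addon_bps / 10_000  # 1 bp = 0.01%

rwa = 160e9  # assumed risk-weighted assets: €160bn

low = p2r_addon_capital(rwa, 25)   # 25 bps of P2R
high = p2r_addon_capital(rwa, 75)  # 75 bps of P2R
print(f"25 bps: €{low / 1e6:.0f}m, 75 bps: €{high / 1e9:.1f}bn")
# prints: 25 bps: €400m, 75 bps: €1.2bn
```

At that balance-sheet size, the 25–75 bps range maps directly onto the €400m–€1.2bn span quoted above.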
Follow-up letter, remediation plan, Board validation, TRAIR reassessment — 18 to 30 months of senior-management bandwidth redirected from strategy to defence.
Supervisory perception compounds. A weak OSI feeds the next SREP, the next TRIM cycle, and sets a risk-taking ceiling the Board never explicitly voted for.
CROs rarely leave after one difficult SREP. They leave after the second. The OSI is where the second one is written, months before it is published.
Most OSI failures are sequencing failures. The institution knows the answers — in fragments, in different committees, in different heads — but cannot assemble them in the order and cadence the inspector requires. We have organised the twelve questions around the four phases of the supervisory engagement cycle.
Governance, supervisory intelligence, materiality. What the inspector already believes about you before they arrive.
Data room, retrieval latency, process documentation. The operational tempo the inspection team will judge you by.
Models, stress testing, outsourcing, open findings. The substance the inspector will form a view on — and that the JST will escalate.
Draft report response, Board communication, relationship capital. How you convert a difficult finding into a manageable one.
By the time the mission letter is signed off by the JST, the inspector has already read two SREP cycles, three internal audit reports and every supervisory exchange minute. Your score is partially set. These three questions determine the baseline.
The inspector will ask for the last four Board Risk Committee minutes and trace one escalation per meeting through to resolution. "The Board was informed" is not governance. "The Board decided, dated, and required evidence of closure" is.
Supervisors telegraph their concerns. Thematic reviews, horizontal benchmarks, JST speeches and sector "Dear CEO" letters are the inspection brief in plain sight. If you cannot name the three themes your JST is escalating in 2026, you are preparing for the wrong inspection.
Most RAS documents use thresholds that were calibrated in benign conditions and have never been tested. Inspectors look for the gap between what the RAS escalates and what actually gets escalated. Where that gap exists, the inspector will document it — and so will the SREP.
An inspector forms their working hypothesis on the pace at which you deliver the first three document requests. If the initial exchanges are slow, ambiguous or mis-sequenced, the default assumption is that controls are weak elsewhere. These three questions decide that tempo.
The inspection data room is not a SharePoint site. It is a tested retrieval capability organised by scope area, indexed to the mission letter, and rehearsed under fire-drill conditions. Banks that cannot execute a blind retrieval of a board paper from 2023 in under two hours are telling the JST something about their control environment.
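What "tested retrieval capability" means in practice can be sketched as a simple evidence register keyed to the mission-letter scope. This is an illustrative structure only: the class and field names (`EvidenceItem`, `DataRoomIndex`, `retrieve`) are assumptions, not a supervisory standard.

```python
# Sketch of an inspection data-room index: every evidence item is
# registered under a unique document reference, indexed to a mission-
# letter scope area, with the version-control date inspectors check.
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    doc_ref: str        # e.g. a board-paper reference
    scope_area: str     # mapped to the mission-letter scope
    location: str       # authoritative storage path
    last_reviewed: str  # version-control date the inspector will check

class DataRoomIndex:
    def __init__(self):
        self._by_ref: dict[str, EvidenceItem] = {}

    def register(self, item: EvidenceItem) -> None:
        self._by_ref[item.doc_ref] = item

    def retrieve(self, doc_ref: str) -> EvidenceItem:
        # A blind fire-drill succeeds only if this lookup resolves
        # without a manual search through shared drives.
        if doc_ref not in self._by_ref:
            raise KeyError(f"evidence gap: {doc_ref} not indexed")
        return self._by_ref[doc_ref]
```

The fire-drill then becomes measurable: time a blind `retrieve` call end-to-end, including locating the file at `location`, against the two-hour threshold.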
Credit granting, model validation, collateral revaluation, NPL workouts, stress testing — each must be documented at the level a new hire could execute without a call. Version control matters. "Last reviewed 2022" is the single most common finding in credit OSIs and it is avoidable.
Uncoordinated meetings are the most common self-inflicted wound of an OSI. Three different SMEs answering the same question with three different numbers is a finding before it is a problem. An intake protocol — named single point of contact, pre-briefed interviewees, rehearsed responses to the 30 most likely questions — is table stakes.
Inspectors are less interested in whether your PD model is calibrated correctly than in whether you can prove how it was calibrated, who approved it, and what the override trail looks like when the result is inconvenient. These three questions determine the substantive grade of the inspection.
The model approval file must reconstruct: initial development, independent validation, Board sub-committee approval, regulatory non-objection (where applicable), annual back-testing, material changes and their re-approval. TRIM-style inspections have industrialised the request for exactly this trail. Gaps get documented; gaps get findings.
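The trail above is, in effect, a completeness checklist. A minimal sketch of that check, assuming a dict-of-dates file format (the stage keys follow the paragraph above; the format itself is an illustrative assumption):

```python
# Illustrative completeness check over the model approval-trail stages
# named in the text. The dict-of-dates format is an assumption for the
# sketch, not a supervisory standard.

REQUIRED_STAGES = [
    "initial_development",
    "independent_validation",
    "board_subcommittee_approval",
    "annual_backtesting",
    "material_changes_reapproval",
]
OPTIONAL_STAGES = ["regulatory_non_objection"]  # "where applicable"

def approval_file_gaps(file_entries: dict) -> list:
    """Return the required stages with no dated evidence entry on file."""
    return [s for s in REQUIRED_STAGES if not file_entries.get(s)]

model_file = {
    "initial_development": "2021-03-12",
    "independent_validation": "2021-06-30",
    "board_subcommittee_approval": "2021-09-14",
    "annual_backtesting": "",  # no dated evidence: this is the gap
    "material_changes_reapproval": "2023-02-01",
}
print(approval_file_gaps(model_file))  # prints: ['annual_backtesting']
```

Run across the model inventory, this is exactly the list a TRIM-style request will surface; the point of the sketch is that the bank should surface it first.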
Reverse stress tests, idiosyncratic scenarios, narrative plausibility, transmission of macro shocks into P&L and capital — if the story breaks under cross-examination, the finding writes itself. Your ICAAP is a supervisory-facing document whether you wrote it that way or not.
DORA, EBA Outsourcing Guidelines, and SSM expectations have converged: the bank remains responsible for operations it has outsourced. Inspectors pull the register, pick two critical vendors, and request the last three KPI reviews, the exit plan, and evidence of Board oversight. If any of those are missing, the inspection changes character.
Between the end of the on-site phase and the Final Written Report, there is a window — usually six to twelve weeks — in which the institution shapes the wording, severity and cadence of the findings it will live with for two SREP cycles. The last three questions decide whether you use that window.
Inspectors distinguish between action plans and remediation. An action plan fixes the symptom. Remediation fixes the mechanism. When the Draft Written Report lands, the institutions that can demonstrate root-cause discipline receive different language — and different severities — than those that cannot.
Inspectors negotiate findings with the JST coordinator. A bank that supplies evidence-grade rebuttals — dated, sourced, concise — gives the inspector material to soften or re-rank a finding. A bank that supplies commentary strengthens the original wording. This is an art form that should not be learned live.
"Well-prepared" and "transparent" are not the same thing. "Well-prepared" is the posture of a bank that has rehearsed its defences. "Transparent" is the posture of a bank that escalates what it sees, invites supervisory dialogue on difficult issues, and accepts the short-term friction of doing so. Every OSI is scored partly on this distinction. Relationship capital compounds — or corrodes — across every cycle.
The banks that navigate OSIs well are not the banks with the thickest files. They are the banks whose CROs have rehearsed the conversation — and whose documents match what the institution actually does.
If the inspection letter lands on your desk today, these are the four checkpoints you hit — in this order, at this cadence — between notification and the first on-site visit. This is what the best-prepared G-SIBs in Europe actually do. It is not theory. It is a timetable.
The first decision is structural: the OSI response is taken out of line-management and given to a named programme director reporting directly to the CRO (or CEO for severe inspections). Within 24 hours, three artefacts are produced.
This is not a self-assessment. Senior practitioners — ideally ex-supervisors — stress-test each of the twelve dimensions against what the institution can evidence today, not what it believes it can evidence. The output is a heat-map with three categories of gap: remediable in 30 days, remediable in 60, structural.
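The three-category heat-map maps directly onto remediation timelines. A minimal sketch of that triage, assuming each gap carries an estimated days-to-fix (the function name and input are illustrative):

```python
# Illustrative triage of red-team gaps into the three categories named
# above. The day thresholds follow the text; the days-to-fix input is
# an assumption for the sketch.

def triage(estimated_fix_days: int) -> str:
    """Classify a gap by how fast it can credibly be remediated."""
    if estimated_fix_days <= 30:
        return "remediable in 30 days"
    if estimated_fix_days <= 60:
        return "remediable in 60 days"
    return "structural"
```

The useful output is not the labels themselves but the third bucket: structural gaps are the ones the CRO discloses proactively in the opening narrative rather than defends live.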
Ten named interviewees — CRO, CFO, Head of Credit, Head of Models, Head of Internal Audit, CISO, Head of Outsourcing, COO and two line-1 SMEs — are interviewed under OSI conditions, in English, over three days. The transcript reveals inconsistencies before the inspector does. Governance artefacts are simultaneously tested in a second blind fire-drill.
On the morning the inspection team walks in, the CRO opens with a 40-minute institutional narrative: risk profile, material evolutions since the last SREP, known weaknesses the bank is already addressing (with dated commitments), and the three topics where the bank would welcome supervisory dialogue. This posture — confident, transparent, prepared — sets the temperature for the entire mission.
Drawn from anonymised patterns observed across recent significant-bank inspections in France, Belgium, Luxembourg and Germany. None of these are model errors. All of them are preparation errors.
The institution responds to inspector questions by referring back to three-year-old Board papers or prior supervisory correspondence. The inspector hears: the bank has not moved on. The finding is written as a governance weakness, not as a technical issue — and governance findings travel further in the SREP.
Typical cost: one additional P2R bp per finding, escalation to the JST coordinator, and 18 months of "enhanced monitoring" added to the supervisory dialogue.
Risk, compliance or internal audit raise a material issue mid-mission — often honestly, often correctly — but in front of the inspector rather than in the Day-7 diagnostic. The institution loses the ability to frame the issue. The inspector frames it, and the framing becomes a finding.
Typical cost: a public finding on a self-identified weakness, an accelerated remediation timeline (6 months vs 18), and a permanent loss of narrative control for that topic.
The Draft Written Report lands with a tight response deadline. The institution has no pre-built response protocol, no legal sign-off path, no Board validation sequence and no library of evidence. It submits a commentary — not a rebuttal. The finding goes into the Final Written Report at its original severity. Two years of SREP follow.
Typical cost: a forgone opportunity to downgrade 2–3 findings per mission, locked-in capital add-ons and a weaker position at the next supervisory dialogue.
An inspector who receives live system queries — not a pre-compiled PDF — walks out with a different view of your controls. Decks show what you want to see. Live data shows what you can see.
The best OSI outcomes come from CROs who have been in structured contact with their JST long before the letter. You do not negotiate your way out of findings. You prevent them — through disclosure discipline.
Inspectors look for discipline, not perfection. Dated decisions, tested controls, audited closures. Banks known for follow-through negotiate post-inspection with credibility; the rest negotiate from weakness.
Every Ezelman OSI engagement is led personally by senior practitioners with direct SSM, PRA and G-SIB risk-function experience. No pyramid. No first-year associates running your data room. Choose the depth that matches where you are in the cycle.
For CROs who suspect they have an OSI coming — or who want a calibrated benchmark before the letter.
A focused two-week diagnostic: the twelve-question framework applied to your institution, a red-team scoring exercise, and a Board-ready dossier with a prioritised gap list.
For institutions with a mission letter in hand — or expected within the next two quarters.
The full Day-0 through Day-60 playbook delivered shoulder-to-shoulder with your risk function. War-room mobilisation, senior interviewee rehearsals, evidence fire-drills, and a rehearsed CRO opening narrative.
For banks operating under an FWR with open findings and an impending follow-up.
Structural remediation: root-cause discipline, remediation governance, evidence architecture, and calibrated supervisory dialogue to restore relationship capital and close findings credibly.
Book a 45-minute confidential scoping call with the founder. We will walk the twelve questions against your institution, identify where the real exposure sits, and tell you candidly whether an engagement with us makes sense — or whether you can run the readiness internally.