Why It Matters
The builder of a model cannot independently verify their own assumptions. This is not a matter of skill; independence is impossible from inside the model.
The Core Problem
When a financial model is challenged in due diligence, at a board meeting, or after a decision has produced unexpected results, the problem is rarely that the mathematics was wrong. The problem is that the reasoning underneath the mathematics — the assumptions — was never tested by anyone who didn't already believe them.
A model's assumptions are set by the person building it. The scenarios are defined by the same person. The sensitivity analysis is run by the same person. When the worst case is tested, the threshold is naturally chosen at a level the builder considered plausible.
"A -15% revenue scenario looks rigorous until you ask whether that threshold was chosen because it represents a realistic external shock — or because it is the largest drop the model can absorb while still showing acceptable results."
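The inversion described in the quote can be made concrete. The following is a toy sketch, with every model, number, and threshold invented for illustration: instead of choosing a stress scenario from external evidence, it solves backwards for the largest revenue drop a simple profit model can absorb while still clearing an acceptable floor.

```python
# Hypothetical illustration: the "stress" threshold is reverse-engineered
# from the acceptable result, not chosen from external data.
# The model, costs, and acceptability floor below are all invented.

def toy_model_profit(revenue_drop: float) -> float:
    """Toy single-period model: profit after a fractional revenue drop."""
    revenue = 100.0 * (1.0 - revenue_drop)   # base revenue of 100
    fixed_costs = 60.0
    variable_costs = 0.25 * revenue
    return revenue - variable_costs - fixed_costs

def largest_absorbable_drop(min_acceptable_profit: float) -> float:
    """Bisect for the largest drop that still yields an acceptable profit."""
    lo, hi = 0.0, 1.0
    for _ in range(60):                      # bisection to near machine precision
        mid = (lo + hi) / 2
        if toy_model_profit(mid) >= min_acceptable_profit:
            lo = mid                         # model still "works": push the drop further
        else:
            hi = mid
    return lo

# If the builder needs profit of at least 3.75, the stress scenario that
# emerges is the familiar-looking one:
print(round(largest_absorbable_drop(3.75), 3))  # → 0.15
```

The point of the sketch is the direction of inference: the -15% did not come from a view about the market, it came from the model's own break-even arithmetic, which is exactly what an independent verifier is positioned to ask about.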
This is not a failure of integrity. It is a structural feature of how models are built. The model was built to answer the questions the builder was comfortable asking. Independent verification exists to ask the questions they were not.
Where This Appears
The model shows three years of growth. The assumptions behind that growth — market penetration rates, sales cycle lengths, churn at scale — were set by the team that needs the raise to close. Investors are paid to find the gap between what was assumed and what is independently supportable.
The deal model was built to evaluate a thesis. Synergy estimates, integration timelines, and revenue retention assumptions were all constructed within a framework that expected the deal to work. The counterparty and the board will challenge exactly those assumptions once the model reaches them.
The business case was prepared by the team requesting the capital. Boards are meant to challenge it — but they are reviewing a model whose assumptions have only ever been tested by the people presenting it. The scrutiny gap is structural.
When a model's assumptions fail in practice, they are discovered at the most expensive possible moment. Verification before the decision transforms a potential crisis into a correctable finding.
Our Principles
Six principles that define what independent model verification means in practice.
The value of verification comes entirely from the verifier's independence from the model's construction. We do not build models. We do not advise on their construction. We verify what has already been built.
A fully documented model can still carry structurally fragile assumptions. Documentation tells you what was assumed. Verification tests whether those assumptions would survive a challenge from someone who did not already believe them.
Before asking whether assumptions are correctly calibrated, we ask whether the model is measuring the right things. A model built for continuity cannot stress-test discontinuity — regardless of how carefully its parameters are set.
Assumption failures discovered during due diligence or after capital allocation are expensive and often irreversible. Discovered before either, they are correctable without loss of credibility or capital.
"How did you build this?" is answered by the model's structure and documentation. "Why did you assume this?" requires an independent answer that the builder cannot give for themselves. Both questions will be asked.
We do not provide assurance that projections will be achieved. We provide precise identification of which assumptions are independently defensible and which require further substantiation before the decision is made.
Our Methods
Independent model verification is not a qualitative review. It requires quantitative methods applied to the assumption layer specifically — not to the model's outputs, but to the reasoning that produced them.
Our verification methodology draws on statistical analysis, Monte Carlo simulation, and structured assumption stress-testing. These are applied not to predict outcomes but to identify the conditions under which the model's assumptions break — and whether those conditions are plausible in the context of the decision being made.
The goal is not to find errors. It is to map the fragility of the assumptions before the decision is made. A model with well-documented, independently challenged assumptions is a fundamentally different instrument from one that has only been reviewed by its authors.
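The assumption-layer stress-testing described above can be sketched in miniature. The sketch below is hypothetical: the toy revenue model, the distributions, and the target are all invented. The idea it illustrates is that instead of sampling outcomes, one samples the assumptions themselves from ranges wider than the builder's point estimates, and measures how often the model's conclusion breaks.

```python
import random

# Hypothetical sketch of assumption-layer Monte Carlo. Rather than asking
# "what outcomes does the model predict?", it asks "under how many plausible
# assumption sets does the plan fail?". All numbers are invented.

def three_year_revenue(growth: float, churn: float, base: float = 100.0) -> float:
    """Toy model: revenue compounds by growth and shrinks by churn for 3 years."""
    revenue = base
    for _ in range(3):
        revenue *= (1.0 + growth) * (1.0 - churn)
    return revenue

def fragility(n_draws: int = 20_000, seed: int = 0) -> float:
    """Fraction of assumption draws under which the plan misses its target."""
    rng = random.Random(seed)       # seeded for reproducibility
    target = 140.0                  # the plan's claimed year-3 revenue
    misses = 0
    for _ in range(n_draws):
        # Builder assumed point values (say, growth 0.20, churn 0.02);
        # the verifier samples wider, independently chosen ranges instead.
        growth = rng.uniform(0.05, 0.30)
        churn = rng.uniform(0.00, 0.10)
        if three_year_revenue(growth, churn) < target:
            misses += 1
    return misses / n_draws

print(f"plan misses its target in {fragility():.0%} of assumption draws")
```

The output is not a prediction. It is a map of how much of the plausible assumption space the plan survives, which is the question the builder's own point-estimate scenario never asks.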
In Practice
Model receipt and scoping
We receive the model and define which assumption categories are in scope for the engagement.
Assumption extraction and mapping
We identify and document the explicit and embedded assumptions driving key outputs.
Independent stress-testing
We apply quantitative methods to test whether assumptions hold under conditions the builder did not examine.
Verification report
We deliver a structured report identifying which assumptions are independently defensible and which require additional substantiation.
Independent verification and validation is most effective — and least costly — when it happens before the model reaches the room where decisions are made.
Discuss Your Engagement →