Models fail at the assumption layer, not the calculation layer

When a financial model is challenged in due diligence, at a board meeting, or after a decision has produced unexpected results, the problem is rarely that the mathematics was wrong. The problem is that the reasoning underneath the mathematics — the assumptions — was never tested by anyone who didn't already believe them.

A model's assumptions are set by the person building it. The scenarios are defined by the same person. The sensitivity analysis is run by the same person. When the worst case is tested, the threshold is naturally chosen at a level the builder considered plausible.

"A -15% revenue scenario looks rigorous until you ask whether that threshold was chosen because it represents a realistic external shock — or because it is the largest drop the model can absorb while still showing acceptable results."

This is not a failure of integrity. It is a structural feature of how models are built. The model was built to answer the questions the builder was comfortable asking. Independent verification exists to ask the questions they were not.
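To make the point concrete, here is a minimal sketch in Python. The one-line P&L and every figure in it are invented for illustration, not client data; the check itself is the point: solve for the revenue drop at which the model stops showing acceptable results, then ask how close that break-even point sits to the stress threshold the builder chose.

```python
# Sketch: is the stated stress threshold a genuine external shock,
# or simply the largest drop the model can absorb? Every number
# below is invented for illustration.

def operating_margin(revenue_drop: float) -> float:
    """Toy one-line P&L under a fractional revenue drop."""
    base_revenue = 100.0        # hypothetical baseline revenue
    fixed_costs = 58.0          # hypothetical fixed cost base
    variable_ratio = 0.30       # hypothetical variable costs as a share of revenue
    revenue = base_revenue * (1.0 - revenue_drop)
    return revenue * (1.0 - variable_ratio) - fixed_costs

def break_even_drop(lo: float = 0.0, hi: float = 1.0, tol: float = 1e-6) -> float:
    """Bisect for the revenue drop at which the margin crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if operating_margin(mid) > 0.0:
            lo = mid            # still acceptable: the break point is deeper
        else:
            hi = mid
    return (lo + hi) / 2.0

stated_stress = 0.15            # the builder's "worst case"
breaking_point = break_even_drop()
print(f"model breaks at a {breaking_point:.1%} revenue drop")
print(f"stated stress case: {stated_stress:.0%}")
print(f"threshold sits within 5pp of the breaking point: "
      f"{abs(breaking_point - stated_stress) < 0.05}")
```

When the stated worst case sits just inside the point where the model breaks, that proximity is itself a finding.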

In fundraising

The model shows three years of growth. The assumptions behind that growth — market penetration rates, sales cycle lengths, churn at scale — were set by the team that needs the raise to close. Investors are paid to find the gap between what was assumed and what is independently supportable.

In M&A

The deal model was built to evaluate a thesis. Synergy estimates, integration timelines, and revenue retention assumptions were all constructed within a framework that expected the deal to work. The counterparty and the board will challenge exactly those assumptions once the model reaches them.

In capital allocation

The business case was prepared by the team requesting the capital. Boards are meant to challenge it — but they are reviewing a model whose assumptions have only ever been tested by the people presenting it. The scrutiny gap is structural.

After the decision

When a model's assumptions fail in practice, the discovery arrives at the most expensive possible moment: after the decision has already been acted on. Verification before the decision transforms a potential crisis into a correctable finding.

How we approach verification

Six principles that define what independent model verification means in practice.

I

Independence is the product

The value of verification comes entirely from the verifier's independence from the model's construction. We do not build models. We do not advise on their construction. We verify what has already been built.

II

Documentation creates transparency; challenge creates robustness

A fully documented model can still carry structurally fragile assumptions. Documentation tells you what was assumed. Verification tests whether those assumptions would survive a challenge from someone who did not already believe them.

III

The measurement problem precedes the calibration problem

Before asking whether assumptions are correctly calibrated, we ask whether the model is measuring the right things. A model built for continuity cannot stress-test discontinuity — regardless of how carefully its parameters are set.

IV

Timing is the difference between a finding and a crisis

Assumption failures discovered during due diligence or after capital allocation are expensive and often irreversible. Discovered before either, they are correctable without loss of credibility or capital.

V

Two questions require two different answers

"How did you build this?" is answered by the model's structure and documentation. "Why did you assume this?" requires an independent answer that the builder cannot give for themselves. Both questions will be asked.

VI

Precision over assurance

We do not provide assurance that projections will be achieved. We provide precise identification of which assumptions are independently defensible and which require further substantiation before the decision is made.

Rigorous by design

Independent model verification is not a qualitative review. It requires quantitative methods applied to the assumption layer specifically — not to the model's outputs, but to the reasoning that produced them.

Our verification methodology draws on statistical analysis, Monte Carlo simulation, and structured assumption stress-testing. These are applied not to predict outcomes but to identify the conditions under which the model's assumptions break — and whether those conditions are plausible in the context of the decision being made.

The goal is not to find errors. It is to map the fragility of the assumptions before the decision is made. A model with well-documented, independently challenged assumptions is a fundamentally different instrument from one that has only been reviewed by its authors.
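As a hedged illustration of what assumption-layer simulation can look like (a toy sketch, not our engagement tooling): sweep a small projection model's assumptions across wide ranges and record not the distribution of outcomes, but how often the decision criterion fails and where the failing draws cluster. The model, the ranges, and every parameter below are invented for the example.

```python
import random

# Sketch: Monte Carlo applied to the assumption layer of a toy
# five-year projection. The aim is not to forecast the output but
# to see how often the decision criterion (NPV > 0) fails, and at
# which assumption values. All numbers are invented for illustration.

random.seed(7)

def npv(growth: float, churn: float, margin: float, rate: float = 0.10) -> float:
    """Toy NPV driven by three assumptions, less an invented upfront cost."""
    revenue, total = 100.0, 0.0
    for year in range(1, 6):
        revenue *= (1.0 + growth) * (1.0 - churn)
        total += revenue * margin / (1.0 + rate) ** year
    return total - 250.0

failures = []
for _ in range(10_000):
    draw = {
        "growth": random.uniform(0.05, 0.40),  # builder assumed 0.30
        "churn": random.uniform(0.02, 0.20),   # builder assumed 0.05
        "margin": random.uniform(0.15, 0.45),  # builder assumed 0.35
    }
    if npv(**draw) < 0.0:
        failures.append(draw)

print(f"decision criterion fails in {len(failures) / 10_000:.0%} of draws")
for name in ("growth", "churn", "margin"):
    mean = sum(f[name] for f in failures) / len(failures)
    print(f"  mean {name} among failing draws: {mean:.2f}")
```

The output of interest is the failure region, not the central estimate: where the failing draws cluster tells you which assumption carries the fragility.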

What the process looks like

Step 1

Model receipt and scoping

We receive the model and define which assumption categories are in scope for the engagement.

Step 2

Assumption extraction and mapping

We identify and document the explicit and embedded assumptions driving key outputs.
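One hedged way to picture the output of this step, as a data structure rather than prose: a registry recording each assumption, whether it was explicit or embedded, where its value came from, and which outputs it drives. The schema and the sample entries below are invented for illustration.

```python
from dataclasses import dataclass, field

# Sketch: a registry for extracted assumptions. The schema and the
# sample entries are invented for illustration, not a prescribed format.

@dataclass
class Assumption:
    name: str
    value: float
    kind: str      # "explicit" (a labelled input) or "embedded" (buried in a formula)
    source: str    # where the value came from, per the builder
    drives: list[str] = field(default_factory=list)  # outputs it materially affects

registry = [
    Assumption("annual_churn", 0.05, "explicit",
               "management estimate, no cohort data", ["revenue", "LTV"]),
    Assumption("collection_period_days", 30.0, "embedded",
               "hard-coded in the working-capital formula", ["cash_balance", "runway"]),
]

# Embedded assumptions with thin sourcing are the first stress-test candidates.
for a in registry:
    if a.kind == "embedded":
        print(f"flag for testing: {a.name} = {a.value} ({a.source})")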

Step 3

Independent stress-testing

We apply quantitative methods to test whether assumptions hold under conditions the builder did not examine.
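A sketch of one such method, with an invented toy model and invented ranges: a one-at-a-time sweep that pushes each assumption beyond the interval the builder examined and counts how much of the failure region lies outside it.

```python
# Sketch: a one-at-a-time sweep past the ranges the builder examined.
# The toy NPV model and all ranges are invented for illustration.

def npv(growth=0.30, churn=0.05, margin=0.35, rate=0.10) -> float:
    revenue, total = 100.0, 0.0
    for year in range(1, 6):
        revenue *= (1.0 + growth) * (1.0 - churn)
        total += revenue * margin / (1.0 + rate) ** year
    return total - 250.0  # invented upfront investment

examined = {"growth": (0.25, 0.35), "churn": (0.03, 0.07), "margin": (0.30, 0.40)}
widened = {"growth": (0.00, 0.45), "churn": (0.00, 0.25), "margin": (0.10, 0.50)}

for name, (lo, hi) in widened.items():
    grid = [lo + i * (hi - lo) / 50 for i in range(51)]
    failing = [v for v in grid if npv(**{name: v}) < 0.0]
    exam_lo, exam_hi = examined[name]
    outside = [v for v in failing if not (exam_lo <= v <= exam_hi)]
    print(f"{name}: {len(failing)}/51 sweep points fail the criterion; "
          f"{len(outside)} lie outside the range the builder examined")
```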

Step 4

Verification report

We deliver a structured report identifying which assumptions are independently defensible and which require additional substantiation.

The question before the decision

Independent verification and validation is most effective — and least costly — when it happens before the model reaches the room where decisions are made.

Discuss Your Engagement →