
How We Validate Our Monte Carlo Engine

October 30, 2025 • 6 min read

For Technical Readers: Complete mathematical specifications, test suite, and validation protocols are available in our Monte Carlo Methodology and Financial Modeling Methodology documents.

When you're using Monte Carlo simulations to model your $1.2B private fund portfolio and make $50M commitment decisions, you need confidence that the numbers are right.

Here's how we validate our Monte Carlo engine.


Why Validation Matters

Scenario: You run Monte Carlo on your portfolio. It shows:

  • P50 (median): Final portfolio value $180M
  • P90 (upside): $245M
  • P10 (downside): $125M

Question: Can you trust these numbers to make decisions?

If the engine is wrong:

  • You might over-commit (liquidity crisis)
  • You might under-commit (missed opportunities)
  • Your IC trusts bad projections

Validation ensures: The statistical properties are correct and results are reliable.


What We Validate (6 Key Areas)

1. Mean Matching

Question: Does the average outcome match expectations?

Test: Run 10,000 simulations. Sample mean should equal parameter mean (within statistical bounds).

Result: Error <0.05% (well within acceptable range)

Why it matters: If mean is off, all your projections are systematically biased.
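
As a rough sketch (illustrative drift and volatility, not our calibrated inputs), a mean-matching check looks like this:

```python
import numpy as np

# Sketch of a mean-matching check; mu and sigma are illustrative assumptions.
rng = np.random.default_rng(seed=42)

mu, sigma = 0.12, 0.25          # assumed annual drift and volatility
n_sims = 10_000

# Simulate one-period returns under a normal model
returns = rng.normal(mu, sigma, size=n_sims)

sample_mean = returns.mean()
std_error = sigma / np.sqrt(n_sims)   # standard error of the mean

# The sample mean should land within a few standard errors of mu
assert abs(sample_mean - mu) < 4 * std_error, "mean-matching check failed"
print(f"mean error: {abs(sample_mean - mu) / mu:.4%}")
```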


2. Variance Matching

Question: Is the spread of outcomes realistic?

Test: Sample variance should match theoretical variance.

Result: Error <3% (within statistical tolerance)

Why it matters: If variance is wrong, your P10/P90 bands are meaningless (either too narrow or too wide).
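
A variance check follows the same pattern; the tolerance reflects the sampling noise you expect at 10,000 iterations (parameters below are illustrative stand-ins):

```python
import numpy as np

# Sketch of a variance-matching check; sigma is an illustrative assumption.
rng = np.random.default_rng(seed=7)

sigma = 0.25
n_sims = 10_000

returns = rng.normal(0.12, sigma, size=n_sims)

sample_var = returns.var(ddof=1)
theoretical_var = sigma ** 2

rel_error = abs(sample_var - theoretical_var) / theoretical_var
# For N = 10,000 normal draws the sampling error of the variance is ~sqrt(2/N) ≈ 1.4%,
# so a 3% tolerance comfortably covers statistical noise.
assert rel_error < 0.03, f"variance off by {rel_error:.2%}"
print(f"variance error: {rel_error:.2%}")
```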


3. Correlation Preservation

Question: Do market movements correlate correctly?

Test: When equity markets drop, PE/VC returns should also drop (empirically: β ≈ 1.2-1.4).

Result: Correlation maintained within ±0.05

Why it matters: If correlation is broken, you underestimate portfolio risk (diversification illusion).
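
One standard way to induce and then verify correlation is a Cholesky factorization. The sketch below assumes an illustrative 0.7 target correlation rather than a calibrated value:

```python
import numpy as np

# Sketch of a correlation-preservation check using a Cholesky factorization.
rng = np.random.default_rng(seed=11)

target_corr = np.array([[1.0, 0.7],
                        [0.7, 1.0]])
chol = np.linalg.cholesky(target_corr)

n_sims = 50_000
z = rng.standard_normal(size=(2, n_sims))   # independent draws
correlated = chol @ z                       # induce the target correlation

realized = np.corrcoef(correlated)[0, 1]
assert abs(realized - 0.7) < 0.05, "correlation drifted outside +/-0.05"
print(f"realized correlation: {realized:.3f}")
```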


4. Convergence

Question: Do results stabilize with more iterations?

Test: Standard error should decrease as 1/√N.

Result: Converges as expected

Why it matters: You need to know: "Is 1,000 iterations enough? Or do I need 10,000?"
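
A convergence check repeats the experiment at several iteration counts and confirms the standard error shrinks as 1/√N (illustrative parameters):

```python
import numpy as np

# Sketch of a convergence check: quadrupling N should roughly halve the
# standard error of the estimated mean. Parameters are illustrative.
rng = np.random.default_rng(seed=3)

mu, sigma = 0.12, 0.25
for n in (1_000, 4_000, 16_000):
    draws = rng.normal(mu, sigma, size=(200, n))   # 200 repeated experiments of size n
    std_error = draws.mean(axis=1).std(ddof=1)     # spread of the 200 sample means
    print(f"N={n:>6}: observed SE={std_error:.5f}  expected ~{sigma/np.sqrt(n):.5f}")
```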


5. Benchmark Comparison

Question: Do results match industry data?

Test: Compare our projections to Cambridge Associates and Preqin data.

Results:

  • Exit timing: Within 5-15% of industry medians (PE Buyout: 4.9% error vs 5.8y benchmark)
  • TVPI/IRR: Derived from calibrated exit timing + growth assumptions
  • DPI timing: Matches vintage curves

Why it matters: If you're off vs. industry, your projections are unrealistic.
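
A sketch of how such a comparison can be automated; the simulated exit times below are synthetic stand-ins, not real engine output:

```python
import numpy as np

# Sketch of a benchmark-comparison check. The 5.8-year PE Buyout median mirrors
# the figure cited above; the simulated exit times are synthetic stand-ins.
rng = np.random.default_rng(seed=19)

benchmark_median_years = 5.8
simulated_exit_years = rng.lognormal(mean=np.log(6.0), sigma=0.4, size=10_000)

simulated_median = np.median(simulated_exit_years)
rel_error = abs(simulated_median - benchmark_median_years) / benchmark_median_years

# Exit timing should land within the 5-15% band noted above
print(f"simulated median: {simulated_median:.2f}y, error vs benchmark: {rel_error:.1%}")
```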


6. Reproducibility

Question: Do you get the same results twice?

Test: Same random seed → identical results.

Result: Bitwise identical outputs

Why it matters: For audit trails and debugging, you need to be able to recreate past projections exactly.
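
The test itself is simple: run twice with the same seed and demand exact equality. The sketch below substitutes a toy simulation for the full engine:

```python
import numpy as np

# Sketch of a reproducibility check: same seed must give bitwise-identical output.
def run_simulation(seed: int, n_sims: int = 10_000) -> np.ndarray:
    rng = np.random.default_rng(seed)
    return rng.normal(0.12, 0.25, size=n_sims)   # stand-in for the full engine

run_a = run_simulation(seed=2025)
run_b = run_simulation(seed=2025)

# array_equal demands exact equality, not "close enough"
assert np.array_equal(run_a, run_b), "same seed produced different results"
```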


Validation Results Summary

Statistical Correctness:

  • Mean error: <0.05%
  • Variance error: <3%
  • Correlation preservation: ±0.05
  • Convergence: 1/√N as expected

Industry Benchmarks:

  • Exit timing: Calibrated to Cambridge Associates/Preqin (5.8y median PE Buyout, 4.9% error)
  • Growth assumptions: Derived from target TVPI and exit timing
  • DPI patterns: Match industry vintage curves

Test Coverage:

  • 700+ automated test cases across 87 test files
  • All critical paths covered
  • Runs on every code change

How This Helps You

When you run Monte Carlo in Nagare:

You see:

  • P10/P50/P90 TVPI projections
  • Probability bands over time
  • Downside risk assessment

You can trust:

  • Statistical properties are correct (mean, variance, correlation)
  • Benchmarked against industry data (Cambridge Associates, Preqin)
  • 700+ automated test cases ensure accuracy
  • Transparent methodology (published at /docs/monte-carlo)

You make decisions with confidence:

  • "What's my downside risk if I commit $15M?" (P10 scenario)
  • "What's a realistic outcome?" (P50 median)
  • "What if everything goes well?" (P90 upside)

Validation vs. Accuracy

Important distinction:

Validation = Statistically correct

  • Mean matches
  • Variance matches
  • Correlation is preserved

Accuracy = Matches reality

  • Requires good input assumptions
  • Markets might not follow a normal distribution
  • Black swans exist

We validate: Statistical correctness (engine works as designed)

You provide: Realistic assumptions (based on your market views)

Together: Reliable probabilistic forecasts


Continuous Validation

How we maintain accuracy:

  1. Automated testing: 700+ test cases run on every code change
  2. Benchmark updates: Recalibrate against latest industry data annually
  3. Expert review: External validation by financial mathematics specialists
  4. User feedback: Family offices report if projections feel off vs. reality

Result: Engine stays accurate as markets evolve.


The Bottom Line

Monte Carlo is powerful but can be wrong if:

  • Implementation has bugs
  • Statistical properties don't match theory
  • Parameters aren't calibrated to reality

Our validation framework ensures:

  • Implementation is correct (700+ automated test cases)
  • Statistics match theory (mean, variance, correlation validated)
  • Parameters match industry (benchmarked vs. Cambridge Associates, Preqin)

You get: Confidence to use Monte Carlo for $50M decisions.


Want deeper technical details? See our Monte Carlo Methodology for the full mathematical specification and test documentation.


Learn More

Want to see it in action?

  • Schedule demo - We'll run Monte Carlo on your portfolio
  • Start free - Try Monte Carlo yourself (Institutional tier)
