FVA (Forecast Value Added)
A simple way to prove which forecasting steps help — and which ones add noise.
Forecast Value Added (FVA) compares each step in your forecasting process against a consistent baseline. The goal is to make the process measurable, repeatable, and focused on actions that improve accuracy, bias, and decision quality.
FVA: Forecast Value Added Practices
Use these practices to make FVA a decision tool — not just a reporting exercise.
FVA Playbook View
Does this step improve the forecast?
Compare each step (statistical, overrides, review meetings, events) against a baseline using WMAPE / Bias, and segment results by product or family.
Measure with
- WMAPE (volume-weighted accuracy)
- Bias (direction + magnitude)
- Segment (stable vs volatile)
- Step deltas (vs baseline)
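The two core metrics above can be sketched in a few lines. This is a minimal illustration (function names are my own, not from any specific tool): WMAPE weights error by actual volume, and bias captures systematic over- or under-forecasting.

```python
def wmape(actuals, forecasts):
    """Volume-weighted MAPE: total absolute error over total actual volume."""
    total_error = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    total_volume = sum(abs(a) for a in actuals)
    return total_error / total_volume if total_volume else float("inf")

def bias(actuals, forecasts):
    """Signed bias: positive = over-forecasting, negative = under-forecasting."""
    total_actual = sum(actuals)
    return (sum(forecasts) - total_actual) / total_actual if total_actual else 0.0

# Example: a forecast that slightly over-shoots demand overall
actuals   = [100, 120, 80]
forecasts = [110, 115, 90]
print(f"WMAPE: {wmape(actuals, forecasts):.1%}")  # 25/300 -> 8.3%
print(f"Bias:  {bias(actuals, forecasts):+.1%}")  # +15/300 -> +5.0%
```

Computing both matters: a step can leave WMAPE roughly flat while quietly introducing bias, and bias is what drives inventory damage.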
Start with a baseline
Choose a fair benchmark (statistical model, seasonal naïve) and keep it consistent.
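A seasonal naïve baseline, for example, is trivial to compute and hard to beat on seasonal items, which is exactly what makes it a fair benchmark. A rough sketch (assuming monthly data and the standard definition: repeat the actual from the same period one season ago):

```python
def seasonal_naive(history, season_length=12, horizon=3):
    """Seasonal naive baseline: forecast each future period with the
    actual from the same period one season earlier."""
    if len(history) < season_length:
        raise ValueError("Need at least one full season of history")
    return [history[-season_length + h] for h in range(horizon)]

# Example: 12 months of history; the next 3 months repeat last year's values
monthly = [100, 90, 120, 130, 110, 95, 105, 115, 125, 140, 160, 180]
print(seasonal_naive(monthly, season_length=12, horizon=3))  # [100, 90, 120]
```

Whatever baseline you pick, the key discipline is keeping it fixed across cycles; otherwise the FVA deltas are not comparable.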
Map & measure the process
Define steps (system → overrides → consensus) and measure each one separately.
Track & analyze adjustments
Capture who changed what, why, and expected impact (assumptions + evidence).
Remove negative adjustments
If a step worsens WMAPE or increases bias, stop it or tighten rules and thresholds.
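The step-delta logic above can be expressed directly: compute WMAPE for each step's forecast against the same actuals, subtract from the baseline's WMAPE, and flag steps with negative value added. This is an illustrative sketch with made-up step names and numbers:

```python
def wmape(actuals, forecasts):
    total = sum(abs(a) for a in actuals)
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / total

def fva_report(actuals, step_forecasts, baseline_name="baseline"):
    """FVA per step = baseline WMAPE minus step WMAPE.
    Positive = the step added value; negative = it made things worse."""
    base = wmape(actuals, step_forecasts[baseline_name])
    return {name: base - wmape(actuals, fc)
            for name, fc in step_forecasts.items()
            if name != baseline_name}

actuals = [100, 120, 80, 140]
steps = {
    "baseline":  [95, 110, 90, 150],    # e.g. seasonal naive
    "override":  [100, 118, 85, 138],   # planner override improves accuracy
    "consensus": [115, 130, 95, 155],   # consensus meeting inflates the number
}
for step, delta in fva_report(actuals, steps).items():
    flag = "keep" if delta > 0 else "review / stop"
    print(f"{step}: FVA {delta:+.1%} -> {flag}")
```

In this toy data the override adds value while the consensus step destroys it; that is the pattern FVA is designed to surface before you institutionalize the damaging step.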
Focus on significant changes
Prioritize high-value / high-variance items. Don’t waste time on low-impact noise.
Evaluate bias & accuracy over time
Trend WMAPE + Bias by segment and by step (not just “one overall accuracy number”).
Guide process & training improvements
Use results to coach better overrides, strengthen event planning, and improve discipline.
Integrate into reviews & CI
Build FVA into monthly reviews: what worked, what didn’t, and what changes next cycle.
FVA is process accountability — with data
FVA shows the value (or damage) created by each step in the forecasting workflow. It helps leaders stop debating opinions and start improving the system.
1) Measure steps, not people
- Separate system forecast vs overrides vs consensus
- Use the same baseline each cycle
- Segment (ABC/XYZ) so results are meaningful
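ABC/XYZ segmentation itself is simple to sketch: ABC ranks items by cumulative volume share, and XYZ classifies demand stability by coefficient of variation. The cut-offs below are common conventions, not universal rules; tune them to your data.

```python
import statistics

def abc_classes(volumes, a_cut=0.8, b_cut=0.95):
    """ABC by cumulative volume share: A = top ~80%, B = next ~15%, C = tail."""
    total = sum(volumes.values())
    ranked = sorted(volumes, key=volumes.get, reverse=True)
    classes, cum = {}, 0.0
    for item in ranked:
        cum += volumes[item] / total
        classes[item] = "A" if cum <= a_cut else ("B" if cum <= b_cut else "C")
    return classes

def xyz_class(demand_history, x_cut=0.5, z_cut=1.0):
    """XYZ by coefficient of variation: X = stable, Y = moderate, Z = volatile."""
    mean = statistics.mean(demand_history)
    cv = statistics.stdev(demand_history) / mean if mean else float("inf")
    return "X" if cv < x_cut else ("Y" if cv < z_cut else "Z")

volumes = {"SKU1": 700, "SKU2": 200, "SKU3": 60, "SKU4": 40}
print(abc_classes(volumes))            # SKU1 -> A, SKU2 -> B, rest -> C
print(xyz_class([100, 105, 95, 102]))  # stable demand -> "X"
```

Reporting FVA within each segment (e.g. AX vs CZ) prevents a few volatile tail items from drowning out the signal on the items that drive most of the volume.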
2) Fix governance & decision quality
- Who can override, when, and with what evidence
- Thresholds for “material” changes
- Assumption tracking and learning loops
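A materiality threshold can be encoded as a simple gate that an override must pass before it enters the review queue. The thresholds below are purely illustrative; the point is requiring both a percentage and an absolute-unit minimum so neither small-volume nor large-volume items slip through a single rule.

```python
def is_material(system_forecast, proposed_override,
                min_pct=0.10, min_units=50):
    """Governance gate (illustrative thresholds): an override counts as
    'material' only if it moves the forecast by both a minimum percentage
    and a minimum number of units."""
    delta = abs(proposed_override - system_forecast)
    pct = delta / system_forecast if system_forecast else float("inf")
    return pct >= min_pct and delta >= min_units

print(is_material(1000, 1080))  # 8%, 80 units -> False (below 10% cut)
print(is_material(1000, 1150))  # 15%, 150 units -> True
```

Overrides below the gate can simply be disallowed, which frees review time for the changes that actually move the plan.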
3) Improve execution outcomes
- Better inventory positioning
- Reduced expediting and firefighting
- More stable S&OP decisions
A simple, practical way to implement FVA
Start small: pick 1–2 product families, run 8–12 cycles, then scale after the patterns are clear.
Typical FVA setup
- Define a baseline forecast (stat model or seasonal naïve)
- Define each step (system → planner → sales/marketing → exec)
- Measure each step with WMAPE + Bias
- Segment items (stable vs volatile) to avoid misleading averages
- Review results monthly; adjust rules and training
Common pitfalls to avoid
- Changing the baseline every cycle
- Only looking at “one overall accuracy number”
- Measuring without capturing reasons for overrides
- Letting low-impact items consume review time
- Blaming individuals instead of improving the process
What improves when FVA is used correctly
FVA helps teams focus effort where it actually improves the plan — and stops wasteful activities that don’t.
Want to implement FVA in your planning process?
If you want to measure what truly improves forecasts and tighten the planning workflow, let’s talk. I can help set up the baseline, metrics, segmentation, and a repeatable review cadence.