When Forecasts Break: A Board Under Stress

What happens to flow metrics and forecasts when a disruption hits — told through three features on the same board.

A Board That Looks Healthy — Until It Doesn’t

Most of the time, a well-managed feature board produces predictable results. Items enter the pipeline, flow through columns at a steady pace, and exit within a reasonable range. Forecasts work because the historical data they’re built on reflects reality.

Then something disrupts the flow. A key person goes on leave. A compliance audit freezes approvals. An executive drops priority items into the queue. A new strategic initiative floods the pipeline with unplanned work.

This guide follows three features through a disrupted board — showing what the visual tells you, what happens to forecasts, and why the right response is almost never “forecast better.” If terms like p50, p85, variance ratio, or WIP are unfamiliar, the Cycle Time Guide covers them in detail.


The Board

A team manages features through seven working columns plus a terminal Done state:

New → Discovery → UX Design → Development → Validation → Mkt/Legal Review → Release → Done

Under normal conditions, features flow through at about 1.5 per week. Each column takes roughly a week at the median. The one exception is Mkt/Legal Review, where external stakeholder signoff introduces inherent unpredictability — even on a healthy board, its p85 is nearly 3× its p50.
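To make that concrete, here is a minimal sketch of how per-column p50/p85 values can be computed from historical cycle-time samples. The sample arrays below are illustrative stand-ins, not the board's real data:

```python
# Per-column p50/p85 from historical cycle-time samples.
# The sample values are illustrative, not real measurements.
import numpy as np

cycle_times_days = {
    "Development":      [4, 5, 6, 7, 7, 8, 9, 10],   # tight spread
    "Mkt/Legal Review": [2, 2, 3, 4, 5, 7, 12, 18],  # long external-signoff tail
}

for column, samples in cycle_times_days.items():
    p50, p85 = np.percentile(samples, [50, 85])
    print(f"{column:<17} p50={p50:4.1f}d  p85={p85:4.1f}d  p85/p50={p85 / p50:.1f}x")
```

The same few lines surface the asymmetry the board shows visually: Development's ratio stays near 1.3x, while the signoff column's long tail pushes its ratio toward 3x.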

Four quarterly cohorts of features are planned: Q1 (6 features), Q2 (12 — the largest batch), Q3 (8), and Q4 (5).


The Disruption

In late May, four things happen simultaneously:

  1. Development capacity drops to 30% for four weeks — a key team member goes on extended leave and another is cherry-picked for an executive special project.
  2. Mkt/Legal Review is blocked for three weeks — a compliance audit freezes all stakeholder signoffs.
  3. Four executive-priority features jump the queue, displacing planned work.
  4. A surge of new requirements floods Discovery at 2.5× the normal rate for two weeks.

This cluster hits when Q2’s large 12-feature batch is mid-board and Q3’s features are about to enter.


What the Visual Shows

The effects are immediate and concentrated. Mkt/Legal Review’s p85 explodes to 31.9 days — over 4× the p50 of 7.6 days. The scatter plot shows a bimodal pattern: features that passed through before the disruption cluster near the bottom, while features caught in the blocked period stack up to 100+ days.

Development shows the capacity hit with a p85 of 13.0 days. The WIP sparklines across the board tell the story of the ripple — a spike in one column creates pressure upstream and starvation downstream.
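The sparklines themselves are just WIP counts sampled over time. A small sketch of the underlying calculation, using made-up enter/exit dates rather than the board's real event log:

```python
# Weekly WIP for one column, derived from (entered, exited) intervals.
# Dates and items are illustrative, not the board's real data.
from datetime import date, timedelta

intervals = [                               # (entered, exited) per feature
    (date(2024, 5, 20), date(2024, 6, 14)),
    (date(2024, 5, 27), date(2024, 6, 20)),
    (date(2024, 6, 3),  date(2024, 6, 24)),
]

day = date(2024, 5, 13)
while day <= date(2024, 6, 24):
    wip = sum(start <= day < end for start, end in intervals)
    print(day.isoformat(), "#" * wip)       # crude text sparkline
    day += timedelta(days=7)
```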

Board-level result: Cycle Time p85 reaches 76.3 days. One column’s disruption dominates the entire board’s statistics.


Three Features, Three Fates

The same board, the same disruptions. Three features entered the danger zone at different moments and experienced fundamentally different journeys.

Feature #2048 — “Just Missed It”

Q2 feature, entered the pipeline in early April. It reached Development just as capacity dropped — the Apr 30 checkpoint showed it at day 9 there, long enough to trigger an outlier warning. But it escaped before the worst, passed through Mkt/Legal Review before the compliance audit, and closed on May 17. Look for the blue square with a darker border in each column — that’s #2048’s journey across the board.

Board scatter plot with Feature 2048 highlighted — dots cluster in healthy ranges across most columns with a brief spike in Development

The forecast story:

| Date   | Column                   | Forecast p50 | Forecast p85 |
|--------|--------------------------|--------------|--------------|
| Apr 9  | New (day 0)              | Jun 4        | Jun 19       |
| Apr 16 | Discovery (day 4)        | May 30       | Jun 20       |
| Apr 23 | Development (day 2)      | May 20       | Jun 6        |
| Apr 30 | Development (day 9) ⚠    | May 25       | Jun 24       |
| May 7  | Mkt/Legal Review (day 0) | May 21       | Jun 23       |
| May 14 | Release (day 4)          | May 16       | May 26       |
| May 17 | Done                     |              |              |

The Development capacity hit is visible — day 9 triggered an outlier warning, and p85 jumped from Jun 6 to Jun 24. But the feature escaped Development before the worst, sailed through Mkt/Legal Review in a day (before the compliance audit), and closed May 17 — 18 days ahead of its original p50 estimate. The disruption touched this feature but didn’t define it.


Feature #2016 — “Trapped”

This feature entered Mkt/Legal Review in early March — weeks before anyone knew a compliance audit was coming. It was still there when the audit hit, and it stayed there for 138 days. The orange diamond with a dark border is #2016 — find it sitting alone at the top of the Mkt/Legal Review column, far above every other item.

Board scatter plot with Feature 2016 highlighted — a single dot sitting at the top of the Mkt/Legal Review column at 138 days, far above all other items

The forecast story:

| Date   | Column                     | Forecast p50 | Forecast p85 | Note                |
|--------|----------------------------|--------------|--------------|---------------------|
| Mar 12 | Mkt/Legal Review (day 5)   | Mar 15       | Mar 28       | ⚠ Outlier           |
| Mar 19 | Mkt/Legal Review (day 12)  | Mar 26       | Apr 4        | ⚠ Outlier           |
| Mar 26 | Mkt/Legal Review (day 19)  | Apr 1        | Apr 15       | ⚠ Outlier           |
| Apr 2  | Mkt/Legal Review (day 26)  | Apr 9        | Apr 22       | ⚠ Outlier           |
| Apr 9  | Mkt/Legal Review (day 33)  | Apr 15       | Apr 25       | ⚠ Outlier           |
| Apr 23 | Mkt/Legal Review (day 47)  | Apr 28       | May 9        | ⚠ Outlier           |
| May 7  | Mkt/Legal Review (day 61)  | May 12       | May 17       | ⚠ Outlier           |
| May 21 | Mkt/Legal Review (day 75)  | May 27       | May 31       | ⚠ Outlier           |
| Jun 4  | Mkt/Legal Review (day 89)  | Jun 10       | Jun 13       | ⚠ Outlier           |
| Jun 18 | Mkt/Legal Review (day 103) | Jun 24       | Jun 27       | ⚠ Outlier           |
| Jul 2  | Mkt/Legal Review (day 117) | Jul 8        | Jul 12       | ⚠ Outlier           |
| Jul 16 | Mkt/Legal Review (day 131) | Jul 23       | Jul 26       | ⚠ Outlier           |
| Jul 23 | Mkt/Legal Review (day 138) | Jul 29       | Aug 2        | ⚠ Outlier           |
| Jul 30 | Release (day 6)            | Jul 31       | Aug 9        |                     |
| Aug 6  | Release (day 13)           | Aug 13       | Aug 23       |                     |
| Aug 13 | Release (day 20)           |              |              | ⚠ Insufficient data |
| Aug 16 | Done                       |              |              |                     |

This is the pathological case. The feature became an outlier on day 5 and stayed an outlier for over four months. Every week, the forecast said “5-7 more days remaining” — because once you exceed all historical samples for the current column, the forecast can only estimate downstream columns (Release: ~6-10 days). It couldn’t model what it had never seen.
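A minimal sketch of that fallback logic, assuming the "exceeds all samples" rule described above (the function and sample arrays are hypothetical, not AgileViz's actual implementation):

```python
# Downstream-only fallback: once an item outlives every historical
# sample for its current column, only downstream columns can still be
# estimated from data. Sample values are illustrative.
import numpy as np

mkt_legal_samples = [2, 2, 3, 4, 5, 7, 12]   # current column, days
release_samples   = [4, 5, 6, 7, 9, 10]      # downstream column, days

def days_remaining(age_in_column):
    down_p50, down_p85 = np.percentile(release_samples, [50, 85])
    if age_in_column > max(mkt_legal_samples):
        # Outlier: no historical basis left for the current column,
        # so report only the downstream portion, plus a warning.
        return "⚠ outlier", down_p50, down_p85
    # Normal path: estimate the current column from samples the item
    # hasn't already outlived, then add the downstream portion.
    left = [s - age_in_column for s in mkt_legal_samples if s >= age_in_column]
    here_p50, here_p85 = np.percentile(left, [50, 85])
    return "ok", here_p50 + down_p50, here_p85 + down_p85

for age in (61, 89, 117):          # checkpoints months apart...
    print(age, days_remaining(age))  # ...always the same few days remaining
```

At day 61, 89, and 117 the function returns the identical downstream-only answer: the "perpetually almost done" trace in #2016's table above.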

The forecast was perpetually “almost done.” For anyone tracking this feature, that’s worse than no forecast at all — it creates false hope every time you check. The ⚠ outlier warning was the honest signal: I don’t know how long this column will take, but everything after it looks normal.

Even after escaping to Release, the feature exceeded the data available for that column too — by day 20 in Release its checkpoint could only report “no forecast available,” and it closed three days later.


Feature #2078 — “After the Storm”

Q3 feature, entered the pipeline in early July — after the disruption’s peak but before the board had fully recovered. The orange triangle with a dark border is #2078 — notice that in Mkt/Legal Review it sits well below the items caught in the disruption, benefiting from the team’s efforts to clear the post-disruption backlog.

Board scatter plot with Feature 2078 highlighted — dots flow steadily through all columns, sitting lower in Mkt/Legal Review than the disruption-era items after the backlog cleared

The forecast story:

| Date   | Column                    | Forecast p50 | Forecast p85 |
|--------|---------------------------|--------------|--------------|
| Jul 2  | New (day 0)               | Aug 26       | Sep 18       |
| Jul 9  | Discovery (day 2)         | Aug 24       | Sep 16       |
| Jul 16 | UX Design (day 2)         | Aug 24       | Sep 19       |
| Jul 23 | Validation (day 0)        | Aug 17       | Sep 12       |
| Jul 30 | Mkt/Legal Review (day 2)  | Aug 16       | Sep 14       |
| Aug 6  | Mkt/Legal Review (day 9)  | Aug 24       | Sep 23       |
| Aug 13 | Mkt/Legal Review (day 16) | Sep 7        | Sep 30       |
| Aug 20 | Release (day 4)           | Aug 23       | Aug 26       |
| Aug 22 | Done                      |              |              |

The early forecasts were stable — p50 held around Aug 17-26 as the feature moved through healthy columns. Then it entered Mkt/Legal Review and the conditional survival distribution kicked in: each passing day without progress made the forecast worse, not better.
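A sketch of that conditional logic, assuming the estimate is drawn from the historical samples the item has not yet outlived (values illustrative, not the board's real data):

```python
# Conditional survival estimate: given that an item has already spent
# `age` days in a column, estimate its total stay from the historical
# samples it hasn't yet outlived. Sample values are illustrative.
import numpy as np

samples = [2, 3, 4, 5, 7, 9, 14, 21, 30]   # column durations, days

def conditional_estimate(age):
    survivors = [s for s in samples if s >= age]
    return np.percentile(survivors, [50, 85])

for age in (2, 9, 16):
    p50, p85 = conditional_estimate(age)
    print(f"day {age:>2}: expected total {p50:.0f}-{p85:.0f} days in column")
# Each passing day removes the short samples, so the estimate drifts out.
```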

By day 16, p50 had slipped to Sep 7 and p85 to Sep 30 — a 23-day spread between the two. Anyone asking “when will this be done?” got an honest but uncomfortable range.

Then it escaped. The forecast snapped from “maybe late September” to “August 23” in a single checkpoint. Closed on Aug 22 — one day ahead of the Release p50. The original p50 from New (Aug 26) was only 4 days off — but that accuracy was luck, not stability.


What the Three Stories Tell Us

|                          | #2048 (dodged)              | #2016 (trapped)         | #2078 (after the storm)     |
|--------------------------|-----------------------------|-------------------------|-----------------------------|
| Days in Mkt/Legal Review | ~1                          | 138                     | ~18                         |
| Total cycle time         | ~38 days                    | ~160 days               | ~51 days                    |
| Initial p50 accuracy     | 18 days early               | Meaningless             | 4 days off                  |
| Forecast behavior        | Brief spike, quick recovery | Perpetual “almost done” | Unstable, then snapped back |

The board didn’t treat these features differently. The process did. The same column, the same disruptions — but the features arrived at different times and experienced wildly different results.

The forecast was honest at every checkpoint

It reflected the uncertainty it had evidence for. When a feature was in a healthy column, the forecast was stable and accurate. When it entered a high-variance column, the forecast widened. When it became an outlier, the forecast said so — and switched to downstream-only estimates rather than guessing.

But honesty isn’t the same as usefulness. A forecast that says “5-7 more days” every week for four months is technically correct (the downstream estimate was accurate) while being practically worthless for planning.

The ⚠ outlier warning was the real signal

For Feature #2016, the outlier warning on day 5 was more valuable than any of the numbers that followed it. It said: this item has exceeded all historical precedent for this column. That’s not a forecasting failure — it’s a process alarm. The right response was to investigate why the feature was stuck, not to wait for the forecast to improve.
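The alarm itself needs no statistics beyond the column's history. A sketch of such a rule, assuming the "exceeds all precedent" threshold the warning describes, with hypothetical sample data:

```python
# Process alarm (assumed rule, per the description above): flag an item
# the moment its age in a column exceeds every completed historical
# sample for that column.
def exceeds_precedent(age_in_column_days, historical_samples):
    return age_in_column_days > max(historical_samples)

# Illustrative history that never saw a signoff take more than 4 days:
history = [1, 2, 2, 3, 4]
print(exceeds_precedent(5, history))   # True -> investigate now, don't wait
```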

Variance is the root cause — not the forecast

No amount of forecasting sophistication would have helped these features. The fix was upstream of the forecast: establish a service-level agreement with the compliance team, batch signoffs, or create an escalation path for stalled items.

When the p85 is 4× the p50, the right response isn’t “pick a number between them” — it’s fix the column.


What Would the AI Analysis Say?

The Analyze tab would flag Mkt/Legal Review as the primary recommendation and suggest:

  1. Review the outlier items above the p85 line — right-click a column header and choose “Copy outliers” to export them, or hover individual dots to identify them — and bring specific examples to the retrospective
  2. Engage stakeholders on service-level expectations for signoff columns
  3. Consider WIP limits to prevent items from piling up during disruptions
  4. Investigate whether the same items appear as outliers across multiple columns (structural problem vs. isolated bottleneck)
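The fourth check lends itself to a quick script. A sketch, assuming outlier flags are available as (item, column) pairs; the data here is hypothetical:

```python
# Find items flagged as outliers in more than one column: a hint that
# the problem is structural rather than a single bottleneck.
# The (item_id, column) pairs below are hypothetical.
from collections import Counter

outlier_flags = [
    (2016, "Mkt/Legal Review"), (2016, "Release"),
    (2048, "Development"),
    (2051, "Mkt/Legal Review"),
]

counts = Counter(item for item, _ in outlier_flags)
repeat_offenders = sorted(item for item, n in counts.items() if n > 1)
print(repeat_offenders)   # [2016] -> structural, not isolated
```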

The visual tells you where the problem is. The AI analysis tells you what to do about it. The forecast tells you what it costs to leave it unfixed.


This guide uses synthetic data. All work items are simulated — no real team data was used. The board columns, disruption patterns, and forecasts are designed to illustrate flow dynamics at the portfolio level.


Ready to see your own data?

Get Started · Try the Interactive Demo
