Why Intermittent Demand Still Trips Us Up And What to Do About It
- Yvonne Badulescu
- Oct 27, 2025
- 7 min read
You forecast demand for 1,000 SKUs, and half are slow movers with lots of zeros. Your MAPE looks great, but your warehouse is still full, cash is tied up in inventory, and planners are losing trust in the numbers.
Sound familiar?
Intermittent demand is one of the most persistent and costly challenges in supply chain forecasting. It shows up in spare parts, seasonal or niche SKUs, luxury and slow-moving goods, and increasingly, in long-tail assortments in retail and e-commerce.
Despite years of model development, intermittent demand still behaves in ways that frustrate both practitioners and researchers, because it doesn’t just vary in size: it disappears for long periods, then reappears unpredictably.
This article breaks down what makes intermittent demand so hard to forecast, why many standard methods fail, and how to choose models and metrics that actually work.
What Is Intermittent Demand?
Intermittent demand is defined by two key traits:
Zero-inflation: Most time periods have no demand.
High variability: When demand does occur, its size varies a lot.
It’s not just noisy. It’s missing, and that’s a crucial distinction. You’re not just forecasting how much demand you’ll get. You’re also forecasting whether you’ll get any demand at all.
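To make that concrete, here is a minimal sketch on a made-up weekly series (the numbers are illustrative, not from any real SKU) that quantifies both traits: how often nothing sells, and how variable demand is when it does occur.

```python
import numpy as np

# Made-up weekly demand for one SKU: mostly zeros, occasional bursts of varying size.
demand = np.array([0, 0, 3, 0, 0, 0, 0, 12, 0, 0, 1, 0, 0, 0, 6, 0, 0, 0, 0, 2])

zero_share = np.mean(demand == 0)                 # how often nothing sells at all
nonzero = demand[demand > 0]
size_cv2 = (nonzero.std() / nonzero.mean()) ** 2  # variability of demand *when it occurs*

print(f"Share of zero periods: {zero_share:.0%}")   # 75% here
print(f"CV^2 of non-zero sizes: {size_cv2:.2f}")
```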
Why Standard Forecasting Methods Fail
Most classic forecasting methods, such as Simple Exponential Smoothing, ARIMA, or even some machine learning approaches, were designed for continuous demand with regular patterns and stable seasonality.
Here’s why they fail for intermittent demand:
They treat zero demand as just another value, rather than a structurally distinct event.
They assume demand is always present, just sometimes small.
They can’t distinguish frequency of demand from magnitude of demand.
This leads to chronic over-forecasting (especially when demand drops off entirely) or wild swings when a single demand occurrence skews the average. And it’s why you often see good accuracy metrics but poor inventory outcomes.
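You can see the first failure mode with a hand-rolled Simple Exponential Smoothing loop on a made-up series (a sketch, not any particular library’s implementation): once sales stop, the smoothed level only decays geometrically, so the method keeps projecting positive demand indefinitely.

```python
import numpy as np

# Hypothetical demand: a few early sales, then the item goes quiet.
demand = np.array([0, 5, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0])

def ses(y, alpha=0.2):
    """Plain Simple Exponential Smoothing, treating zeros like any other observation."""
    level = float(y[0])
    forecasts = []
    for obs in y:
        forecasts.append(level)               # forecast for this period is the current level
        level = alpha * obs + (1 - alpha) * level
    return np.array(forecasts)

print(ses(demand).round(2))
# The level decays only geometrically, so SES keeps projecting positive demand
# long after sales have stopped, and a single spike drags it back up again.
```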
The Forecasting Models That Actually Address Intermittency
Most traditional forecasting methods struggle when applied to products with intermittent demand: those with many zero-sale periods followed by sporadic, often small, bursts of activity. This pattern is common in industries like luxury retail, spare parts, and industrial goods, where individual SKUs may only sell occasionally at a store or region level.
Over the years, researchers and practitioners have developed several specialized models to address this problem. Some are widely used in industry due to their simplicity and software support, while others offer more sophisticated handling of demand noise or frequency changes, but remain underused due to complexity or limited awareness.
To help navigate the options, we can group these models into two categories:
Widely adopted models, which are built into most commercial forecasting tools and used in practice. These models are often the starting point for handling slow-moving items. Croston’s method and its variants (SBA and TSB) offer an easy-to-implement foundation that balances simplicity with performance. TSB in particular is gaining ground in organizations where demand frequency shifts more dynamically over time.
However, these models can still fall short when the data is extremely noisy, when products have long idle periods, or when the business context demands greater robustness to change.
Advanced or niche models, which offer additional benefits in more complex or high-variability environments but may require customization or statistical expertise. While they’re less commonly embedded in off-the-shelf systems, they provide meaningful advantages for specialized use cases, such as managing highly erratic demand patterns, working with sparse datasets, or aligning forecasts with multi-level decision-making.
Each model has different assumptions and strengths. Choosing the right one depends not just on your data, but on how forecasts are used in downstream decisions, such as store-level inventory allocation, assortment planning, or reordering.
Each of these models addresses the two-part structure of intermittent demand:
When will demand occur?
How much will it be when it does?
But they do so with different assumptions, sensitivities, and update mechanisms. That’s why tuning and selection matter just as much as the model itself.
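As an illustration of that two-part structure, here is a minimal from-scratch sketch of Croston’s method and its SBA variant (the `croston` function and its initialization choices are mine, not a reference implementation): demand size and the interval between demands are smoothed separately, and the per-period forecast is their ratio.

```python
import numpy as np

def croston(y, alpha=0.1, variant="croston"):
    """
    Croston's method, plus the SBA bias correction, for intermittent demand.
    Demand size and the interval between demands are smoothed separately;
    the per-period forecast is size / interval (times 1 - alpha/2 for SBA).
    Minimal sketch: initialization and alpha tuning are deliberately simple.
    """
    y = np.asarray(y, dtype=float)
    nonzero_idx = np.flatnonzero(y > 0)
    if nonzero_idx.size == 0:
        return np.zeros(len(y))            # no demand ever observed
    z = y[nonzero_idx[0]]                  # smoothed demand size
    p = float(nonzero_idx[0] + 1)          # smoothed inter-demand interval
    q = 1                                  # periods since the last demand
    bias = 1 - alpha / 2 if variant == "sba" else 1.0
    forecasts = np.empty(len(y))
    for t, d in enumerate(y):
        forecasts[t] = bias * z / p        # demand-rate estimate for period t
        if d > 0:
            z = z + alpha * (d - z)        # update size only when demand occurs
            p = p + alpha * (q - p)        # update interval only when demand occurs
            q = 1
        else:
            q += 1
    return forecasts

demand = [0, 0, 3, 0, 0, 0, 0, 12, 0, 0, 1, 0, 0, 0, 6, 0, 0, 0, 0, 2]
print(croston(demand, alpha=0.1, variant="sba").round(2))
```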
Choosing the Right Error Metrics
One of the most common mistakes in intermittent demand forecasting is using the wrong accuracy metrics. MAPE is still widely used because it’s simple and built into most systems, but for zero-heavy or highly variable demand, it can lead you in the wrong direction. When actual demand is zero, MAPE is undefined. When demand is very low, even small differences between forecast and actuals cause huge percentage swings. And because MAPE treats over- and under-forecasting the same way, it can distort how performance is judged and how decisions are made.
That’s why many forecasting teams look to alternatives like sMAPE or MASE. But as Ivan Svetunkov points out in his work, these also have serious limitations for intermittent demand. MASE, for instance, is minimized by the median. If your demand is mostly zeros, that can make the zero forecast look like the best one, even if it’s not helpful for planning. sMAPE, on the other hand, becomes fixed at 2 when demand is zero and tends to favor over-forecasting. These issues are easy to overlook, but they can lead to misleading model comparisons and poor inventory outcomes.
So what’s the alternative?
If your model forecasts mean demand, like Croston or TSB, evaluate it only on the periods when demand occurs (non-zero demand values). That’s what the model is estimating. If you include the zeros, you’re penalizing it for not doing something it wasn’t designed to do.
If you want to penalize large misses more heavily, for example if these misses are very costly for the business, use RMSE. However, be careful as RMSE is sensitive to outliers, such as demand spikes which are much larger than usual when a sale occurs.
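Putting those two points together, an evaluation sketch might look like the following (the helper names and toy numbers are illustrative): MAE restricted to non-zero periods for a Croston-type rate forecast, plus plain RMSE when large misses are the costly ones.

```python
import numpy as np

def mae_nonzero(actual, forecast):
    """MAE computed only over periods where demand actually occurred."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    mask = actual > 0
    return np.mean(np.abs(actual[mask] - forecast[mask]))

def rmse(actual, forecast):
    """RMSE over all periods; penalizes large misses, but is sensitive to spikes."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

actual   = [0, 0, 3, 0, 12, 0, 0, 1, 0, 6]
forecast = [1.1, 1.1, 1.2, 1.2, 1.3, 1.5, 1.4, 1.4, 1.3, 1.3]   # e.g. a Croston-style rate
print(mae_nonzero(actual, forecast), rmse(actual, forecast))
```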
That said, even the best forecast accuracy metric may fail to capture what matters most in practice: stock availability, service levels, and inventory-related costs. As Syntetos and Boylan (2005) emphasize, forecast accuracy is only a proxy for operational performance, and not all accuracy measures are equally meaningful for intermittent demand. A model may score well on traditional metrics like RMSE or MAE, yet still result in stockouts or excess inventory due to poor handling of the demand timing. This is particularly problematic in intermittent demand scenarios, where both the occurrence and magnitude of demand are irregular. The authors argue for using scale-independent metrics such as Relative Geometric Root Mean Square Error (RGRMSE) and Percentage Best (PBt), which better reflect the practical utility of forecasts across diverse items. Ultimately, they advocate aligning forecasting evaluation not just with statistical error minimization but with the broader goal of supporting effective inventory decision-making.
A Word on Machine Learning: Skip the Classifiers
In traditional forecasting setups, especially with low data availability, it's common to classify demand types before modeling. Approaches like SBA-INT (Syntetos–Boylan Approximation with Intermittency Classification) use indicators such as ADI (Average Demand Interval) and CV² (Squared Coefficient of Variation) to label demand as smooth, intermittent, or erratic, then assign models accordingly (e.g., SES for smooth, SBA for intermittent).
This rule-based logic works in heuristic-driven or spreadsheet-heavy environments where simplicity and explainability are key. But in machine learning workflows, this extra classification step is usually unnecessary, and often counterproductive.
Why? Because machine learning models such as gradient boosting, zero-inflated regressors, or even recurrent neural nets can learn the structure of the demand pattern directly from raw and engineered features, without needing you to pre-label it. Instead of classifying demand, let the model learn from binary indicators for whether demand occurred, lag variables for time since last sale, rolling averages or volatility of recent demand, cyclic or seasonal indicators, and historical zero run lengths or dispersion metrics.
This approach is particularly effective in domains like retail, fashion, luxury, and spare parts, where sparse, erratic sales patterns are common and where prediction quality depends more on feature richness than on demand type taxonomy.
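As a rough illustration, here is what that feature engineering might look like in pandas on a hypothetical weekly sales table; the column names, window lengths, and toy data are placeholders, not a prescription.

```python
import pandas as pd

# Hypothetical weekly sales for one SKU; in practice this comes from your own history.
df = pd.DataFrame({
    "week": pd.date_range("2024-01-07", periods=12, freq="W"),
    "units": [0, 0, 3, 0, 0, 12, 0, 0, 1, 0, 0, 6],
})

# Did any demand occur this period?
df["sold"] = (df["units"] > 0).astype(int)

# Periods since the last sale (0 on sale weeks); shift before training to avoid leakage.
idx = df.index.to_series()
df["periods_since_sale"] = idx - idx.where(df["sold"] == 1).ffill()

# Recent demand level and volatility, shifted so only past periods are used.
df["roll_mean_4"] = df["units"].shift(1).rolling(4, min_periods=1).mean()
df["roll_std_4"] = df["units"].shift(1).rolling(4, min_periods=1).std()

# A simple seasonal/cyclic indicator.
df["month"] = df["week"].dt.month

print(df)
```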
The bottom line: If you're using ML, don’t waste time assigning demand classes. Instead:
Model demand occurrence and demand size separately (a sketch follows this list)
Engineer features to reflect real-world sales dynamics
And let the algorithm do the learning, no ADI thresholds required.
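Here is a minimal sketch of that two-part setup on synthetic data, using scikit-learn gradient boosting as one possible choice; the split into an occurrence classifier and a size regressor is a common pattern, not the only valid one.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy feature matrix (e.g. periods since last sale, rolling mean, month) and a sparse target.
X = rng.normal(size=(500, 3))
occurred = rng.random(500) < 0.2                      # roughly 80% zero periods
y = np.where(occurred, rng.poisson(5, 500), 0)

# Stage 1: probability that any demand occurs in a period.
clf = GradientBoostingClassifier().fit(X, (y > 0).astype(int))
p_demand = clf.predict_proba(X)[:, 1]

# Stage 2: expected size, trained only on periods where demand occurred.
reg = GradientBoostingRegressor().fit(X[y > 0], y[y > 0])
size = reg.predict(X)

# Combined per-period expectation: P(demand) * E[size | demand].
expected_demand = p_demand * size
print(expected_demand[:10].round(2))
```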
Which Model Should You Use (And When)?
Choosing the right forecasting model for intermittent demand isn’t about finding the “best” method in general; it’s about matching the model to your data characteristics, operational needs, and tooling environment.
If your demand is intermittent and low volume with minimal data, use Croston or SBA. These are simple baseline methods that work well in Excel or standard ERP systems, though they may lag when demand patterns change.
If your demand frequency shifts over time, use TSB (Teunter–Syntetos–Babai). It updates both demand size and occurrence probability continuously, making it more responsive, though potentially sensitive to outliers (see the sketch after this list).
If your demand is erratic but has an underlying structure, use iMAPA or ADIDA. These models aggregate data across time to reduce noise and capture trends, though they require disaggregation for day-to-day planning.
If your demand is extremely sparse with long periods of zero sales, use HES (Hyperbolic-Exponential Smoothing). It applies a decay function that stabilizes forecasts without overreacting to random gaps.
If your demand is driven by multiple external or behavioral factors, use machine learning models (e.g., Gradient Boosting, Zero-Inflated Tweedie). These can model both demand occurrence and magnitude directly, provided sufficient data and features are available.
If your demand needs to be forecasted as a probability distribution rather than a point estimate, use Zero-Inflated or Compound Poisson models. These are powerful for probabilistic planning but require more advanced statistical handling.
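For reference, the TSB update recommended above is simple enough to sketch from scratch (the initialization choices here are mine, not from a specific package): the occurrence probability is revised every period, so the forecast keeps shrinking through long zero runs instead of freezing the way Croston does.

```python
import numpy as np

def tsb(y, alpha_d=0.1, alpha_p=0.1):
    """
    Teunter-Syntetos-Babai (TSB): smooths demand size and the *probability*
    of a demand occurrence. The probability is updated every period, so the
    forecast decays during long zero runs. Minimal sketch only.
    """
    y = np.asarray(y, dtype=float)
    p = float(np.mean(y > 0))                        # initial occurrence probability
    z = y[y > 0].mean() if (y > 0).any() else 0.0    # initial demand size
    forecasts = np.empty(len(y))
    for t, d in enumerate(y):
        forecasts[t] = p * z                  # per-period expected demand
        occurred = 1.0 if d > 0 else 0.0
        p = p + alpha_p * (occurred - p)      # probability updated every period
        if d > 0:
            z = z + alpha_d * (d - z)         # size updated only when demand occurs
    return forecasts

demand = [4, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 0]
print(tsb(demand).round(2))   # note how the forecast keeps shrinking once sales stop
```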
What Practitioners Should Actually Do
Let’s end with three concrete suggestions:
Choose metrics suited to intermittent demand and your model: Avoid MAPE as it breaks down with zeros and low demand. While sMAPE and MASE are common alternatives, they can favor poor forecasts in zero-heavy series. For Croston-type models, evaluate accuracy only on non-zero demand periods. And remember, forecast accuracy is just a proxy. True performance lies in service levels, stockouts, and inventory cost outcomes.
Try TSB or iMAPA for better slow-mover forecasting: If you're tying up capital in low-turn inventory, TSB offers better responsiveness to declining demand, while iMAPA stabilizes forecasts by averaging across time buckets.
Skip manual demand classification when using machine learning: Focus on feature engineering, not pre-labeling. Good ML models can learn demand structure directly from data using features like time since last sale, recent volatility, and demand frequency, no need for ADI-based rules.
Thanks for reading!