If you do a Google search, you can find a huge number of articles providing expert guidance on how to measure forecast accuracy, going into the merits of root-mean-square error and other esoteric topics. But few address the reality that one key aspect of measuring forecast accuracy is often dictated by corporate politics rather than by what’s best for the business. The aspect I’m referring to is the forecast level at which you measure accuracy: at the level of weeks, months, or quarters; individual items, product lines, or categories; and so on for customers and geographies.
Corporate Politics Should Not Drive the Forecast Level
First, a story that prompted this blog post. Recently I was part of a conversation with a new client’s demand planning team. They typically measured forecast accuracy at the monthly level, which yielded a 75% error level – not very impressive. So one person proposed measuring forecast accuracy at the level of two months, which reduced the forecast error to about 50%. Another proposed using the quarterly level, reducing the error to something like 25%, which sounded a lot better.
The motivation behind this line of thinking was to make the quality of the forecast look better, because the planning organization was being criticized for its poor accuracy. But the level at which we measure forecast accuracy should be driven by the business problem we’re trying to solve, not to produce the best-looking number because of political considerations. For instance, consider a producer of yogurt, ice cream, and other dairy products shipped directly to individual supermarkets using a direct store delivery model. This company will likely want a forecast for how many units of each product are sold at each store on each day and want to measure accuracy at this level. With this information, you can keep the store shelves full with all your products. On the other hand, if you are in procurement at the same company and are responsible for buying milk from dairy farms, the item/store/daily forecast is more detail than you need. You will be more interested in forecast accuracy at the level of all your products (normalized for milk content) at the weekly and regional level so that you can plan weekly milk purchases for your regional production plant.
In such a company, the item/store/daily forecast is going to be less accurate than the commodity/region/weekly forecast. This is not because the purchasing department is better at forecasting, but because when you aggregate granular data, many of the under- and over-forecast errors cancel each other out and lead to a much more predictable demand pattern for aggregate milk requirements. But no one would suggest that the replenishment group abandon measuring accuracy at the level of item/store/daily to get a better number. This is the appropriate level for measuring accuracy to meet their business goals, regardless of how challenging it may be to come up with a good number.
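To make the cancellation effect concrete, here is a small simulation sketch. All the numbers (50 stores, 28 days, a mean of 20 units per store per day, and the noise level) are hypothetical assumptions chosen for illustration; the point is only that the same forecast scores far better when errors are measured on regional weekly totals than on individual store-days.

```python
import random

random.seed(42)

STORES, DAYS = 50, 28
MEAN_DEMAND = 20  # hypothetical average daily units per store

# Simulate noisy daily demand per store; the forecast is simply the flat mean.
actuals = [[max(0, int(random.gauss(MEAN_DEMAND, 8))) for _ in range(DAYS)]
           for _ in range(STORES)]
forecast = MEAN_DEMAND

# Granular MAPE: one error per item/store/day observation.
granular_errors = [abs(forecast - a) / a
                   for store in actuals for a in store if a > 0]
granular_mape = sum(granular_errors) / len(granular_errors)

# Aggregate MAPE: all stores combined, weekly buckets.
agg_errors = []
for week in range(DAYS // 7):
    actual_total = sum(store[d] for store in actuals
                       for d in range(week * 7, week * 7 + 7))
    forecast_total = forecast * STORES * 7
    agg_errors.append(abs(forecast_total - actual_total) / actual_total)
agg_mape = sum(agg_errors) / len(agg_errors)

print(f"Granular MAPE (store/daily):    {granular_mape:.0%}")
print(f"Aggregate MAPE (region/weekly): {agg_mape:.0%}")
```

Running this, the aggregate error comes out a small fraction of the granular one, purely because under- and over-forecasts across hundreds of store-days offset each other in the weekly regional total.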
What Happens When You Measure Forecast Accuracy at Too High a Level
I don’t want to get too far into the weeds, but a simple example will illustrate what happens when you measure accuracy at too high a level. Imagine the above business produces two flavors of ice cream, vanilla and chocolate. The table below shows forecast and actual values and then the calculated mean absolute percent error (MAPE), the most common forecast accuracy metric. The vanilla ice cream forecast is 33% too low and chocolate is 100% too high. In aggregate, though, the total forecast is only 9% too high because the vanilla under-forecast partially cancelled out the chocolate over-forecast. If your only concern is having adequate milk, cream, and sugar, forecasting at this level is fine. But since you are selling specific flavors of ice cream, forecasting at this level risks losing sales of vanilla ice cream and incurring excess inventory carrying costs for chocolate.
| | Forecast | Actual | Mean Absolute Percent Error (MAPE)* |
|---|---|---|---|
| Vanilla Ice Cream | 100 units | 150 units | 33% |
| Chocolate Ice Cream | 140 units | 70 units | 100% |
| Ice Cream Category | 240 units | 220 units | 9% |
* MAPE, generally the preferred measure of forecast accuracy, is typically defined as the absolute value of (Forecast-Actual) / Actual.
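The table’s figures can be reproduced in a few lines. This is just a sketch of the MAPE definition above applied to the two flavors and to the category total; it shows how the signed errors partially cancel before the absolute value is taken at the aggregate level.

```python
# Figures from the ice cream table above.
data = {
    "Vanilla Ice Cream":   {"forecast": 100, "actual": 150},
    "Chocolate Ice Cream": {"forecast": 140, "actual": 70},
}

def mape(forecast, actual):
    """Absolute percent error: |forecast - actual| / actual."""
    return abs(forecast - actual) / actual

for name, d in data.items():
    print(f"{name}: {mape(d['forecast'], d['actual']):.0%}")
# Vanilla Ice Cream: 33%
# Chocolate Ice Cream: 100%

# Category level: under- and over-forecasts offset each other in the totals,
# so the aggregate error looks much smaller.
total_forecast = sum(d["forecast"] for d in data.values())
total_actual = sum(d["actual"] for d in data.values())
print(f"Ice Cream Category: {mape(total_forecast, total_actual):.0%}")
# Ice Cream Category: 9%
```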
Time is Different
The above example addresses the forecast level with respect to products. You could make the same point with the level of aggregation of customer locations and time. The best level of product and location is usually pretty clear-cut, but time is different. Customers don’t always place orders on a regular basis and their expected delivery lead times can vary, so some planners may consider the best level for forecasting and measuring accuracy somewhat arbitrary. For this reason, changing how you measure forecast accuracy with respect to the level of time is most susceptible to the influences of corporate politics, as illustrated in the story at the beginning of this article. But customers have expectations for when they need orders shipped. You need to analyze your customer ordering patterns and determine at what level of time you need to schedule production and distribution. Quarterly may be fine for an industrial equipment business, but daily or weekly will likely be required for dairy products.
So How Do You Monitor Forecast Accuracy?
At this point, some readers may be wondering how they should summarize forecast accuracy for management meetings if they don’t want to show the MAPE for every single product but they also don’t want to obscure actual forecast performance by measuring MAPE at too high a level. For the example ice cream table above, you can simply average the MAPE values for each flavor to give (33% + 100%) / 2 = 67%. Alternatively, you can weight each MAPE by the product volume, giving (33% × 150 units + 100% × 70 units) / (150 + 70 units) = 54%, the so-called Weighted MAPE or WMAPE. Whether you use 67% or 54%, both give a much more meaningful indication of your forecast accuracy than the 9% figure in the table.
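Both summary calculations fit in a few lines. This is a minimal sketch using the per-flavor figures above; weighting each flavor’s MAPE by its actual volume is equivalent to the common WMAPE definition of summed absolute errors divided by summed actuals.

```python
mapes   = [1 / 3, 1.0]  # per-flavor MAPE: vanilla 33%, chocolate 100%
actuals = [150, 70]     # actual units sold, used as the weights

# Simple average treats each flavor equally.
avg_mape = sum(mapes) / len(mapes)

# WMAPE weights each flavor's error by its actual volume.
wmape = sum(m * a for m, a in zip(mapes, actuals)) / sum(actuals)

print(f"Average MAPE: {avg_mape:.1%}")  # 66.7%
print(f"WMAPE:        {wmape:.1%}")     # 54.5%
```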
Why summarize forecast accuracy in this way? Forecast accuracy is a key determinant of being able to meet customer service level targets while minimizing inventory and overall supply chain costs. Accordingly, forecast accuracy should be monitored by your management team at a summary level as part of your monthly sales and operations planning (S&OP) process. For example, if you forecast at the product/customer distribution center/weekly level, you could average your forecast accuracy figures using the approach in the prior paragraph and present the results by product category, key customer, and month. This way, company management can monitor forecast performance month to month without getting bogged down in the details. Your goal should be to improve forecast accuracy over time. If accuracy is trending the other way, the demand planning team can drill into problem areas in more detail to fix the problem.
This Seems Pretty Basic but It Is Sometimes Forgotten
This seems pretty straightforward, and I think most supply chain professionals get this, but not always. I’ve heard many planners claim they have amazing forecast accuracies of over 95%, only to learn that they are referring to an aggregate forecast at the product category, national, and quarterly level. A forecast at that level may be fine for a VP Sales or CFO focused on hitting their quarterly numbers, but it’s not very useful for the operations group, which needs to plan procurement, production, and distribution for specific products that need to ship to specific customers by specific delivery dates. Even the VP Sales and CFO will benefit if their supply chain organization forecasts at a more granular level, so that they don’t lose sales because of out-of-stocks or drive up costs because of the expedited production and distribution required to meet unanticipated demand.
Another example of what happens when you don’t pay attention to forecast level is apples-to-oranges comparisons of forecast accuracy among different companies. I constantly see such comparisons in surveys and discussions at industry events, with no mention of the forecast level. Unless everyone is forecasting at the same level, such comparisons are meaningless. (Note that even at the same forecast level, such comparisons are often of questionable value. If you want to compare forecasting skill among different product categories, whether among companies or within the same company, you need to take into account the fact that some products with steady demand, such as butter, are easier to forecast than products like tortilla chips, whose volume fluctuates more because of intensive promotions, competition, new product introductions, holidays, football games, and so on. This leads into a discussion of forecast value-added, which I’ll leave for another blog post.)
Conclusion: The Best Forecast Level for Measuring Accuracy
As stated up front, the level at which we measure forecast accuracy should be driven by the business problem we’re trying to solve, not politics. You should never compare forecast accuracy among forecasts at different levels, and you should definitely not select the level of forecast accuracy simply to make your numbers look better. Carefully defining how you measure forecast accuracy is a key part of designing an organization’s demand planning process, and companies should use different forecast levels for different users and contexts across the business.
To Learn More
If you’d like to discuss how New Horizon can help your manufacturing, wholesale, retail, or foodservice company forecast more effectively, please contact us – we’d love to talk.