As demonstrated in Figure 1 (below), the Tracking Signal acts as an early warning mechanism that allows for forecast correction in times of continuous over- or under-forecasting of actual sales. The basic logic is derived from the Normal Distribution curve: a TS of +/- 0.4 corresponds approximately to +/- 1 standard deviation from the mean (~68% of the total population), and the limits (+/- 3 standard deviations) are +/- 1.000. Many advanced forecasting software packages contain Tracking Signal logic that will change and/or adjust the statistical model when the TS exceeds the allowed limits.
The logic behind the calculations:
Using the last nine months to calculate the TS takes into consideration all types of materials we forecast - stable, dynamic, and seasonal - by always looking over a time horizon that may/will contain swings in actual sales caused by seasonality and other external/internal factors.
Using four 9-month TS indices covers the full last 12 months (the most recent TS is last month plus the previous 8 months, the next TS is the same window shifted back one month, and so forth). Weighting the latest nine-month TS at 50% and slowly decreasing the impact of the older windows allows us to focus on those SKUs where - whether the demand planner did or did not adjust the forecast - the latest 9 periods, combined with the previous ones, display a constant bias to either over- or under-forecast.
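The weighted Tracking Signal described above can be sketched as follows. Here the TS is the cumulative error divided by the sum of absolute errors, which keeps it bounded between -1 and +1 (consistent with the +/- 1.000 limits mentioned earlier). The 50/25/15/10 split across the four windows is an illustrative assumption - the text only fixes 50% for the latest window.

```python
def tracking_signal(actuals, forecasts):
    """TS = sum(errors) / sum(|errors|), bounded in [-1, 1]."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    denom = sum(abs(e) for e in errors)
    return sum(errors) / denom if denom else 0.0

def weighted_ts(actuals, forecasts, window=9,
                weights=(0.50, 0.25, 0.15, 0.10)):
    """Blend four overlapping 9-month TS windows covering the last 12 months.

    weights[0] applies to the most recent window; the split is an assumption.
    """
    total = 0.0
    for shift, w in enumerate(weights):
        end = len(actuals) - shift          # window ends 'shift' months back
        start = end - window
        total += w * tracking_signal(actuals[start:end], forecasts[start:end])
    return total

# Twelve months where actuals consistently exceed the forecast:
# every window's TS is +1, so the weighted TS pins at +1 (constant under-forecast).
actuals   = [110, 120, 115, 130, 125, 140, 135, 150, 145, 160, 155, 170]
forecasts = [100] * 12
print(round(weighted_ts(actuals, forecasts), 3))
```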
On its own, the tracking signal shows only that we have been continuously over- or under-forecasting; it does not tell us whether we need to worry about it. For example, a constant forecast bias of + or - 10% for a C-type SKU may very well be covered by safety stock, and we need not worry about it. By deploying a weighted bias (forecast error) in which we put the heaviest weight on the last 3 rolling months, and conditioning the report to display only the SKUs with a TS beyond +/- 0.4000 and a bias beyond +/- 30%, one can very quickly see the materials that are constantly over or under by more than 30%. From my experience dealing with more than 1,000 SKUs at a time, utilizing these two measures to look for potential troublemakers in my portfolio, my team and I were always a few months ahead of hitting the wall. By simply running this report, we were able to foresee coming problems and, based on it, adjust either the forecast model and/or the overrides to the base forecast. At any point in time, out of - say - 1,000 SKUs, only about 20-30 would appear on this report. That way we could address the future issues right away and then go back to the regular monthly schedule and S&OP cycle.
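A minimal sketch of such an exception report is below. The thresholds (|TS| > 0.4, |bias| > 30%) come from the text; the 20/30/50 weights for the last 3 rolling months, the helper names, and the sample data are assumptions for illustration.

```python
def tracking_signal(actuals, forecasts):
    """TS = sum(errors) / sum(|errors|), bounded in [-1, 1]."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    denom = sum(abs(e) for e in errors)
    return sum(errors) / denom if denom else 0.0

def weighted_bias(actuals, forecasts, weights=(0.20, 0.30, 0.50)):
    """Weighted % bias over the last 3 months, heaviest weight on the latest."""
    return sum(w * (f - a) / a                   # positive = over-forecast
               for w, a, f in zip(weights, actuals[-3:], forecasts[-3:]))

def exception_report(portfolio, ts_limit=0.4, bias_limit=0.30):
    """Return only the SKUs that breach BOTH the TS and the bias thresholds."""
    flagged = []
    for sku, (actuals, forecasts) in portfolio.items():
        ts = tracking_signal(actuals[-9:], forecasts[-9:])
        bias = weighted_bias(actuals, forecasts)
        if abs(ts) > ts_limit and abs(bias) > bias_limit:
            flagged.append((sku, round(ts, 2), round(bias, 2)))
    return flagged

portfolio = {
    "SKU-A": ([100] * 9, [145] * 9),  # constant ~45% over-forecast -> flagged
    "SKU-B": ([100] * 9, [102] * 9),  # biased TS but tiny bias -> ignored
}
print(exception_report(portfolio))
```

Note that SKU-B also has a saturated TS (it is over-forecast every single month), but the 2% bias keeps it off the report - exactly the "do we need to worry?" filtering the paragraph describes.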
Forecast Bias - or error. Most companies treat the error for the last month as the MOST IMPORTANT one to debate, question, and be concerned with. Most companies also PLAY OCCASIONAL GAMES with the orders in the system to 'hit' the quarterly, half-yearly, and/or yearly budgets - by simply advancing orders due to ship next Monday when that Monday happens to fall in the next quarter or year, or by pushing orders out to the next week (if enough sales were generated already to meet the plans), which - again - falls into another month/quarter/year. By putting more weight on the last rolling 3 months, one will eliminate the impact of these games: by focusing on the last three months, we will clearly distinguish between SKUs that have a real forecast accuracy issue and those that just happened to cancel out the error across two consecutive months. I am not suggesting that the last month's error should be eliminated in any way, though. Every company needs to know its performance on all key KPIs during the last monthly cycle. What I am suggesting is that, when applying forward-looking forecast logic, one cannot be preoccupied with the last pothole just hit on the way to work. It is about looking forward, understanding what the road ahead of you is like, and treating the last event as one to avoid by focusing on what is in front of us.
Demand Variability shows the impact of the bullwhip effect on our data. Generally speaking, unless we can influence the ordering patterns starting at the customer level - through batching, lead times, and other supply chain 'issues' - we cannot do much about it. Demand Variability displays the range of error/fluctuation our sales will show if we simply project a straight line through time versus the actual sales. As a rule of thumb, unless one has an opportunity to fix those supply chain issues, this is the range of error one could expect in an average month. Of course, there will be months that exceed this index, but, on average, the error will fall within the range it suggests.
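One way to compute such an index, assuming the "straight line through time" is an ordinary least-squares fit and the range is the average absolute deviation from that line expressed as a percentage of mean sales (the exact formula is my assumption, not specified in the text):

```python
def demand_variability(actuals):
    """Average absolute deviation from a straight-line fit, as a % of mean sales."""
    n = len(actuals)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(actuals) / n
    # Ordinary least-squares slope/intercept for the straight-line projection
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, actuals))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    fitted = [intercept + slope * x for x in xs]
    mad = sum(abs(y - f) for y, f in zip(actuals, fitted)) / n
    return mad / mean_y    # expected monthly fluctuation as a fraction of mean sales

# A lumpy, batch-driven ordering pattern (illustrative data)
sales = [100, 140, 90, 150, 95, 145, 100, 155, 92]
print(f"{demand_variability(sales):.0%}")
```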
Standard Deviation is one of the key measures of the demonstrated error associated with your data. In our example it is calculated on the historical sales only, giving the Supply Planner one of the base/critical indices needed to calculate recommended safety stock. The forecast on its own is just a number; when it is provided with an estimate of error, supply planning can adjust their plans to cover the expected swings versus the forecast. The forecast error over some rolling time horizon and the standard deviation are the information points that supply planning (and demand planning, if involved in the supply review of the S&OP process) can use to debate the new forecast and adjust the supply plan to cover the unknown portion of the forecast.
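As a sketch of how that standard deviation feeds a safety stock recommendation, the classic textbook form safety_stock = z * sigma * sqrt(lead time) can be used. This formula, the 95% service level, and the one-month lead time are common assumptions of mine, not something the text prescribes:

```python
import math
import statistics

def safety_stock(history, z=1.65, lead_time_months=1):
    """z = 1.65 ~ 95% cycle service level, assuming normally distributed demand."""
    sigma = statistics.stdev(history)   # demonstrated error in the historical sales
    return z * sigma * math.sqrt(lead_time_months)

history = [100, 120, 90, 130, 95, 125, 105, 135, 92]   # illustrative monthly sales
print(round(safety_stock(history), 1))
```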
Picture/example no. 2 shows two sets of forecasts: single exponential smoothing using an alpha of 0.2, which generates a high TS at times, and adaptive single exponential smoothing, which uses the tracking signal to adjust the alpha value based on the latest TS index. This way, the model adjusts itself, drawing most of its 'learning' from the recently displayed 'bias' in the forecast to avoid future issues.
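A minimal sketch of both models follows: plain single exponential smoothing with alpha = 0.2, and an adaptive variant (in the spirit of Trigg and Leach) where alpha is set each period to the absolute value of a smoothed tracking signal. The smoothing constant 0.1 for the error terms and the sample demand series are assumed values.

```python
def ses(actuals, alpha=0.2):
    """Plain single exponential smoothing; returns one-step-ahead forecasts."""
    level = actuals[0]
    forecasts = []
    for a in actuals:
        forecasts.append(level)
        level += alpha * (a - level)
    return forecasts

def adaptive_ses(actuals, phi=0.1):
    """Alpha follows |smoothed error / smoothed absolute error| each period."""
    level, e_s, e_abs = actuals[0], 0.0, 1e-9
    forecasts = []
    for a in actuals:
        forecasts.append(level)
        err = a - level
        e_s = phi * err + (1 - phi) * e_s            # smoothed (signed) error
        e_abs = phi * abs(err) + (1 - phi) * e_abs   # smoothed absolute error
        alpha = abs(e_s / e_abs)                     # tracking-signal-driven alpha
        level += alpha * err
    return forecasts

# A level shift in demand: the adaptive model re-learns the new level
# almost immediately, while the fixed-alpha model is still catching up.
demand = [100] * 6 + [160] * 6
print(ses(demand)[-1], adaptive_ses(demand)[-1])
```

The design choice mirrors the paragraph above: when the recent bias is large, the tracking signal pushes alpha toward 1 so the model learns mostly from the latest data; when the forecast is unbiased, alpha shrinks and the model stays stable.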