Which measure of error calculates the average absolute value of the actual forecast error?

MAPE and Bias - Introduction

  • MAPE stands for Mean Absolute Percent Error
  • Bias refers to persistent forecast error
  • Bias is a component of total calculated forecast error
  • Bias refers to consistent under-forecasting or over-forecasting
  • MAPE can be misinterpreted and miscalculated, so use caution in its interpretation

Accurate and timely demand plans are a vital component of a manufacturing supply chain. Inaccurate demand forecasts typically result in supply imbalances when it comes to meeting customer demand. Forecast accuracy at the SKU level is critical for the proper allocation of resources.

When we talk about forecast accuracy in the supply chain, we typically have one measure in mind: the Mean Absolute Percent Error, or MAPE. However, there is a lot of confusion between academic statisticians and corporate supply chain planners in interpreting this metric. Most academics define MAPE as an average of percentage errors over a number of products. Whether that definition is erroneous is subject to debate; either way, it is useless from a manufacturing supply chain perspective. The following is a discussion of forecast error and an elegant method to calculate a meaningful MAPE.


Definition of Forecast Error

Forecast Error is the deviation of the Actual from the forecasted quantity.

  • Error = |Actual – Forecast| = |A – F|
  • Error (%) = |(A – F)|/A

We take absolute values because the magnitude of the error is more important than the direction of the error.

The Forecast Error can be bigger than the Actual or the Forecast, but NOT both. An error above 100% implies zero forecast accuracy, or a very inaccurate forecast.

  • Error close to 0% => Increasing forecast accuracy
  • Forecast Accuracy is the complement of Error
  • Accuracy (%) = 1 – Error (%)

How do you define Forecast Accuracy?

What is the impact of Large Forecast Errors? Is Negative accuracy meaningful?
Regardless of huge errors, even errors much higher than 100% of the Actuals or Forecast, we interpret accuracy as a number between 0% and 100%. A forecast is either perfect, relatively accurate, inaccurate, or just plain incorrect. So we constrain Accuracy to be between 0% and 100%.

More formally, Forecast Accuracy is a measure of how close the actuals are to the forecasted quantity.

  • If actual quantity is identical to Forecast => 100% Accuracy
  • Error > 100% => 0% Accuracy
  • More rigorously, Accuracy = max(1 – Error, 0)
                SKU A   SKU B   SKU X   SKU Y
Forecast           75       0      25      75
Actual             25      50      75      74
Error              50      50      50       1
Error (%)        200%    100%     67%      1%
Accuracy (%)       0%      0%     33%     99%
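
As a sanity check, the error and accuracy definitions above can be reproduced in a few lines of Python. This is a minimal sketch using the numbers from the SKU table; the dictionary layout and names are illustrative choices, not from any standard library:

```python
# Hypothetical SKU data, copied from the table above.
skus = {
    "SKU A": {"forecast": 75, "actual": 25},
    "SKU B": {"forecast": 0,  "actual": 50},
    "SKU X": {"forecast": 25, "actual": 75},
    "SKU Y": {"forecast": 75, "actual": 74},
}

for name, v in skus.items():
    error = abs(v["actual"] - v["forecast"])      # Error = |A - F|
    error_pct = error / v["actual"]               # Error (%) = |A - F| / A
    accuracy = max(1 - error_pct, 0)              # Accuracy = max(1 - Error, 0)
    print(f"{name}: error={error}, error%={error_pct:.0%}, accuracy={accuracy:.0%}")
```

Running this reproduces the table row by row, including SKU A's 200% error clamped to 0% accuracy.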
Simple Methodology for MAPE

This is a simple but intuitive method to calculate MAPE.

  • Add up the absolute errors across all items; call this A
  • Add up the actual (or forecast) quantities across all items; call this B
  • Divide A by B: MAPE is the sum of all absolute errors divided by the sum of actuals (or forecasts), as the sketch below shows
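
For concreteness, here is a minimal Python sketch of this methodology, applied to the four SKUs from the table above (the same illustrative numbers; note that this is the volume-weighted variant of MAPE, sometimes called WMAPE):

```python
forecasts = [75, 0, 25, 75]
actuals   = [25, 50, 75, 74]

sum_abs_errors = sum(abs(a - f) for a, f in zip(actuals, forecasts))  # A
sum_actuals = sum(actuals)                                            # B
mape = sum_abs_errors / sum_actuals                                   # A / B
print(f"MAPE = {mape:.1%}")  # 151 / 224, roughly 67.4%
```

Because the denominator is total volume, high-volume items dominate the result, which is usually what a supply chain planner wants.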


Using mean absolute error, CAN helps clients who are interested in determining the accuracy of industry forecasts. They want to know whether they can trust these industry forecasts, and to get recommendations on how to apply them to improve their strategic planning process. This post is about how CAN assesses the accuracy of industry forecasts when we don’t have access to the original model used to produce the forecast.

First, without access to the original model, the only way we can evaluate an industry forecast’s accuracy is by comparing the forecast to actual economic activity. This is a backwards-looking evaluation, and unfortunately it does not provide insight into the accuracy of the forecast in the future, which there is no way to test. Thus it is important to understand that we have to assume a forecast will be as accurate as it has been in the past, and that future accuracy of a forecast cannot be guaranteed.

As consumers of industry forecasts, we can test their accuracy over time by comparing the forecasted value to the actual value by calculating three different measures. The simplest measure of forecast accuracy is called Mean Absolute Error (MAE). MAE is simply, as the name suggests, the mean of the absolute errors. The absolute error is the absolute value of the difference between the forecasted value and the actual value. MAE tells us how big of an error we can expect from the forecast on average.

One problem with the MAE is that the relative size of the error is not always obvious. Sometimes it is hard to tell a big error from a small error. To deal with this problem, we can find the mean absolute error in percentage terms. Mean Absolute Percentage Error (MAPE) allows us to compare forecasts of different series in different scales. For example, we could compare the accuracy of a forecast of the DJIA with a forecast of the S&P 500, even though these indexes are at different levels.

Since both of these methods are based on the mean error, they may understate the impact of big but infrequent errors. If we focus too much on the mean, we will be caught off guard by the infrequent big error. To adjust for large, rare errors, we calculate the Root Mean Square Error (RMSE). By squaring the errors before we calculate their mean, and then taking the square root of that mean, we arrive at a measure of error size that gives more weight to large but infrequent errors than the mean does. We can also compare RMSE and MAE to determine whether the forecast contains large but infrequent errors: the larger the difference between RMSE and MAE, the more inconsistent the error size.
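
Since the CAN report example itself isn't reproduced here, the following minimal Python sketch illustrates the same point with made-up numbers: a single large error barely moves MAE, but it inflates RMSE, and the gap between the two flags the inconsistency:

```python
import math

# Hypothetical actuals and forecasts; the last period has one large miss.
actuals   = [100, 110, 105, 95, 100]
forecasts = [ 98, 112, 104, 96,  60]

errors = [a - f for a, f in zip(actuals, forecasts)]
mae  = sum(abs(e) for e in errors) / len(errors)                   # mean absolute error
mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / len(errors)  # mean absolute % error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))         # root mean square error

print(f"MAE  = {mae:.1f}")    # average size of the error
print(f"MAPE = {mape:.1%}")   # average error in percentage terms
print(f"RMSE = {rmse:.1f}")   # weights the large, infrequent error more heavily
```

Here RMSE (about 17.9) is roughly double MAE (9.2), which is exactly the RMSE-versus-MAE gap that signals a large but infrequent error.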


While these methods have their limitations, they are simple tools for evaluating forecast accuracy that can be used without knowing anything about a forecast except its past values.

Finally, even if you know the accuracy of the forecast, you should be mindful of the assumption we discussed at the beginning of the post: just because a forecast has been accurate in the past does not mean it will be accurate in the future. Professional forecasters update their methods to try to correct for past errors; however, these corrections may make the forecast less accurate. There is also always the possibility of an event occurring that the model producing the forecast cannot anticipate, a black swan event. When this happens, you don’t know how big the error will be. Errors associated with these events are not the typical errors that RMSE, MAPE, and MAE try to measure. So, while forecast accuracy can tell us a lot about the past, remember these limitations when using forecasts to predict the future.




How do you calculate absolute error in forecasting?

There are many standard, and some not-so-standard, formulas companies use to determine forecast accuracy and/or error. Some commonly used metrics include:

  • Mean Absolute Deviation (MAD) = average of ABS(Actual – Forecast) across all periods
  • Mean Absolute Percent Error (MAPE) = average of 100 × ABS(Actual – Forecast) / Actual across all periods

Which measurement of error is calculated by dividing the sum of the absolute forecast errors by the number of periods?

The Mean Absolute Deviation (MAD), also known as the Mean Absolute Error (MAE), is calculated by dividing the sum of the absolute forecast errors by the number of periods. The closely related Mean Absolute Percentage Error (MAPE), one of the most commonly used KPIs to measure forecast accuracy, divides each period's absolute error by that period's demand and then averages the resulting percentage errors.

What are the measures of forecasting error?

Bias, mean absolute deviation (MAD), and tracking signal are tools to measure and monitor forecast errors.
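
As a minimal sketch of how these three tools relate, here is a Python example on hypothetical demand data (the numbers and the control limit are illustrative assumptions; the tracking signal is the running sum of errors divided by MAD):

```python
# Hypothetical demand data; the forecast persistently under-forecasts.
actuals   = [100, 102, 98, 105, 110, 112]
forecasts = [ 95,  96, 94,  99, 101, 103]

errors = [a - f for a, f in zip(actuals, forecasts)]
bias = sum(errors) / len(errors)                    # mean error; nonzero => bias
mad  = sum(abs(e) for e in errors) / len(errors)    # mean absolute deviation
tracking_signal = sum(errors) / mad                 # cumulative error / MAD

print(f"Bias = {bias:.1f}, MAD = {mad:.1f}, Tracking signal = {tracking_signal:.1f}")
# A tracking signal well outside roughly +/-4 is a common rule-of-thumb sign
# that the forecast is biased rather than just noisy.
```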

What is average error in forecasting?

The mean absolute percentage error (MAPE) is one of the most popular error metrics in time series forecasting. It is calculated by taking the average (mean) of the absolute differences between actuals and predicted values, each divided by the actual value.