Models for Time Series and Forecasting
Multi-dimensional Time Series Forecasting

Time series forecasting is a critical aspect of many fields, including finance, economics, weather prediction, and business. It involves predicting future values based on past data, and it plays a crucial role in decision making and planning. There are various methods for time series forecasting, such as ARIMA, neural networks, and other machine learning algorithms. Each method has its strengths and weaknesses, and the choice of method depends on the specific characteristics of the data and the problem at hand. One of the challenges in time series forecasting is dealing with multi-dimensional data. While traditional methods can be applied to univariate time series, they may not be directly applicable to multi-dimensional time series.
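One simple (if naive) way to extend a univariate method to a multi-dimensional series is to apply it to each dimension independently, which ignores the cross-series dependence that dedicated multivariate models are designed to capture. A minimal Python sketch, with invented data and function names chosen for this illustration only:

```python
# Minimal sketch: extending univariate exponential smoothing to a
# multi-dimensional series by smoothing each dimension independently.
# This ignores cross-series dependence, which dedicated multivariate
# models (e.g., vector autoregression) are designed to capture.

def smooth_forecast(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing."""
    forecast = series[0]  # initialize with the first observation
    for value in series[1:]:
        forecast = alpha * value + (1 - alpha) * forecast
    return forecast

def multivariate_forecast(rows, alpha=0.3):
    """rows: list of observations, each a list with one entry per dimension."""
    dims = list(zip(*rows))  # regroup the data by dimension
    return [smooth_forecast(dim, alpha) for dim in dims]

# Two correlated quantities observed over four periods (made-up numbers):
data = [[10.0, 200.0], [12.0, 198.0], [11.0, 205.0], [13.0, 207.0]]
print(multivariate_forecast(data))
```

Per-dimension smoothing is only a baseline; when the dimensions genuinely drive one another, a model that shares information across series will usually forecast better.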
Graduate Economics Lesson Plan: Market Demand Forecasting and Analysis

I. Introduction
In today's globalized economic environment, accurate forecasting and in-depth analysis of market demand are essential to corporate strategy and decision making. This course aims to help graduate students master the basic theory, methods, and practical skills of market demand forecasting and analysis, and to improve their competitiveness in the field of economics.

II. Teaching Objectives
1. Understand the importance of market demand forecasting and analysis and its application in real economic decisions.
2. Master the basic concepts, theoretical framework, and methodology of market demand forecasting and analysis.
3. Be able to use appropriate tools and techniques to forecast and analyze market demand.

III. Course Content
1. Foundations of market demand forecasting
   - Concept and definition of market demand
   - Analysis of demand drivers
   - The market demand curve and the causes of its shifts
2. Market research and data collection
   - Market research methods and design
   - Data collection and organization techniques
   - Data quality assessment methods
3. Demand forecasting methods and models
   - Qualitative methods: expert judgment, the Delphi method, etc.
   - Quantitative methods: time series analysis, regression analysis, etc.
   - Structural models: system dynamics, supply-demand models, etc.
4. Demand analysis tools and techniques
   - Overall market trend analysis
   - Market segmentation and positioning analysis
   - Competitor behavior analysis
5. Case studies and practical projects
   - Case studies of real market demand forecasts
   - Team projects in which students produce a market demand forecast

IV. Teaching Methods and Assessment
The course combines lectures, case analysis, group discussion, and practical projects. Students gain hands-on experience by participating in class activities and completing the associated assignments. Assessment consists of ongoing performance (class participation and group work) and a final evaluation based mainly on an individual report and the team project.

V. Recommended References
1. Montgomery, D.C., & Jennings, C.L. (2019). Introduction to Time Series Analysis and Forecasting (2nd ed.). John Wiley & Sons.
2. Goodwin, P. (2014). Research in Consumer Behavior: Consumer Demand Forecasting and Estimation. Emerald Group Publishing Limited.
3. Kotler, P., & Armstrong, G. (2016). Principles of Marketing (16th ed.). Pearson Education Limited.

The above is a preliminary outline for a graduate economics lesson plan on market demand forecasting and analysis.
CHAPTER 13 FORECASTING

Review Questions

13.1-1 Substantially underestimating demand is likely to lead to many lost sales, unhappy customers, and perhaps allowing the competition to gain the upper hand in the marketplace. Significantly overestimating the demand is very costly due to excessive inventory costs, forced price reductions, unneeded production or storage capacity, and lost opportunity to market more profitable goods.

13.1-2 A forecast of the demand for spare parts is needed to provide good maintenance service.

13.1-3 In cases where the yield of a production process is less than 100%, it is useful to forecast the production yield in order to determine an appropriate value of reject allowance and, consequently, the appropriate size of the production run.

13.1-4 Statistical models to forecast economic trends are commonly called econometric models.

13.1-5 Providing too few agents leads to unhappy customers, lost calls, and perhaps lost business. Too many agents cause excessive personnel costs.

13.2-1 The company mails catalogs to its customers and prospective customers several times per year, as well as publishing mini-catalogs in computer magazines. They then take orders for products over the phone at the company's call center.

13.2-2 Customers who receive a busy signal or are on hold too long may not call back, and business may be lost. If too many agents are on duty there may be idle time, which wastes money because of labor costs.

13.2-3 The manager of the call center is Lydia Weigelt. Her current major frustration is that each time she has used her procedure for setting staffing levels for the upcoming quarter, based on her forecast of the call volume, the forecast usually has turned out to be considerably off.

13.2-4 Assume that each quarter's call volume will be the same as for the preceding quarter, except for adding 25% for quarter 4.

13.2-5 The average forecasting error is commonly called MAD, which stands for Mean Absolute Deviation.
Its formula is MAD = (Sum of the absolute values of the forecasting errors) / (Number of forecasts).

13.2-6 MSE is the mean square error. Its formula is MSE = (Sum of the squares of the forecasting errors) / (Number of forecasts).

13.2-7 A time series is a series of observations over time of some quantity of interest.

13.3-1 In general, the seasonal factor for any period of a year measures how that period compares to the overall average for an entire year.

13.3-2 Seasonally adjusted call volume = (Actual call volume) / (Seasonal factor).

13.3-3 Actual forecast = (Seasonal factor)(Seasonally adjusted forecast).

13.3-4 The last-value forecasting method sometimes is called the naive method because statisticians consider it naive to use just a sample size of one when additional relevant data are available.

13.3-5 Conditions affecting the CCW call volume were changing significantly over the past three years.

13.3-6 Rather than using old data that may no longer be relevant, this method averages the data for only the most recent periods.

13.3-7 This method modifies the moving-average method by placing the greatest weight on the last value in the time series and then progressively smaller weights on the older values.

13.3-8 A small value is appropriate if conditions are remaining relatively stable. A larger value is needed if significant changes in the conditions are occurring relatively frequently.

13.3-9 Forecast = α(Last value) + (1 − α)(Last forecast).
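The smoothing update in 13.3-9 is easy to express in code. Below is a minimal Python sketch (the function name is ours, not the textbook's); with the data of Problem 13.13 later in this chapter, it reproduces that problem's forecasts:

```python
# Simple exponential smoothing: each new forecast blends the latest
# observation with the previous forecast,
#   forecast = alpha * last_value + (1 - alpha) * last_forecast

def exponential_smoothing(values, initial_forecast, alpha):
    """Return the sequence of one-step-ahead forecasts, one per period."""
    forecasts = [initial_forecast]
    for value in values[:-1]:  # the last observation forecasts the next period
        forecasts.append(alpha * value + (1 - alpha) * forecasts[-1])
    return forecasts

# Problem 13.13: alpha = 0.25, initial estimate 5,000, yearly values below.
forecasts = exponential_smoothing([4600, 5300, 6000], 5000, 0.25)
print(forecasts)  # [5000, 4900.0, 5000.0]
```

Note how a small α keeps the forecast close to its previous value, which is exactly the behavior 13.3-8 recommends for stable conditions.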
Estimated trend is added to this formula when using exponential smoothing with trend.

13.3-10 The one big factor that drives total sales up or down is whether there are any hot new products being offered.

13.4-1 CB Predictor uses the raw data to provide the best fit for all these inputs as well as the forecasts.

13.4-2 Each piece of data should have only a 5% chance of falling below the lower line and a 5% chance of rising above the upper line.

13.5-1 The next value that will occur in a time series is a random variable.

13.5-2 The goal of time series forecasting methods is to estimate the mean of the underlying probability distribution of the next value of the time series as closely as possible.

13.5-3 No, the probability distribution is not the same for every quarter.

13.5-4 Each of the forecasting methods, except for the last-value method, placed at least some weight on the observations from Year 1 to estimate the mean for each quarter in Year 2. These observations, however, provide a poor basis for estimating the mean of the Year 2 distribution.

13.5-5 A time series is said to be stable if its underlying probability distribution usually remains the same from one time period to the next. A time series is unstable if both frequent and sizable shifts in the distribution tend to occur.

13.5-6 Since sales drive call volume, the forecasting process should begin by forecasting sales.

13.5-7 The major components are the relatively stable market base of numerous small-niche products and each of a few major new products.

13.6-1 Causal forecasting obtains a forecast of the quantity of interest by relating it directly to one or more other quantities that drive the quantity of interest.

13.6-2 The dependent variable is call volume and the independent variable is sales.

13.6-3 When doing causal forecasting with a single independent variable, linear regression involves approximating the relationship between the dependent variable and the independent variable by a straight line.

13.6-4 In general, the equation for the linear regression line has the form y = a + bx. If there is more than one independent variable, then this regression equation has a term, a constant times the variable, added on the right-hand side for each of these variables.

13.6-5 The procedure used to obtain a and b is called the method of least squares.

13.6-6 The new procedure gives a MAD value of only 120, compared with the old MAD value of 400 with the 25% rule.

13.7-1 Statistical forecasting methods cannot be used if no data are available, or if the data are not representative of current conditions.

13.7-2 Even when good data are available, some managers prefer a judgmental method instead of a formal statistical method.
In many other cases, a combination of the two may be used.

13.7-3 The jury of executive opinion method involves a small group of high-level managers who pool their best judgment to collectively make a forecast, rather than just the opinion of a single manager.

13.7-4 The sales force composite method begins with each salesperson providing an estimate of what sales will be in his or her region.

13.7-5 A consumer market survey is helpful for designing new products and then in developing the initial forecasts of their sales. It is also helpful for planning a marketing campaign.

13.7-6 The Delphi method normally is used only at the highest levels of a corporation or government to develop long-range forecasts of broad trends.

13.8-1 Generally speaking, judgmental forecasting methods are somewhat more widely used than statistical methods.

13.8-2 Among the judgmental methods, the most popular is a jury of executive opinion. Manager's opinion is a close second.

13.8-3 The survey indicates that the moving-average method and linear regression are the most widely used statistical forecasting methods.

Problems

13.1 a) Forecast = last value = 39
b) Forecast = average of all data to date = (5 + 17 + 29 + 41 + 39) / 5 = 131 / 5 = 26
c) Forecast = average of last 3 values = (29 + 41 + 39) / 3 = 109 / 3 = 36
d) It appears as if demand is rising, so the averaging method seems inappropriate because it uses older, out-of-date data.

13.2 a) Forecast = last value = 13
b) Forecast = average of all data to date = (15 + 18 + 12 + 17 + 13) / 5 = 75 / 5 = 15
c) Forecast = average of last 3 values = (12 + 17 + 13) / 3 = 42 / 3 = 14
d) The averaging method seems best since all five months of data are relevant in determining the forecast of sales for next month and the data appear relatively stable.

13.3 MAD = (Sum of forecasting errors) / (Number of forecasts) = (18 + 15 + 8 + 19) / 4 = 60 / 4 = 15
MSE = (Sum of squares of forecasting errors) / (Number of forecasts) = (18² + 15² + 8² + 19²) / 4 = 974 / 4 = 243.5

13.4 a) Method 1 MAD = (258 + 499 + 560 + 809 + 609) / 5 = 2,735 / 5 = 547
Method 2 MAD = (374 + 471 + 293 + 906 + 396) / 5 = 2,440 / 5 = 488
Method 1 MSE = (258² + 499² + 560² + 809² + 609²) / 5 = 1,654,527 / 5 = 330,905
Method 2 MSE = (374² + 471² + 293² + 906² + 396²) / 5 = 1,425,218 / 5 = 285,044
Method 2 gives a lower MAD and MSE.
b) She can use the older data to calculate more forecasting errors and compare MAD for a longer time span. She can also use the older data to forecast the previous five months to see how the methods compare. This may make her feel more comfortable with her decision.

13.5 a) b) c) d)

13.6 a) b) This progression indicates that the state's economy is improving, with the unemployment rate decreasing from 8% to 7% (seasonally adjusted) over the four quarters.

13.7 a)
b) Seasonally adjusted value for Y3(Q4) = 28 / 1.04 = 27. Actual forecast for Y4(Q1) = (27)(0.84) = 23.
c) Y4(Q1) = 23, as shown in part b.
Seasonally adjusted value for Y4(Q1) = 23 / 0.84 = 27. Actual forecast for Y4(Q2) = (27)(0.92) = 25.
Seasonally adjusted value for Y4(Q2) = 25 / 0.92 = 27. Actual forecast for Y4(Q3) = (27)(1.20) = 33.
Seasonally adjusted value for Y4(Q3) = 33 / 1.20 = 27. Actual forecast for Y4(Q4) = (27)(1.04) = 28.
d)

13.8 Forecast = 2,083 − (1,945 / 4) + (1,977 / 4) = 2,091

13.9 Forecast = 782 − (805 / 3) + (793 / 3) = 778

13.10 Forecast = 1,551 − (1,632 / 10) + (1,532 / 10) = 1,541

13.11 Forecast(α) = α(last value) + (1 − α)(last forecast)
Forecast(0.1) = (0.1)(792) + (1 − 0.1)(782) = 783
Forecast(0.3) = (0.3)(792) + (1 − 0.3)(782) = 785
Forecast(0.5) = (0.5)(792) + (1 − 0.5)(782) = 787

13.12 Forecast(α) = α(last value) + (1 − α)(last forecast)
Forecast(0.1) = (0.1)(1,973) + (1 − 0.1)(2,083) = 2,072
Forecast(0.3) = (0.3)(1,973) + (1 − 0.3)(2,083) = 2,050
Forecast(0.5) = (0.5)(1,973) + (1 − 0.5)(2,083) = 2,028

13.13 a) Forecast(year 1) = initial estimate = 5,000
Forecast(year 2) = α(last value) + (1 − α)(last forecast) = (0.25)(4,600) + (1 − 0.25)(5,000) = 4,900
Forecast(year 3) = (0.25)(5,300) + (1 − 0.25)(4,900) = 5,000
b) MAD = (400 + 400 + 1,000) / 3 = 600
MSE = (400² + 400² + 1,000²) / 3 = 440,000
c) Forecast(next year) = (0.25)(6,000) + (1 − 0.25)(5,000) = 5,250

13.14 Forecast = α(last value) + (1 − α)(last forecast) + Estimated trend
Estimated trend = β(Latest trend) + (1 − β)(Latest estimate of trend)
Latest trend = α(Last value − Next-to-last value) + (1 − α)(Last forecast − Next-to-last forecast)
Forecast(year 1) = Initial average + Initial trend = 3,900 + 700 = 4,600
Forecast(year 2) = (0.25)(4,600) + (1 − 0.25)(4,600) + (0.25)[(0.25)(4,600 − 3,900) + (1 − 0.25)(4,600 − 3,900)] + (1 − 0.25)(700) = 5,300
Forecast(year 3) = (0.25)(5,300) + (1 − 0.25)(5,300) + (0.25)[(0.25)(5,300 − 4,600) + (1 − 0.25)(5,300 − 4,600)] + (1 − 0.25)(700) = 6,000

13.15 Forecast = α(last value) + (1 − α)(last forecast) + Estimated trend
Estimated trend = β(Latest trend) + (1 − β)(Latest estimate of trend)
Latest trend = α(Last value − Next-to-last value) + (1 − α)(Last forecast − Next-to-last forecast)
Forecast = (0.2)(550) + (1 − 0.2)(540) + (0.3)[(0.2)(550 − 535) + (1 − 0.2)(540 − 530)] + (1 − 0.3)(10) = 552

13.16 Forecast = α(last value) + (1 − α)(last forecast) + Estimated trend
Estimated trend = β(Latest trend) + (1 − β)(Latest estimate of trend)
Latest trend = α(Last value − Next-to-last value) + (1 − α)(Last forecast − Next-to-last forecast)
Forecast = (0.1)(4,935) + (1 − 0.1)(4,975) + (0.2)[(0.1)(4,935 − 4,655) + (1 − 0.1)(4,975 − 4,720)] + (1 − 0.2)(240) = 5,215

13.17 a) Since sales are relatively stable, the averaging method would be appropriate for forecasting future sales.
This method uses a larger sample size than the last-value method, which should make it more accurate, and since the older data are still relevant, they should not be excluded, as would be the case in the moving-average method.
b) c) d)
e) Considering the MAD values (5.2, 3.0, and 3.9, respectively), the averaging method is the best one to use.
f) Considering the MSE values (30.6, 11.1, and 17.4, respectively), the averaging method is the best one to use.
g) Unless there is reason to believe that sales will not continue to be relatively stable, the averaging method should be the most accurate in the future as well.

13.18 Using the template for exponential smoothing, with an initial estimate of 24, forecast errors were obtained for various values of the smoothing constant α to determine which value is best to use.

13.19 a) Answers will vary. Averaging or moving average appear to do a better job than last value.
b) For last value, a change in April will only affect the May forecast.
For averaging, a change in April will affect all forecasts after April.
For moving average, a change in April will affect the May, June, and July forecasts.
c) Answers will vary. Averaging or moving average appear to do a slightly better job than last value.
d) Answers will vary. Averaging or moving average appear to do a slightly better job than last value.

13.20 a) Since the sales level is shifting significantly from month to month, and there is no consistent trend, the last-value method seems like it will perform well. The averaging method will not do as well because it places too much weight on old data. The moving-average method will be better than the averaging method but will lag any short-term trends. The exponential smoothing method will also lag trends by placing too much weight on old data.
Exponential smoothing with trend will likely not do well because the trend is not consistent.
b) Comparing MAD values (5.3, 10.0, and 8.1, respectively), the last-value method is the best to use of these three options. Comparing MSE values (36.2, 131.4, and 84.3, respectively), the last-value method is the best to use of these three options.
c) Using the template for exponential smoothing, with an initial estimate of 120, forecast errors were obtained for various values of the smoothing constant α to determine which smoothing constant is appropriate.
d) Using the template for exponential smoothing with trend, with initial estimates of 120 for the average value and 10 for the trend, forecast errors were obtained for various values of the smoothing constants α and β to determine which smoothing constants are appropriate.
e) Management should use the last-value method to forecast sales. Using this method, the forecast for January of the new year will be 166. Exponential smoothing with trend with high smoothing constants (e.g., α = 0.5 and β = 0.5) also works well. With this method, the forecast for January of the new year will be 165.

13.21 a) Shifts in total sales may be due to the release of new products on top of a stable product base, as was seen in the CCW case study.
b) Forecasting might be improved by breaking down total sales into stable and new products. Exponential smoothing with a relatively small smoothing constant can be used for the stable product base. Exponential smoothing with trend, with a relatively large smoothing constant, can be used for forecasting sales of each new product.
c) Managerial judgment is needed to provide the initial estimate of anticipated sales in the first month for new products. In addition, a manager should check the exponential smoothing forecasts and make any adjustments that may be necessary based on knowledge of the marketplace.

13.22 a) Answers will vary.
Last value seems to do the best, with exponential smoothing with trend a close second.
b) For last value, a change in April will only affect the May forecast.
For averaging, a change in April will affect all forecasts after April.
For moving average, a change in April will affect the May, June, and July forecasts.
For exponential smoothing, a change in April will affect all forecasts after April.
For exponential smoothing with trend, a change in April will affect all forecasts after April.
c) Answers will vary. Last value or exponential smoothing seem to do better than averaging or moving average.
d) Answers will vary. Last value or exponential smoothing seem to do better than averaging or moving average.

13.23 a) Using the template for exponential smoothing, with an initial estimate of 50, MAD values were obtained for various values of the smoothing constant α; choose the value of α with the smallest MAD.
b) Using the template for exponential smoothing, with an initial estimate of 50, MAD values were obtained for various values of the smoothing constant α; choose the value of α with the smallest MAD.
c) Using the template for exponential smoothing, with an initial estimate of 50, MAD values were obtained for various values of the smoothing constant α.

13.24 a) b) Forecast = 51. c) Forecast = 54.

13.25 a) Using the template for exponential smoothing with trend, with initial estimates of 50 for the average and 2 for the trend and α = 0.2, MAD values were obtained for various values of the smoothing constant β. Choose β = 0.1.
b) Using the template for exponential smoothing with trend, with initial estimates of 50 for the average and 2 for the trend and α = 0.2, MAD values were obtained for various values of the smoothing constant β. Choose β = 0.1.
c) Using the template for exponential smoothing with trend, with initial estimates of 50 for the average and 2 for the trend and α = 0.2, MAD values were obtained for various values of the smoothing constant β.

13.26 a) b) 0.582. Forecast = 74. c) 0.999. Forecast = 79.

13.27 a) The time series is not stable enough for the moving-average method. There appears to be an upward trend.
b) c) d)
e) Based on the MAD and MSE values, exponential smoothing with trend should be used in the future. β = 0.999.
f) For exponential smoothing, the forecasts typically lie below the demands. For exponential smoothing with trend, the forecasts are at about the same level as demand (perhaps slightly above). This would indicate that exponential smoothing with trend is the best method to use hereafter.

13.29

13.30 a) b) c) Winter = (49)(0.550) = 27
Spring = (49)(1.027) = 50
Summer = (49)(1.519) = 74
Fall = (49)(0.904) = 44
d) e) f) g) The exponential smoothing method results in the lowest MAD value (1.42) and the lowest MSE value (2.75).

13.31 a) b) c) d) e) f) g) The last-value method with seasonality has the lowest MAD and MSE values. Using this method, the forecast for Q1 is 23 houses.
h) Forecast(Q2) = (27)(0.92) = 25
Forecast(Q3) = (27)(1.2) = 32
Forecast(Q4) = (27)(1.04) = 28

13.32 a) b) The moving-average method with seasonality has the lowest MAD value.
13.33 a) b) c) d) Exponential smoothing with trend should be used.
e) The best values for the smoothing constants are α = 0.3, β = 0.3, and γ = 0.001 (see cells C28:C38 below).

13.34 a) b) c) d) e) Moving average results in the best MAD value (13.30) and the best MSE value (249.09).
f) MAD = 14.17
g) Moving average performed better than the average of all three, so it should be used next year.
h) The best method is exponential smoothing with seasonality and trend.

13.35 a) [Scatter plot: sales versus month.]
b) c) [Scatter plot: sales versus month, with fitted regression line.]
d) y = 410.33 + (17.63)(11) = 604
e) y = 410.33 + (17.63)(20) = 763
f) The average growth in sales per month is 17.63.

13.36 a) [Scatter plot: applications versus year.]
b) [Scatter plot: applications versus year, with fitted regression line.]
c)
d) y(year 4) = 3,900 + (700)(4) = 6,700
y(year 5) = 3,900 + (700)(5) = 7,400
y(year 6) = 3,900 + (700)(6) = 8,100
y(year 7) = 3,900 + (700)(7) = 8,800
y(year 8) = 3,900 + (700)(8) = 9,500
e) It does not make sense to use the forecast obtained earlier of 9,500. The relationship between the variables has changed and, thus, the linear regression that was used is no longer appropriate.
f) [Scatter plot: applications over years 1–7, with fitted regression line.]
y = 5,229 + 92.9x; y = 5,229 + (92.9)(8) = 5,971
The forecast that it provides for year 8 is not likely to be accurate. It does not make sense to continue to use a linear regression line when changing conditions cause a large shift in the underlying trend in the data.
g) Causal forecasting takes all the data into account, even the data from before changing conditions caused a shift.
Exponential smoothing with trend adjusts to shifts in the underlying trend by placing more emphasis on the recent data.

13.37 a) [Scatter plot: annual demand versus year.]
b) c) [Scatter plot: annual demand versus year, with fitted regression line.]
d) y = 380 + (8.15)(11) = 470
e) y = 380 + (8.15)(15) = 503
f) The average growth per year is 8.15 tons.

13.38 a) The amount of advertising is the independent variable and sales is the dependent variable.
b) [Scatter plot: sales (thousands of passengers) versus amount of advertising ($1,000s).]
c) [Scatter plot: sales versus advertising, with fitted regression line.]
d) y = 8.71 + (0.031)(300) = 18,000 passengers
e) 22 = 8.71 + (0.031)(x); x = $429,000
f) An increase of 31 passengers can be attained for each additional $1,000 spent on advertising.

13.39 a) If the sales change from 16 to 19 when the amount of advertising is 225, then the linear regression line shifts below this point (the line actually shifts up, but not as much as the data point has shifted up).
b) If the sales change from 23 to 26 when the amount of advertising is 450, then the linear regression line shifts below this point (the line actually shifts up, but not as much as the data point has shifted up).
c) If the sales change from 20 to 23 when the amount of advertising is 350, then the linear regression line shifts below this point (the line actually shifts up, but not as much as the data point has shifted up).

13.40 a) The number of flying hours is the independent variable and the number of wing flaps needed is the dependent variable.
b) [Scatter plot: wing flaps needed versus flying hours (thousands).]
c) d) [Scatter plot: wing flaps needed versus flying hours, with fitted regression line.]
e) y = −3.38 + (0.093)(150) = 11
f) y = −3.38 + (0.093)(200) = 15

13.41 Joe should use the linear regression line y = −9.95 + 0.10x to develop a forecast.

Case 13.1

a) We need to forecast the call volume for each day
separately.

1) To obtain the seasonally adjusted call volume for the past 13 weeks, we first have to determine the seasonal factors. Because call volumes follow seasonal patterns within the week, we have to calculate a seasonal factor for Monday, Tuesday, Wednesday, Thursday, and Friday. We use the Template for Seasonal Factors. The 0 values for holidays should not factor into the average. Leaving them blank (rather than 0) accomplishes this. (A blank value does not factor into the AVERAGE function in Excel that is used to calculate the seasonal values.) Using this template, the seasonal factors for Monday, Tuesday, Wednesday, Thursday, and Friday are 1.238, 1.131, 0.999, 0.850, and 0.762, respectively.

2) To forecast the call volume for the next week using the last-value forecasting method, we need to use the Last Value with Seasonality template. To forecast the next week, we need only start with the last Friday value, since the last-value method only looks at the previous day. The forecasted call volume for the next week is 5,045 calls: 1,254 calls are received on Monday, 1,148 on Tuesday, 1,012 on Wednesday, 860 on Thursday, and 771 on Friday.

3) To forecast the call volume for the next week using the averaging forecasting method, we need to use the Averaging with Seasonality template. The forecasted call volume for the next week is 4,712 calls: 1,171 calls are received on Monday, 1,071 on Tuesday, 945 on Wednesday, 804 on Thursday, and 721 on Friday.

4) To forecast the call volume for the next week using the moving-average forecasting method, we need to use the Moving Average with Seasonality template.
Since only the past 5 days are used in the forecast, we start with Monday of the last week to forecast through Friday of the next week. The forecasted call volume for the next week is 4,124 calls: 985 calls are received on Monday, 914 on Tuesday, 835 on Wednesday, 732 on Thursday, and 658 on Friday.

5) To forecast the call volume for the next week using the exponential smoothing forecasting method, we need to use the Exponential with Seasonality template. We start with the initial estimate of 1,125 calls (the average number of calls on non-holidays during the previous 13 weeks). The forecasted call volume for the next week is 4,322 calls: 1,074 calls are received on Monday, 982 on Tuesday, 867 on Wednesday, 737 on Thursday, and 661 on Friday.

b) To obtain the mean absolute deviation for each forecasting method, we simply need to subtract the true call volume from the forecasted call volume for each day in the sixth week. We then need to take the absolute value of the five differences. Finally, we need to take the average of these five absolute values to obtain the mean absolute deviation.

1) The spreadsheet for the calculation of the mean absolute deviation for the last-value forecasting method follows. This method is the least effective of the four methods because it depends heavily upon the average seasonal factors. If the average seasonal factors are not the true seasonal factors for week 6, a large error will appear, because the average seasonal factors are used to transform the Friday call volume in week 5 into forecasts for all call volumes in week 6. We calculated in part (a) that the call volume for Friday is 0.762 times the overall average call volume. In week 6, however, the call volume for Friday is 0.83 times the average call volume over the week.
Also, we calculated that the call volume for Monday is 1.238 times the overall average call volume. In week 6, however, the call volume for Monday is only 1.21 times the average call volume over the week. These differences introduce a large error.
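The Case 13.1 pipeline — estimate day-of-week seasonal factors, divide them out, forecast, then multiply them back in — can be sketched as follows. This is an illustrative reimplementation of the spreadsheet-template logic, with invented call volumes rather than the actual CCW data:

```python
# Sketch of the Case 13.1 workflow: within-week seasonal factors followed
# by a seasonally adjusted last-value forecast. The history below is
# made up for illustration; it is not the CCW call-volume data.

def seasonal_factors(weeks):
    """weeks: list of [Mon..Fri] volumes. Factor = day average / overall average."""
    days = list(zip(*weeks))                      # regroup by weekday
    day_avgs = [sum(d) / len(d) for d in days]
    overall = sum(day_avgs) / len(day_avgs)
    return [avg / overall for avg in day_avgs]

def last_value_with_seasonality(weeks):
    factors = seasonal_factors(weeks)
    # Seasonally adjust the last observed value (Friday of the last week):
    adjusted = weeks[-1][-1] / factors[-1]
    # Re-apply each day's factor to forecast the whole next week:
    return [adjusted * f for f in factors]

history = [[1200, 1100, 1000, 850, 760],
           [1240, 1140, 990, 860, 770]]
print([round(x) for x in last_value_with_seasonality(history)])
```

The same seasonal-adjust/re-seasonalize wrapper works around any of the other forecasting methods (averaging, moving average, exponential smoothing); only the middle step changes.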
Chapter 5 ForecastingLearning ObjectivesStudents will be able to:1.Understand and know when to usevarious families of forecasting models.pare moving averages, exponentialsmoothing, and trend time-seriesmodels.3.Seasonally adjust data.4.Understand Delphi and otherqualitative decision-makingapproaches.pute a variety of error measures.Chapter Outline5.1Introduction5.2Types of Forecasts5.3Scatter Diagrams and TimeSeries5.4Measures of Forecast Accuracy 5.5Time-Series Forecasting Models 5.6Monitoring and ControllingForecasts5.7Using the Computer to ForecastIntroductionEight steps to forecasting:1.Determine the use of the forecast.2.Select the items or quantities to beforecasted.3.Determine the time horizon of theforecast.4.Select the forecasting model ormodels.5.Gather the data needed to make theforecast.6.Validate the forecasting model.7.Make the forecast.8.Implement the results.These steps provide a systematic way of initiating, designing, and implementing a forecasting system.Types of ForecastsMoving Average Exponential SmoothingTrend Projections Time-Series Methods:include historical data over a time intervalForecasting TechniquesNo single method is superiorDelphi Methods Jury of Executive Opinion Sales Force Composite Consumer Market SurveyQualitative Models :attempt to include subjective factorsCausal Methods:include a variety of factorsRegression Analysis Multiple RegressionDecompositionQualitative MethodsDelphi Methodinteractive group process consisting ofobtaining information from a group ofrespondents through questionnaires andsurveysJury of Executive Opinionobtains opinions of a small group of high-level managers in combination withstatistical modelsSales Force Compositeallows each sales person to estimate thesales for his/her region and then compiles the data at a district or national levelConsumer Market Surveysolicits input from customers or potential customers regarding their futurepurchasing plansScatter Diagrams050100150200250300350400450024681012Time 
(Years)A n n u a l S a l e sR a d i o sTelevisionsCom p a c t D i s c s Scatter diagrams are helpful whenforecasting time-series data because they depict the relationship between variables.Measures of ForecastAccuracy Forecast errors allow one to see how well the forecast model works and compare that model with other forecast models.Forecast error= actual value –forecast valueMeasures of Forecast Accuracy (continued)Measures of forecast accuracy include: Mean Absolute Deviation (MAD)Mean Squared Error (MSE)Mean Absolute Percent Error (MAPE)= ∑|forecast errors|n= ∑(errors)n=∑actualn100%error2Hospital Days –Forecast Error ExampleMs. Smith forecastedtotal hospitalinpatient days last year. Now that the actual data are known, she is reevaluatingher forecasting model. Compute the MAD, MSE, and MAPE for her forecast.Month Forecast Actual JAN250243 FEB320315 MAR275286 APR260256 MAY250241 JUN275298 JUL300292 AUG325333 SEP320326 OCT350378 NOV365382 DEC380396Hospital Days –Forecast Error ExampleForecastActual|error|error^2|error/actual|JAN 2502437490.03FEB 3203155250.02MAR 275286111210.04APR 2602564160.02MAY 2502419810.04JUN 275298235290.08JUL 3002928640.03AUG 3253338640.02SEP 3203266360.02OCT 350378287840.07NOV 365382172890.04DEC 380396162560.04AVERAGE11.83192.833.68MAD =MSE =MAPE = .0368*100=Decomposition of a Time-SeriesTime series can be decomposed into:Trend (T):gradual up or downmovement over timeSeasonality (S):pattern offluctuations above or below trendline that occurs every yearCycles(C):patterns in data thatoccur every several yearsRandom variations (R):“blips”inthe data caused by chance andunusual situationsComponents of Decomposition-150-5050150250350450550650012345Time (Years)D e m a n d TrendActual DataCyclicRandomDecomposition of Time-Series: Two ModelsMultiplicative model assumes demand is the product of the four components.demand = T * S * C * RAdditive model assumes demand is the summation of the four components.demand = T + S + C + RMoving Averages Moving 
Moving Averages
Moving average methods consist of computing an average of the most recent n data values in the time series and using this average as the forecast for the next period.
Simple moving average = Σ(demand in previous n periods) / n

Wallace Garden Supply's Three-Month Moving Average
Month      Actual Shed Sales   Three-Month Moving Average
January          10
February         12
March            13
April            16    (10 + 12 + 13)/3 = 11 2/3
May              19    (12 + 13 + 16)/3 = 13 2/3
June             23    (13 + 16 + 19)/3 = 16
July             26    (16 + 19 + 23)/3 = 19 1/3

Weighted Moving Averages
Weighted moving averages use weights to put more emphasis on recent periods.
Weighted moving average = Σ(weight for period n × demand in period n) / Σ(weights)

Calculating Weighted Moving Averages
Weight Applied   Period
3                Last month
2                Two months ago
1                Three months ago
Forecast = (3 × sales last month + 2 × sales two months ago + 1 × sales three months ago) / 6, where 6 is the sum of the weights.

Wallace Garden's Weighted Three-Month Moving Average
Month      Actual Shed Sales   Three-Month Weighted Moving Average
January          10
February         12
March            13
April            16    (3×13 + 2×12 + 1×10)/6 = 12 1/6
May              19    (3×16 + 2×13 + 1×12)/6 = 14 1/3
June             23    (3×19 + 2×16 + 1×13)/6 = 17
July             26    (3×23 + 2×19 + 1×16)/6 = 20 1/2

Exponential Smoothing
Exponential smoothing is a type of moving-average technique that involves little record keeping of past data.
New forecast = previous forecast + α(previous actual − previous forecast)
Mathematically this is expressed as:
Ft = Ft-1 + α(Yt-1 − Ft-1)
where Ft = new forecast, Ft-1 = previous forecast, α = smoothing constant, and Yt-1 = previous period's actual value.

Tonnage-unloaded example (forecasts rounded):
Qtr   Actual Tonnage   Forecast using α = 0.10
1         180           175
2         168           176 = 175.00 + 0.10(180 − 175)
3         159           175 = 175.50 + 0.10(168 − 175.50)
4         175           173 = 174.75 + 0.10(159 − 174.75)
5         190           173 = 173.18 + 0.10(175 − 173.18)
6         205           175 = 173.36 + 0.10(190 − 173.36)
7         180           178 = 175.02 + 0.10(205 − 175.02)
8         182           178 = 178.02 + 0.10(180 − 178.02)
9          ?            179 = 178.22 + 0.10(182 − 178.22)

Qtr   Actual Tonnage   Forecast using α = 0.50
1         180           175
2         168           178 = 175.00 + 0.50(180 − 175)
3         159           173 = 177.50 + 0.50(168 − 177.50)
4         175           166 = 172.75 + 0.50(159 − 172.75)
5         190           170 = 165.88 + 0.50(175 − 165.88)
6         205           180 = 170.44 + 0.50(190 − 170.44)
7         180           193 = 180.22 + 0.50(205 − 180.22)
8         182           186 = 192.61 + 0.50(180 − 192.61)
9          ?            184 = 186.30 + 0.50(182 − 186.30)

Selecting a Smoothing Constant
To select the best smoothing constant, evaluate the accuracy of each forecasting model.
Actual   Forecast with α = 0.10   |Deviation|   Forecast with α = 0.50   |Deviation|
180             175                   5                 175                   5
168             176                   8                 178                  10
159             175                  16                 173                  14
175             173                   2                 166                   9
190             173                  17                 170                  20
205             175                  30                 180                  25
180             178                   2                 193                  13
182             178                   4                 186                   4
MAD                                 10.5                                    12.5
The lowest MAD results from α = 0.10.

PM Computer: Moving Average Example
PM Computer assembles customized personal computers from generic parts. The owners purchase generic computer parts in volume at a discount from a variety of sources whenever they see a good deal. It is important that they develop a good forecast of demand for their computers so they can purchase component parts efficiently.

PM Computers: Data
Period   Month   Actual Demand
1        Jan        37
2        Feb        40
3        Mar        41
4        Apr        37
5        May        45
6        Jun        50
7        Jul        43
8        Aug        47
9        Sep        56

Compute a 2-month moving average.
Compute a 3-month weighted moving average using weights of 4, 2, 1 for the past three months of data.
Compute an exponential smoothing forecast using α = 0.7.
Using MAD, which forecast is most accurate?

PM Computers: Moving Average Solution
Period   2-Month MA   |Dev.|   3-Month WMA   |Dev.|   Exp. Sm.   |Dev.|
Feb                                                     37.00      3.00
Mar        38.50       2.50                             39.10      1.90
Apr        40.50       3.50      40.14       3.14       40.43      3.43
May        39.00       6.00      38.57       6.43       38.03      6.97
Jun        41.00       9.00      42.14       7.86       42.91      7.09
Jul        47.50       4.50      46.71       3.71       47.87      4.87
Aug        46.50       0.50      45.29       1.71       44.46      2.54
Sep        45.00      11.00      46.29       9.71       46.24      9.76
Oct        51.50                 51.57                  53.07
MAD                    5.29                  5.43                  4.95
Exponential smoothing resulted in the lowest MAD.

Simple exponential smoothing fails to respond to trends, so a more complex model with a trend adjustment is necessary.
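The smoothing procedures above can be sketched in plain Python, using the Wallace Garden shed-sales and tonnage-unloaded data from the examples (function names are my own; the weights are listed oldest-first):

```python
# Minimal sketches of the simple moving average, weighted moving
# average, and exponential smoothing procedures described above.

def moving_average(history, n):
    """Simple n-period moving average of the most recent n values."""
    return sum(history[-n:]) / n

def weighted_moving_average(history, weights):
    """weights are oldest-first and apply to the last len(weights) values."""
    window = history[-len(weights):]
    return sum(w * d for w, d in zip(weights, window)) / sum(weights)

def exponential_smoothing(actuals, alpha, initial_forecast):
    """F_t = F_{t-1} + alpha * (Y_{t-1} - F_{t-1}); returns every forecast."""
    forecasts = [initial_forecast]
    for y in actuals:
        forecasts.append(forecasts[-1] + alpha * (y - forecasts[-1]))
    return forecasts

shed_sales = [10, 12, 13]                              # Jan-Mar
print(moving_average(shed_sales, 3))                   # April forecast: 11 2/3
print(weighted_moving_average(shed_sales, [1, 2, 3]))  # April forecast: 12 1/6

tonnage = [180, 168, 159, 175, 190, 205, 180, 182]
f = exponential_smoothing(tonnage, 0.10, 175)
print(round(f[-1]))  # quarter-9 forecast, rounded: 179
```

The results match the worked tables: 11 2/3 and 12 1/6 for April, and 179 tons for quarter 9 with α = 0.10.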
Simple exponential smoothing: first-order smoothing.
Trend-adjusted smoothing: second-order smoothing.
A low β gives less weight to more recent trends, while a high β gives more weight to more recent trends.
Forecast including trend: FITt+1 = new forecast (Ft) + trend correction (Tt)
where Tt = (1 − β)Tt-1 + β(Ft − Ft-1)
Tt = smoothed trend for period t
Tt-1 = smoothed trend for the preceding period
β = trend smoothing constant
Ft = simple exponential smoothed forecast for period t
Ft-1 = forecast for period t − 1

Trend Projection
Trend projections are used to forecast time-series data that exhibit a linear trend. Least squares may be used to determine a trend projection for future forecasts: it determines the trend line by minimizing the sum of squared errors between the trend-line forecasts and the actual observed values. The independent variable is the time period, and the dependent variable is the actual observed value in the time series.

The formula for the trend projection is:
Y = b0 + b1X
where Y = predicted value, b1 = slope of the trend line, b0 = intercept, and X = time period (1, 2, 3, …, n).

Midwestern Manufacturing Trend Projection Example
Midwestern Manufacturing Company's demand for electrical generators over the period 1996–2002 is given below.
Year   Time   Sales
1996    1      74
1997    2      79
1998    3      80
1999    4      90
2000    5     105
2001    6     142
2002    7     122

Midwestern Manufacturing Company Trend Solution
SUMMARY OUTPUT
Regression Statistics
Multiple R           0.89
R Square             0.80
Adjusted R Square    0.76
Standard Error      12.43
Observations         7

ANOVA
             df     SS        MS        F       Significance F
Regression    1    3108.04   3108.04   20.11    0.01
Residual      5     772.82    154.56
Total         6    3880.86

             Coefficients   Standard Error   t Stat   P-value   Lower 95%
Intercept       56.71           10.51         5.40      0.00      29.70
Time            10.54            2.35         4.48      0.01       4.50

Sales = 56.71 + 10.54(time)

Midwestern Manufacturing's Trend
[Figure: actual demand for 1996–2002 with the fitted trend line ŷ = 56.71 + 10.54X and forecast points extending the line]

Seasonal Variations
Seasonal indices can be used to make adjustments in the forecast for seasonality. A seasonal index indicates how a particular season compares with an average season. The seasonal index can be found by dividing the average value for a particular season by the average of all the data.

Eichler Supplies: Seasonal Index Example
         Sales Demand
Month   Year 1   Year 2   Average Two-Year Demand   Average Monthly Demand   Seasonal Index
Jan       80      100              90                        94                 0.957
Feb       75       85              80                        94                 0.851
Mar       80       90              85                        94                 0.904
Apr       90      110             100                        94                 1.064
May      115      131             123                        94                 1.309
…          …        …               …                         …                    …
Total average demand = 1,128; average monthly demand = 1,128/12 = 94.
Seasonal index = average two-year demand / average monthly demand.

Seasonal Variations with Trend
A centered moving average (CMA) is an approach that prevents a variation due to trend from being incorrectly interpreted as a variation due to the season.
Steps of the Multiplicative Time-Series Model:
1. Compute the CMA for each observation.
2. Compute the seasonal ratio (observation/CMA).
3. Average the seasonal ratios to get seasonal indices.
4. If the seasonal indices do not add to the number of seasons, multiply each index by (number of seasons)/(sum of the indices).

Turner Industries: Seasonal Variations with Trend
Turner Industries' sales figures are shown below with the CMA and seasonal ratio.
Year   Quarter   Sales   CMA       Seasonal Ratio
1        1        108
1        2        125
1        3        150    132         1.136
1        4        141    134.125     1.051
2        1        116    136.375     0.851
2        2        134    138.875     0.965
2        3        159    141.125     1.127
2        4        152    143         1.063
3        1        123    145.125     0.848
3        2        142    147.187     0.965
3        3        168
3        4        165

Example: CMA (quarter 3, year 1) = [0.5(108) + 125 + 150 + 141 + 0.5(116)] / 4 = 132
Seasonal ratio = sales in quarter 3 / CMA = 150/132 = 1.136

Decomposition Method with Trend and Seasonal Components
Decomposition is the process of isolating linear trend and seasonal factors to develop more
accurate forecasts. There are five steps to decomposition:
1. Compute the seasonal index for each season.
2. Deseasonalize the data by dividing each number by its seasonal index.
3. Compute a trend line with the deseasonalized data.
4. Use the trend line to forecast.
5. Multiply the forecasts by the seasonal index.

Turner Industries has noticed a trend in its quarterly sales figures; there is also a seasonal component. Below are the seasonal indices* and the deseasonalized sales data.
Yr   Qtr   Sales   Seasonal Index   Deseasonalized Sales
1     1     108        0.85             127.059
1     2     125        0.96             130.208
1     3     150        1.13             132.743
1     4     141        1.06             133.019
2     1     116        0.85             136.471
2     2     134        0.96             139.583
2     3     159        1.13             140.708
2     4     152        1.06             143.396
3     1     123        0.85             144.706
3     2     142        0.96             147.917
3     3     168        1.13             148.673
3     4     165        1.06             155.660
* Each index is derived by averaging that quarter's seasonal ratios (refer to slide 5-37). For example, the seasonal index for quarter 1 = (0.851 + 0.848)/2 ≈ 0.85, and the first deseasonalized value is 108/0.85 = 127.059.

Using the deseasonalized data, the following trend line was computed:
SUMMARY OUTPUT
Regression Statistics
Multiple R           0.9898
R Square             0.9797
Adjusted R Square    0.9777
Standard Error       1.2691
Observations        12

ANOVA
             df     SS       MS       F        Significance F
Regression    1    778.58   778.58   483.38    8.49E-10
Residual     10     16.11     1.61
Total        11    794.69

             Coefficients   Standard Error   t Stat    P-value    Lower 95%
Intercept      124.78           0.78         159.83    2.26E-18     123.10
Time             2.34           0.11          21.99    8.49E-10       2.10

Sales = 124.78 + 2.34X

Turner Industries: Decomposition Method
Using the trend line, the following forecast was computed for period 13 (quarter 1 of year 4):
Sales = 124.78 + 2.34(13) = 155.2 (before the seasonality adjustment)
After the seasonality adjustment, using the seasonal index for quarter 1:
Sales = 155.2 × 0.85 = 131.92

Multiple Regression with Trend and Seasonal Components
Multiple regression can be used to develop an additive decomposition model. One independent variable is time.
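The trend-projection and decomposition calculations above can be sketched in plain Python. This is a minimal sketch, not the spreadsheet procedure: the slope and intercept come from the least-squares normal equations, and the seasonal indices are the rounded values from the Turner Industries table (function names are my own):

```python
# Least-squares trend line (Midwestern Manufacturing) and the
# decomposition forecast (Turner Industries), in plain Python.

def least_squares(x, y):
    """Return (b0, b1) minimizing the sum of squared errors of y = b0 + b1*x."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
         sum((xi - x_bar) ** 2 for xi in x)
    return y_bar - b1 * x_bar, b1

# Midwestern Manufacturing: generator sales, 1996-2002 (time = 1..7).
time  = [1, 2, 3, 4, 5, 6, 7]
sales = [74, 79, 80, 90, 105, 142, 122]
b0, b1 = least_squares(time, sales)
print(round(b0, 2), round(b1, 2))  # 56.71 10.54, matching the regression output

# Turner Industries: trend forecast on deseasonalized data, then reseasonalize.
seasonal_index = {1: 0.85, 2: 0.96, 3: 1.13, 4: 1.06}

def decomposition_forecast(period, quarter, b0=124.78, b1=2.34):
    """Decomposition steps 4-5: trend forecast times the seasonal index."""
    return (b0 + b1 * period) * seasonal_index[quarter]

print(round(decomposition_forecast(13, 1), 2))  # period 13 (Q1, year 4): 131.92
```

Both printed results agree with the worked figures in the text.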
Seasons are represented by dummy independent variables:
Y = a + b1X1 + b2X2 + b3X3 + b4X4
where X1 = time period
X2 = 1 if quarter 2, 0 otherwise
X3 = 1 if quarter 3, 0 otherwise
X4 = 1 if quarter 4, 0 otherwise

Monitoring and Controlling Forecasts
Tracking signals measure how well predictions fit actual data:
Tracking signal = RSFE / MAD
where RSFE = Σ(actual demand in period i − forecast demand in period i)
MAD = Σ|forecast errors| / n
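The tracking-signal formula can be sketched as follows. This is a minimal sketch; the demand numbers are illustrative, not from the text:

```python
# Tracking signal = RSFE / MAD, computed over the forecast errors so far.
def tracking_signal(actuals, forecasts):
    errors = [a - f for a, f in zip(actuals, forecasts)]
    rsfe = sum(errors)                              # running sum of forecast errors
    mad = sum(abs(e) for e in errors) / len(errors)
    return rsfe / mad

# Illustrative data: forecasts consistently low, so the signal is positive.
print(round(tracking_signal([100, 104, 110], [95, 100, 100]), 2))  # 3.0
```

A signal near zero indicates unbiased forecasts; a large positive or negative value suggests the model is persistently under- or over-forecasting.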
ARIMA (a model for forecasting time series)
Autoregressive integrated moving average (ARIMA)
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalisation of an autoregressive moving average (ARMA) model. These models are fitted to time series data either to better understand the data or to predict future points in the series. They are applied in some cases where the data show evidence of non-stationarity, where an initial differencing step (corresponding to the "integrated" part of the model) can be applied to remove the non-stationarity.

The model is generally referred to as an ARIMA(p,d,q) model, where p, d, and q are integers greater than or equal to zero that refer to the order of the autoregressive, integrated, and moving average parts of the model, respectively. ARIMA models form an important part of the Box-Jenkins approach to time-series modelling. When one of the terms is zero, it is usual to drop AR, I, or MA. For example, an I(1) model is ARIMA(0,1,0), and an MA(1) model is ARIMA(0,0,1).

Definition
Given a time series of data Xt, where t is an integer index and the Xt are real numbers, an ARMA(p,q) model is given by:
(1 − α1L − α2L² − … − αpL^p) Xt = (1 + θ1L + θ2L² + … + θqL^q) εt
where L is the lag operator, the αi are the parameters of the autoregressive part of the model, the θi are the parameters of the moving average part, and the εt are error terms. The error terms are generally assumed to be independent, identically distributed variables sampled from a normal distribution with zero mean.

Assume now that the autoregressive polynomial (1 − α1L − … − αpL^p) has a unit root of multiplicity d. Then it can be rewritten as:
(1 − α1L − … − αpL^p) = (1 − φ1L − … − φ(p−d)L^(p−d))(1 − L)^d
An ARIMA(p,d,q) process expresses this polynomial factorisation property and is given by:
(1 − φ1L − … − φpL^p)(1 − L)^d Xt = (1 + θ1L + … + θqL^q) εt
and thus can be thought of as a particular case of an ARMA(p+d,q) process whose autoregressive polynomial has some roots at unity. For this reason, every ARIMA model with d > 0 is not wide-sense stationary.

Forecasts using ARIMA models
ARIMA models are used for observable non-stationary processes Xt that have some clearly identifiable trends:
- a constant trend (i.e., a non-zero average) leads to d = 1
- a linear trend (i.e., linear growth behavior) leads to d = 2
- a quadratic trend (i.e., quadratic growth behavior) leads to d = 3
In these cases, the ARIMA model can be viewed as a "cascade" of two models. The first is non-stationary:
Yt = (1 − L)^d Xt
while the second is wide-sense stationary:
(1 − φ1L − … − φpL^p) Yt = (1 + θ1L + … + θqL^q) εt
Standard forecasting techniques can now be formulated for the process Yt, and then (given a sufficient number of initial conditions) Xt can be forecast via appropriate integration steps.

Examples
Some well-known special cases arise naturally. For example, an ARIMA(0,1,0) model is given by:
Xt = Xt-1 + εt
which is simply a random walk.
A number of variations on the ARIMA model are commonly used. For example, if multiple time series are used, then the Xt can be thought of as vectors and a VARIMA model may be appropriate. Sometimes a seasonal effect is suspected in the model. For example, consider a model of daily road traffic volumes: weekends clearly exhibit different behaviour from weekdays, so in this case it is often considered better to use a SARIMA (seasonal ARIMA) model than to increase the order of the AR or MA parts of the model. If the time series is suspected to exhibit long-range dependence, then the d parameter may be replaced by certain non-integer values in an autoregressive fractionally integrated moving average model, which is also called a fractional ARIMA (FARIMA or ARFIMA) model.

Implementations in statistics packages
In R, the stats package includes an arima function, documented in "ARIMA Modelling of Time Series". Besides the ARIMA(p,d,q) part, the function also includes seasonal factors, an intercept term, and exogenous variables (xreg, called "external regressors").

Autoregressive integrated moving average (ARIMA) is one of the popular linear models in time series forecasting.
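The "cascade" view above (difference the series, model the stationary part, integrate the forecasts back) can be illustrated with the simplest non-trivial case: an ARIMA(0,1,0) model with drift, i.e. a random walk whose differenced series is modeled by its mean. This is a toy sketch in plain Python with an invented series; for real work one would use a library implementation such as R's arima function:

```python
# Toy illustration of the ARIMA cascade for d = 1: difference the series,
# "model" the stationary differences by their mean (the drift), then
# integrate the forecasts back to the original scale.
def random_walk_drift_forecast(series, steps):
    diffs = [b - a for a, b in zip(series, series[1:])]  # Y_t = (1 - L) X_t
    drift = sum(diffs) / len(diffs)                      # mean of the stationary part
    forecasts, last = [], series[-1]
    for _ in range(steps):
        last += drift                                    # integration step
        forecasts.append(last)
    return forecasts

print(random_walk_drift_forecast([10, 12, 15, 17, 20], 3))  # [22.5, 25.0, 27.5]
```

A full ARIMA(p,d,q) fit would replace the mean model on the differenced series with an estimated ARMA(p,q) model, but the difference-forecast-integrate structure is the same.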
TEST BANK
CHAPTER 7: DEMAND MANAGEMENT, ORDER MANAGEMENT AND CUSTOMER SERVICE

Multiple Choice Questions (correct answers are bolded)

1. The creation across the supply chain and its markets of a coordinated flow of demand is the definition of ___________.
a. order cycle
b. order management
c. demand management
d. supply chain management
[LO 7.1: To explain demand management and demand forecasting models; Easy; Concept; AACSB Category 3: Analytical thinking]

2. ___________ refers to finished goods that are produced prior to receiving a customer order.
a. Make-to-stock
b. Supply management
c. Make-to-order
d. Speculation
[LO 7.1: To explain demand management and demand forecasting models; Easy; Concept; AACSB Category 3: Analytical thinking]

3. ___________ refers to finished goods that are produced after receiving a customer order.
a. Make-to-stock
b. Supply management
c. Make-to-order
d. Postponement
[LO 7.1: To explain demand management and demand forecasting models; Easy; Concept; AACSB Category 3: Analytical thinking]

4. Which of the following is not a basic type of demand forecasting model?
a. exponential smoothing
b. cause and effect
c. judgmental
d. time series
[LO 7.1: To explain demand management and demand forecasting models; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

5. Surveys and analog techniques are examples of ___________ forecasting.
a. cause and effect
b. time series
c. exponential smoothing
d. judgmental
[LO 7.1: To explain demand management and demand forecasting models; Moderate; Application; AACSB Category 3: Analytical thinking]

6. An underlying assumption of ___________ forecasting is that future demand is dependent on past demand.
a. trial and error
b. time series
c. judgmental
d. cause and effect
[LO 7.1: To explain demand management and demand forecasting models; Moderate; Concept; AACSB Category 3: Analytical thinking]

7.
Which forecasting technique assumes that one or more factors are related to demand and that this relationship can be used to estimate future demand?a. exponential smoothingb. judgmentalc. cause and effectd. time series[LO 7.1: To explain demand management and demand forecasting models; Moderate; Concept; AACSB Category 3: Analytical thinking]8. Which forecasting technique tends to be appropriate when there is little or no historical data?a. exponential smoothingb. judgmentalc. time seriesd. cause and effect[LO 7.1: To explain demand management and demand forecasting models; Moderate; Application; AACSB Category 3: Analytical thinking]9. ___________ suggests that supply chain partners will be working from a collectively agreed-to single demand forecast number as opposed to each member working off its own demand forecast projection.a. Supply chain orientationb. Collaborative planning, forecasting, and replenishment (CPFR) conceptc. Order managementd. Supply chain analytics[LO 7.1: To explain demand management and demand forecasting models; Easy; Concept; AACSB Category 3: Analytical thinking]10. ___________ refers to the management of various activities associated with the order cycle.a. Logisticsb. Order processingc. Demand managementd. Order management[LO 7.2: To examine the order cycle and its four components; Easy; Concept; AACSB Category3: Analytical thinking]11. The order cycle is ___________.a. the time that it takes for a check to clearb. the time that it takes from when a customer places an order until the selling firm receives the orderc. also called the replenishment cycled. also called the vendor cycle[LO 7.2: To examine the order cycle and its four components; Moderate; Concept; AACSB Category3: Analytical thinking]12. The order cycle is composed of each of the following except:a. order retrieval.b. order delivery.c. order picking and assembly.d. 
order transmittal.[LO 7.2: To examine the order cycle and its four components; Difficult; Synthesis; AACSB Category 3: Analytical thinking]13. Which of the following statements is false?a. Some organizations have expanded the order management concept to include the length of time it takes for an organization to receive payment for an order.b. The order cycle should be analyzed in terms of total cycle time and cycle time variability.c. Order management has been profoundly impacted by advances in information systems.d. Order management is synonymous with order cycle.[LO 7.2: To examine the order cycle and its four components; Difficult; Synthesis; AACSB Category 3: Analytical thinking]14. Order transmittal is ___________.a. the series of events that occurs from the time a customer places an order and the time the customer receives the orderb. the series of events that occurs between the time a customer places an order and the time the seller receives the orderc. the series of events that occurs between the time a customer perceives the need for a product and the time the seller receives the orderd. the series of events that occurs between the time a customer places an order and the time the order cycle begins[LO 7.2: To examine the order cycle and its four components; Moderate; Concept; AACSB Category 3: Analytical thinking]15. In general, there are ___________ possible ways to transmit orders.a. threeb. fourc. fived. six[LO 7.2: To examine the order cycle and its four components; Moderate; Application; AACSB Category 3: Analytical thinking]16. ___________ and electronic ordering are order transmittal techniques that have emerged over the last 30 years.a. In-personb. Mailc. Faxd. Telephone[LO 7.2: To examine the order cycle and its four components; Easy; Application; AACSB Category 3: Analytical thinking]17. 
What is the second phase of the order cycle?
a. order transmittal
b. order processing
c. order picking and assembly
d. order delivery
[LO 7.2: To examine the order cycle and its four components; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

18. ___________ refers to the time from when the seller receives an order until an appropriate location is authorized to fill the order.
a. Order processing
b. Order cycle
c. Order management
d. Order transmittal
[LO 7.2: To examine the order cycle and its four components; Moderate; Concept; AACSB Category 3: Analytical thinking]

19. Classifying orders according to pre-established guidelines so that a company can prioritize how orders are to be filled refers to ___________.
a. order fill rate
b. order management
c. order processing
d. order triage
[LO 7.2: To examine the order cycle and its four components; Moderate; Concept; AACSB Category 3: Analytical thinking]

20. Order picking and assembly is ___________.
a. the final stage of the order cycle
b. the most important component of the order cycle
c. the order cycle component that follows order processing
d. the order cycle component that follows order transmittal
[LO 7.2: To examine the order cycle and its four components; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

21. The text suggests that ___________ often represents the best opportunity to improve the effectiveness and efficiency of an order cycle.
a. order transmittal
b. order picking and assembly
c. order delivery
d. order processing
[LO 7.2: To examine the order cycle and its four components; Difficult; Synthesis; AACSB Category 3: Analytical thinking]

22. Which of the following is not a characteristic of contemporary voice-based order picking systems?
a. easily disrupted by other noises
b. better voice quality
c. more powerful
d. less costly
[LO 7.2: To examine the order cycle and its four components; Difficult; Synthesis; AACSB Category 3: Analytical thinking]

23. Which of the following is not a benefit of voice-based order picking?
a. fewer picking errors
b.
improved productivityc. minimal training time to learn the technologyd. fewer employee accidents[LO 7.2: To examine the order cycle and its four components; Difficult; Synthesis; AACSB Category 3: Analytical thinking]24. The final phase of the order cycle is called order ___________.a. picking and assemblyb. deliveryc. receivingd. replenishment[LO 7.2: To examine the order cycle and its four components; Moderate; Synthesis; AACSB Category 3: Analytical thinking]25. The time span within which an order must arrive refers to ___________.a. transit time reliabilityb. order deliveryc. delivery windowd. transit time[LO 7.2: To examine the order cycle and its four components; Moderate; Concept; AACSB Category 3: Analytical thinking]26. A commonly used rule of thumb is that it costs approximately ___________ times as much to get a new customer as it does to keep an existing customer.a. threeb. fourc. fived. six[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Application; AACSB Category 3: Analytical thinking]27. An unhappy customer will tell ___________ other people about her/his unhappiness.a. sevenb. ninec. twelved. fifteen[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Application; AACSB Category 3: Analytical thinking]28. The ability of logistics management to satisfy users in terms of time, dependability, communication, and convenience is the definition of ___________.a. customer serviceb. the order cyclec. a perfect orderd. customer satisfaction[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Concept; AACSB Category 3: Analytical thinking]29. The order cycle is an excellent example of the ___________ dimension of customer service.a. timeb. conveniencec. dependabilityd. 
communication
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Application; AACSB Category 3: Analytical thinking]

30. The percentage of orders that can be completely and immediately filled from existing stock is the ___________ rate.
a. optimal inventory
b. order cycle
c. perfect order
d. order fill
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Easy; Concept; AACSB Category 3: Analytical thinking]

31. What component of customer service focuses on the ease of doing business with a seller?
a. convenience
b. dependability
c. time
d. communication
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Easy; Concept; AACSB Category 3: Analytical thinking]

32. What are multichannel marketing systems?
a. channels that have multiple intermediaries between the producer and the consumer
b. separate marketing channels that serve an individual customer
c. the same thing as horizontal marketing systems
d. channels that combine horizontal and vertical marketing systems
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Concept; AACSB Category 3: Analytical thinking]

33. Objectives should be SMART, that is, ___________, measurable, achievable, realistic, and timely.
a. specific
b. strategic
c. static
d. striving
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Application; AACSB Category 3: Analytical thinking]

34. Which of the following statements is false?
a. Goals tend to be broad, generalized statements regarding the overall results that the firm is trying to achieve.
b. Objectives are more specific than goals.
c. A central element to the establishment of customer service goals and objectives is determining the customer's viewpoint.
d.
Objectives should be specific, measurable, achievable, and responsive.[LO 7.4: To familiarize you with select managerial issues associated with customer service; Difficult; Synthesis; AACSB Category 3: Analytical thinking]35. ___________ refers to a process that continuously identifies, understands, and adapts outstanding processes inside and outside an organization.a. Environmental scanningb. Quality managementc. Benchmarkingd. Continuous improvement[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Concept; AACSB Category 3: Analytical thinking]36. ___________ is the process of taking corrective action when measurements indicate that the goals and objectives of customer service are not being achieved.a. Benchmarkingb. Leadershipc. Controld. Managing[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Concept; AACSB Category 3: Analytical thinking]37. Which statement about measuring customer service is true?a. Firms should choose those aspects of customer service that are easiest to measure.b. Order cycle time is the most commonly used customer service measure.c. Firms should use as many customer service measures as they can.d. It is possible for organizations to use only one customer service metric.[LO 7.4: To familiarize you with select managerial issues associated with customer service; Difficult; Synthesis; AACSB Category 3: Analytical thinking]38. ___________ refers to the allocation of revenues and costs to customer segments or individual customers to calculate the profitability of the segments or customers.a. Customer profitability analysisb. Net present valuec. Customer lifetime valued. Activity-based costing (ABC)[LO 7.4: To familiarize you with select managerial issues associated with customer service; Easy; Concept; AACSB Category 3: Analytical thinking]39. Which of the following statements is false?a. 
The service recovery paradox is where a customer holds the responsible company in higher regard after the service recovery than if a service failure had not occurred in the first place.
b. A set formula that companies should follow for service recovery exists.
c. One service recovery guideline involves fair treatment for customers.
d. Service recovery refers to a process for returning a customer to a state of satisfaction after a service or product has failed to live up to expectations.
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Difficult; Synthesis; AACSB Category 3: Analytical thinking]

True-False Questions

1. Demand management is important because efficient and effective supply chains have learned to match both supply and demand. (True)
[LO: Beginning of the chapter material; Moderate; Application; AACSB Category 3: Analytical thinking]

2. In make-to-order situations, finished goods are produced after receiving a customer order. (True)
[LO 7.1: To explain demand management and demand forecasting models; Easy; Concept; AACSB Category 3: Analytical thinking]

3. Simple moving averages and weighted moving averages are examples of judgmental forecasting. (False)
[LO 7.1: To explain demand management and demand forecasting models; Moderate; Application; AACSB Category 3: Analytical thinking]

4. Judgmental forecasting is appropriate when there is little or no historical data. (True)
[LO 7.1: To explain demand management and demand forecasting models; Moderate; Application; AACSB Category 3: Analytical thinking]

5. Forecasting accuracy refers to the relationship between the actual and forecasted demand. (True)
[LO 7.1: To explain demand management and demand forecasting models; Easy; Concept; AACSB Category 3: Analytical thinking]

6. Demand chain management is where supply chain partners share planning and forecasting data to better match up supply and demand.
(False)
[LO 7.1: To explain demand management and demand forecasting models; Moderate; Concept; AACSB Category 3: Analytical thinking]

7. In general terms, order management refers to management of the various activities associated with the order cycle. (True)
[LO 7.2: To examine the order cycle and its four components; Easy; Concept; AACSB Category 3: Analytical thinking]

8. The order cycle is usually the time from when a customer places an order to when the firm receives the order. (False)
[LO 7.2: To examine the order cycle and its four components; Moderate; Concept; AACSB Category 3: Analytical thinking]

9. There are four possible ways to transmit orders. (False)
[LO 7.2: To examine the order cycle and its four components; Moderate; Application; AACSB Category 3: Analytical thinking]

10. Order information is checked for completeness and accuracy in the order processing component of the order cycle. (True)
[LO 7.2: To examine the order cycle and its four components; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

11. The order triage function refers to correcting mistakes that may occur with order picking. (False)
[LO 7.2: To examine the order cycle and its four components; Moderate; Concept; AACSB Category 3: Analytical thinking]

12. A commonsense approach is to fill an order from the facility location that is closest to the customer, with the idea that this should generate lower transportation costs as well as a shorter order cycle time. (True)
[LO 7.2: To examine the order cycle and its four components; Easy; Application; AACSB Category 3: Analytical thinking]

13. Order processing often represents the best opportunity to improve the effectiveness and efficiency of the order cycle. (False)
[LO 7.2: To examine the order cycle and its four components; Moderate; Application; AACSB Category 3: Analytical thinking]

14. Travel time accounts for a majority of an order picker's total pick time.
(True)[LO 7.2: To examine the order cycle and its four components; Moderate; Synthesis; AACSB Category 3: Analytical thinking]15.Pick-to-light technology is an order picking technique that has grown in popularity in recentyears. (True)[LO 7.2: To examine the order cycle and its four components; Moderate; Application; AACSB Category 3: Analytical thinking]16.Order retrieval is the final phase of the order cycle. (False)[LO 7.2: To examine the order cycle and its four components; Moderate; Synthesis; AACSB Category 3: Analytical thinking]17.A key change in the order delivery component of the order cycle is that more and moreshippers are emphasizing both the elapsed transit time and transit time variability. (True) [LO 7.2: To examine the order cycle and its four components; Moderate; Application; AACSB Category 3: Analytical thinking]18.It costs about five times as much to get a new customer as it does to keep an existingcustomer. (True)[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Easy; Application; AACSB Category 3: Analytical thinking]19.Consumers are demanding about the same levels of service today as in years past. (False) [LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Synthesis; AACSB Category 3: Analytical thinking]20.The increased use of vendor quality-control programs necessitates higher levels of customerservice. (True)[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Synthesis; AACSB Category 3: Analytical thinking]21.Customer service can be defined as the ability of logistics management to satisfy users interms of quality, dependability, communication, and convenience. 
(False)
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Concept; AACSB Category 3: Analytical thinking]

22. Dependability consists of consistent order cycles, safe delivery, and complete delivery. (True)
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

23. Companies today will not accept slower order cycles in exchange for higher order cycle consistency. (False)
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

24. Order fill rate is the percentage of orders that can be completely and immediately filled from existing stock. (True)
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Easy; Concept; AACSB Category 3: Analytical thinking]

25. Text messaging and the Internet have lessened the need for telephone interaction and face-to-face contact between seller and customer. (False)
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

26. The convenience component of customer service focuses on the ease of doing business with a seller. (True)
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Easy; Concept; AACSB Category 3: Analytical thinking]

27. Today's customer likes to have multiple purchasing options at her/his disposal, and organizations have responded by developing hybrid marketing channels, that is, separate marketing channels to serve an individual customer. (False)
[LO 7.3: To understand the four dimensions of customer service as they pertain to logistics; Moderate; Concept; AACSB Category 3: Analytical thinking]

28. Goals are the means by which objectives are achieved.
(False)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Concept; AACSB Category 3: Analytical thinking]

29. Objectives should be specific, measurable, achievable, realistic, and timely. (True)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

30. Continuous improvement refers to a process that continuously identifies, understands, and adapts outstanding processes found inside and outside an organization. (False)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Concept; AACSB Category 3: Analytical thinking]

31. Benchmarking should only involve numerical comparisons of relevant metrics. (False)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

32. The nature of the product can affect the level of customer service that should be offered. (True)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

33. A product just being introduced needs a different level of service support than one that is in a mature or declining market stage. (True)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

34. Leadership is the process of taking corrective action when measurements indicate that the goals and objectives of customer service are not being achieved. (False)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Concept; AACSB Category 3: Analytical thinking]

35. The customer service metrics that are chosen should be relevant and important from the customer's perspective.
(True)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

36. It is possible for organizations to use only one customer service metric to measure customer service. (True)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Easy; Application; AACSB Category 3: Analytical thinking]

37. Customer profitability analysis explicitly recognizes that all customers are not the same and that some customers are more valuable than others to an organization. (True)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Concept; AACSB Category 3: Analytical thinking]

38. Customer profitability analysis is grounded in traditional accounting cost allocation methods. (False)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Application; AACSB Category 3: Analytical thinking]

39. Poor customer experiences cost U.S. business in excess of $75 billion per year. (False)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Synthesis; AACSB Category 3: Analytical thinking]

40. In the service recovery paradox, a customer holds the responsible company in higher regard after the service than if a service failure had not occurred in the first place. (True)
[LO 7.4: To familiarize you with select managerial issues associated with customer service; Moderate; Application; AACSB Category 3: Analytical thinking]
MODELING AND FORECASTING REALIZED VOLATILITY*

by Torben G. Andersen ᵃ, Tim Bollerslev ᵇ, Francis X. Diebold ᶜ and Paul Labys ᵈ

First Draft: January 1999
Revised: January 2001, January 2002

We provide a general framework for integration of high-frequency intraday data into the measurement, modeling, and forecasting of daily and lower frequency return volatilities and return distributions. Most procedures for modeling and forecasting financial asset return volatilities, correlations, and distributions rely on potentially restrictive and complicated parametric multivariate ARCH or stochastic volatility models. Use of realized volatility constructed from high-frequency intraday returns, in contrast, permits the use of traditional time-series methods for modeling and forecasting. Building on the theory of continuous-time arbitrage-free price processes and the theory of quadratic variation, we develop formal links between realized volatility and the conditional covariance matrix. Next, using continuously recorded observations for the Deutschemark / Dollar and Yen / Dollar spot exchange rates covering more than a decade, we find that forecasts from a simple long-memory Gaussian vector autoregression for the logarithmic daily realized volatilities perform admirably compared to a variety of popular daily ARCH and more complicated high-frequency models. Moreover, the vector autoregressive volatility forecast, coupled with a parametric lognormal-normal mixture distribution implied by the theoretically and empirically grounded assumption of normally distributed standardized returns, produces well-calibrated density forecasts of future returns, and correspondingly accurate quantile predictions.
Our results hold promise for practical modeling and forecasting of the large covariance matrices relevant in asset pricing, asset allocation and financial risk management applications.

KEYWORDS: Continuous-time methods, quadratic variation, realized volatility, realized correlation, high-frequency data, exchange rates, vector autoregression, long memory, volatility forecasting, correlation forecasting, density forecasting, risk management, value at risk.

_________________
* This research was supported by the National Science Foundation. We are grateful to Olsen and Associates, who generously made available their intraday exchange rate data. For insightful suggestions and comments we thank three anonymous referees and the Co-Editor, as well as Kobi Bodoukh, Sean Campbell, Rob Engle, Eric Ghysels, Atsushi Inoue, Eric Renault, Jeff Russell, Neil Shephard, Til Schuermann, Clara Vega, Ken West, and seminar participants at BIS (Basel), Chicago, CIRANO/Montreal, Emory, Iowa, Michigan, Minnesota, NYU, Penn, Rice, UCLA, UCSB, the June 2000 Meeting of the Western Finance Association, the July 2001 NSF/NBER Conference on Forecasting and Empirical Methods in Macroeconomics and Finance, the November 2001 NBER Meeting on Financial Risk Management, and the January 2002 North American Meeting of the Econometric Society.

ᵃ Department of Finance, Kellogg School of Management, Northwestern University, Evanston, IL 60208, and NBER, phone: 847-467-1285, e-mail: t-andersen@
ᵇ Department of Economics, Duke University, Durham, NC 27708, and NBER, phone: 919-660-1846, e-mail: boller@
ᶜ Department of Economics, University of Pennsylvania, Philadelphia, PA 19104, and NBER, phone: 215-898-1507, e-mail: fdiebold@
ᵈ Graduate Group in Economics, University of Pennsylvania, 3718 Locust Walk, Philadelphia, PA 19104, phone: 801-536-1511, e-mail: labys@

Copyright © 2000-2002 T.G. Andersen, T. Bollerslev, F.X. Diebold and P. Labys

Andersen, T., Bollerslev, T., Diebold, F.X. and Labys, P.
(2003), "Modeling and Forecasting Realized Volatility," Econometrica, 71, 579-625.

1. INTRODUCTION

The joint distributional characteristics of asset returns are pivotal for many issues in financial economics. They are the key ingredients for the pricing of financial instruments, and they speak directly to the risk-return tradeoff central to portfolio allocation, performance evaluation, and managerial decision-making. Moreover, they are intimately related to the fractiles of conditional portfolio return distributions, which govern the likelihood of extreme shifts in portfolio value and are therefore central to financial risk management, figuring prominently in both regulatory and private-sector initiatives.

The most critical feature of the conditional return distribution is arguably its second moment structure, which is empirically the dominant time-varying characteristic of the distribution. This fact has spurred an enormous literature on the modeling and forecasting of return volatility.¹ Over time, the availability of data for increasingly shorter return horizons has allowed the focus to shift from modeling at quarterly and monthly frequencies to the weekly and daily horizons. Forecasting performance has improved with the incorporation of more data, not only because high-frequency volatility turns out to be highly predictable, but also because the information in high-frequency data proves useful for forecasting at longer horizons, such as monthly or quarterly.

In some respects, however, progress in volatility modeling has slowed in the last decade. First, the availability of truly high-frequency intraday data has made scant impact on the modeling of, say, daily return volatility.
It has become apparent that standard volatility models used for forecasting at the daily level cannot readily accommodate the information in intraday data, and models specified directly for the intraday data generally fail to capture the longer interdaily volatility movements sufficiently well. As a result, standard practice is still to produce forecasts of daily volatility from daily return observations, even when higher-frequency data are available. Second, the focus of volatility modeling continues to be decidedly very low-dimensional, if not universally univariate. Many multivariate ARCH and stochastic volatility models for time-varying return volatilities and conditional distributions have, of course, been proposed (see, for example, the surveys by Bollerslev, Engle and Nelson (1994) and Ghysels, Harvey and Renault (1996)), but those models generally suffer from a curse-of-dimensionality problem that severely constrains their practical application. Consequently, it is rare to see substantive applications of those multivariate models dealing with more than a few assets simultaneously.

In view of such difficulties, finance practitioners have largely eschewed formal volatility modeling and forecasting in the higher-dimensional situations of practical relevance, relying instead on ad hoc methods, such as simple exponential smoothing coupled with an assumption of conditionally normally distributed returns.² Although such methods rely on counterfactual assumptions and are almost surely suboptimal, practitioners have been swayed by considerations of feasibility, simplicity and speed of implementation in high-dimensional environments.

Set against this rather discouraging background, we seek to improve matters.

¹ Here and throughout, we use the generic term "volatilities" in reference both to variances (or standard deviations)
We propose a new and rigorous framework for volatility forecasting and conditional return fractile, or value-at-risk (VaR), calculation, with two key properties. First, it efficiently exploits the information in intraday return data, without having to explicitly model the intraday data, producing significant improvements in predictive performance relative to standard procedures that rely on daily data alone. Second, it achieves a simplicity and ease of implementation, which, for example, holds promise for high-dimensional return volatility modeling.

We progress by focusing on an empirical measure of daily return variability called realized volatility, which is easily computed from high-frequency intra-period returns. The theory of quadratic variation suggests that, under suitable conditions, realized volatility is an unbiased and highly efficient estimator of return volatility, as discussed in Andersen, Bollerslev, Diebold and Labys (2001) (henceforth ABDL) as well as in concurrent work by Barndorff-Nielsen and Shephard (2002, 2001a).³ Building on the notion of continuous-time arbitrage-free price processes, we advance in several directions, including rigorous theoretical foundations, multivariate emphasis, explicit focus on forecasting, and links to modern risk management via modeling of the entire conditional density.

Empirically, by treating volatility as observed rather than latent, our approach facilitates modeling and forecasting using simple methods based directly on observable variables.⁴ We illustrate the ideas using the highly liquid U.S. dollar ($), Deutschemark (DM), and Japanese yen (¥) spot exchange rate markets. Our full sample consists of nearly thirteen years of continuously recorded spot quotations from 1986 through 1999. During that period, the dollar, Deutschemark and yen constituted the main axes of the international financial system, and thus spanned the majority of the systematic currency risk faced by large institutional investors and international corporations.

We break the sample into a ten-year "in-sample" estimation period, and a subsequent two-and-a-half-year "out-of-sample" forecasting period. The basic distributional and dynamic characteristics of the foreign exchange returns and realized volatilities during the in-sample period have been analyzed in detail by ABDL (2000a, 2001).⁵ Three pieces of their results form the foundation on which the empirical analysis of this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are approximately Gaussian.

² This approach is exemplified by the highly influential "RiskMetrics" of J.P. Morgan (1997).
³ Earlier work by Comte and Renault (1998), within the context of estimation of a long-memory stochastic volatility model, helped to elevate the discussion of realized and integrated volatility to a more rigorous theoretical level.
⁴ The direct modeling of observable volatility proxies was pioneered by Taylor (1986), who fit ARMA models to absolute and squared returns. Subsequent empirical work exploiting related univariate approaches based on improved realized volatility measures from a heuristic perspective includes French, Schwert and Stambaugh (1987) and Schwert (1989), who rely on daily returns to estimate models for monthly realized U.S. equity volatility, and Hsieh (1991), who fits an AR(5) model to a time series of daily realized logarithmic volatilities constructed from 15-minute S&P500 returns.
Third, the long-run dynamics of realized logarithmic volatilities are well approximated by a fractionally-integrated long-memory process.

Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic realized volatilities: a fractionally-integrated Gaussian vector autoregression (VAR). Importantly, our approach explicitly permits measurement errors in the realized volatilities. Comparing the resulting volatility forecasts to those obtained from currently popular daily volatility models and more complicated high-frequency models, we find that our simple Gaussian VAR forecasts generally produce superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities, the resulting lognormal-normal mixture forecast distribution provides conditionally well-calibrated density forecasts of returns, from which we obtain accurate estimates of conditional return quantiles.

In the remainder of this paper, we proceed as follows. We begin in section 2 by formally developing the relevant quadratic variation theory within a standard frictionless arbitrage-free multivariate pricing environment. In section 3 we discuss the practical construction of realized volatilities from high-frequency foreign exchange returns. Next, in section 4 we summarize the salient distributional features of returns and volatilities, which motivate the long-memory trivariate Gaussian VAR that we estimate in section 5. In section 6 we compare the resulting volatility point forecasts to those obtained from more traditional volatility models. We also evaluate the success of the density forecasts and corresponding VaR estimates generated from the long-memory Gaussian VAR in conjunction with a lognormal-normal mixture distribution. In section 7 we conclude with suggestions for future research and discussion of issues related to the practical implementation of our approach for other financial instruments and markets.

⁵ Strikingly similar and hence confirmatory qualitative findings have been obtained from a separate sample consisting of individual U.S. stock returns in Andersen, Bollerslev, Diebold and Ebens (2001).

2. QUADRATIC RETURN VARIATION AND REALIZED VOLATILITY

We consider an n-dimensional price process defined on a complete probability space, (Ω, F, P), evolving in continuous time over the interval [0,T], where T denotes a positive integer. We further consider an information filtration, i.e., an increasing family of σ-fields, (F_t)_{t∈[0,T]} ⊆ F, which satisfies the usual conditions of P-completeness and right continuity. Finally, we assume that the asset prices through time t, including the relevant state variables, are included in the information set F_t.

Under the standard assumptions that the return process does not allow for arbitrage and has a finite instantaneous mean, the asset price process, as well as smooth transformations thereof, belong to the class of special semi-martingales, as detailed by Back (1991). A fundamental result of stochastic integration theory states that such processes permit a unique canonical decomposition. In particular, we have the following characterization of the logarithmic asset price vector process, p = (p(t))_{t∈[0,T]}.

PROPOSITION 1: For any n-dimensional arbitrage-free vector price process with finite mean, the logarithmic vector price process, p, may be written uniquely as the sum of a finite variation and predictable mean component, A = (A_1, ..., A_n), and a local martingale, M = (M_1, ..., M_n).
These may each be decomposed into a continuous sample-path and jump part,

p(t) = p(0) + A(t) + M(t) = p(0) + A^c(t) + ΔA(t) + M^c(t) + ΔM(t),    (1)

where the finite-variation predictable components, A^c and ΔA, are respectively continuous and pure jump processes, while the local martingales, M^c and ΔM, are respectively continuous sample-path and compensated jump processes, and by definition M(0) ≡ A(0) ≡ 0. Moreover, the predictable jumps are associated with genuine jump risk, in the sense that if ΔA(t) ≠ 0, then

P[ sgn( ΔA(t) ) = − sgn( ΔA(t) + ΔM(t) ) ] > 0,    (2)

where sgn(x) ≡ 1 for x ≥ 0 and sgn(x) ≡ −1 for x < 0.

Equation (1) is standard, see, for example, Protter (1992), chapter 3. Equation (2) is an implication of the no-arbitrage condition. Whenever ΔA(t) ≠ 0, there is a predictable jump in the price - the timing and size of the jump is perfectly known (just) prior to the jump event - and hence there is a trivial arbitrage (with probability one) unless there is a simultaneous jump in the martingale component, ΔM(t) ≠ 0. Moreover, the concurrent martingale jump must be large enough (with strictly positive probability) to overturn the gain associated with a position dictated by sgn(ΔA(t)).

⁶ This does not appear particularly restrictive. For example, if an announcement is pending, a natural way to model the arrival time is according to a continuous hazard function. Then the probability of a jump within each (infinitesimal) instant of time is zero - there is no discrete probability mass - and by arbitrage there cannot be a predictable jump.

Proposition 1 provides a general characterization of the asset return process. We denote the (continuously compounded) return over [t−h,t] by r(t,h) = p(t) − p(t−h). The cumulative return process from t=0 onward, r = (r(t))_{t∈[0,T]}, is then r(t) ≡ r(t,t) = p(t) − p(0) = A(t) + M(t).
Clearly, r(t) inherits all the main properties of p(t) and may likewise be decomposed uniquely into the predictable and integrable mean component, A, and the local martingale, M. The predictability of A still allows for quite general properties in the (instantaneous) mean process, for example it may evolve stochastically and display jumps. Nonetheless, the continuous component of the mean return must have smooth sample paths compared to those of a non-constant continuous martingale - such as a Brownian motion - and any jump in the mean must be accompanied by a corresponding predictable jump (of unknown magnitude) in the compensated jump martingale, ΔM. Consequently, there are two types of jumps in the return process, namely, predictable jumps where ΔA(t) ≠ 0 and equation (2) applies, and purely unanticipated jumps where ΔA(t) = 0 but ΔM(t) ≠ 0. The latter jump event will typically occur when unanticipated news hit the market. In contrast, the former type of predictable jump may be associated with the release of information according to a predetermined schedule, such as macroeconomic news releases or company earnings reports. Nonetheless, it is worth noting that any slight uncertainty about the precise timing of the news (even to within a fraction of a second) invalidates the assumption of predictability and removes the jump in the mean process. If there are no such perfectly anticipated news releases, the predictable, finite variation mean return, A, may still evolve stochastically, but it will have continuous sample paths. This constraint is implicitly invoked in the vast majority of the continuous-time models employed in the literature.⁶

Because the return process is a semi-martingale it has an associated quadratic variation process. Quadratic variation plays a critical role in our theoretical developments.
The following proposition enumerates some essential properties of the quadratic return variation process.⁷

PROPOSITION 2: For any n-dimensional arbitrage-free price process with finite mean, the quadratic variation n×n matrix process of the associated return process, [r,r] = { [r,r]_t }_{t∈[0,T]}, is well-defined. The i'th diagonal element is called the quadratic variation process of the i'th asset return while the ij'th off-diagonal element, [r_i, r_j], is called the quadratic covariation process between asset returns i and j. The quadratic variation and covariation processes have the following properties:

(i) For an increasing sequence of random partitions of [0,T], 0 = τ_{m,0} ≤ τ_{m,1} ≤ ..., such that sup_{j≥1}(τ_{m,j+1} − τ_{m,j}) → 0 and sup_{j≥1} τ_{m,j} → T for m → ∞ with probability one, we have that

lim_{m→∞} { Σ_{j≥1} [r(t∧τ_{m,j}) − r(t∧τ_{m,j−1})] [r(t∧τ_{m,j}) − r(t∧τ_{m,j−1})]′ } → [r,r]_t,    (3)

where t∧τ ≡ min(t,τ), t ∈ [0,T], and the convergence is uniform on [0,T] in probability.

(ii) If the finite variation component, A, in the canonical return decomposition in Proposition 1 is continuous, then

[r_i, r_j]_t = [M_i, M_j]_t = [M_i^c, M_j^c]_t + Σ_{0≤s≤t} ΔM_i(s) ΔM_j(s).    (4)

⁷ All of the properties in Proposition 2 follow, for example, from Protter (1992), chapter 2.
⁸ In the general case with predictable jumps the last term in equation (4) is simply replaced by Σ_{0≤s≤t} Δr_i(s) Δr_j(s), where Δr_i(s) ≡ ΔA_i(s) + ΔM_i(s) explicitly incorporates both types of jumps. However, as discussed above, this case is arguably of little interest from a practical empirical perspective.

The terminology of quadratic variation is justified by property (i) of Proposition 2.
Property (ii) reflects the fact that the quadratic variation of continuous finite-variation processes is zero, so the mean component becomes irrelevant for the quadratic variation.⁸ Moreover, jump components only contribute to the quadratic covariation if there are simultaneous jumps in the price path for the i'th and j'th asset, whereas the squared jump size contributes one-for-one to the quadratic variation. The quadratic variation process measures the realized sample-path variation of the squared return processes. Under the weak auxiliary condition ensuring property (ii), this variation is exclusively induced by the innovations to the return process. As such, the quadratic covariation constitutes, in theory, a unique and invariant ex-post realized volatility measure that is essentially model free. Notice that property (i) also suggests that we may approximate the quadratic variation by cumulating cross-products of high-frequency returns.⁹ We refer to such measures, obtained from actual high-frequency data, as realized volatilities.

⁹ This has previously been discussed by Comte and Renault (1998) in the context of estimating the spot volatility for a stochastic volatility model corresponding to the derivative of the quadratic variation (integrated volatility) process.
¹⁰ This same intuition underlies the consistent filtering results for continuous sample path diffusions in Merton (1980) and Nelson and Foster (1995).

The above results suggest that the quadratic variation is the dominant determinant of the return covariance matrix, especially for shorter horizons.
Specifically, the variation induced by the genuine return innovations, represented by the martingale component, locally is an order of magnitude larger than the return variation caused by changes in the conditional mean.¹⁰ We have the following theorem which generalizes previous results in ABDL (2001).

THEOREM 1: Consider an n-dimensional square-integrable arbitrage-free logarithmic price process with a continuous mean return, as in property (ii) of Proposition 2. The conditional return covariance matrix at time t over [t, t+h], where 0 ≤ t ≤ t+h ≤ T, is then given by

Cov(r(t+h,h) | F_t) = E([r,r]_{t+h} − [r,r]_t | F_t) + Γ_A(t+h,h) + Γ_AM(t+h,h) + Γ_AM′(t+h,h),    (5)

where Γ_A(t+h,h) = Cov(A(t+h) − A(t) | F_t) and Γ_AM(t+h,h) = E(A(t+h) [M(t+h) − M(t)]′ | F_t).

PROOF: From equation (1), r(t+h,h) = [A(t+h) − A(t)] + [M(t+h) − M(t)]. The martingale property implies E(M(t+h) − M(t) | F_t) = E([M(t+h) − M(t)] A(t) | F_t) = 0, so, for i,j ∈ {1, ..., n}, Cov([A_i(t+h) − A_i(t)], [M_j(t+h) − M_j(t)] | F_t) = E(A_i(t+h) [M_j(t+h) − M_j(t)] | F_t). It therefore follows that Cov(r(t+h,h) | F_t) = Cov(M(t+h) − M(t) | F_t) + Γ_A(t+h,h) + Γ_AM(t+h,h) + Γ_AM′(t+h,h). Hence, it only remains to show that the conditional covariance of the martingale term equals the expected value of the quadratic variation. We proceed by verifying the equality for an arbitrary element of the covariance matrix. If this is the i'th diagonal element, we are studying a univariate square-integrable martingale and by Protter (1992), chapter II.6, corollary 3, we have E[M_i²(t+h)] = E([M_i, M_i]_{t+h}), so Var(M_i(t+h) − M_i(t) | F_t) = E([M_i, M_i]_{t+h} − [M_i, M_i]_t | F_t) = E([r_i, r_i]_{t+h} − [r_i, r_i]_t | F_t), where the second equality follows from equation (3) of Proposition 2. This confirms the result for the diagonal elements of the covariance matrix.
An identical argument works for the off-diagonal terms by noting that the sum of two square-integrable martingales remains a square-integrable martingale and then applying the reasoning to each component of the polarization identity, [M_i, M_j]_t = ½ ( [M_i+M_j, M_i+M_j]_t − [M_i, M_i]_t − [M_j, M_j]_t ). In particular, it follows as above that E([M_i, M_j]_{t+h} − [M_i, M_j]_t | F_t) = ½ [ Var([M_i(t+h)+M_j(t+h)] − [M_i(t)+M_j(t)] | F_t) − Var(M_i(t+h) − M_i(t) | F_t) − Var(M_j(t+h) − M_j(t) | F_t) ] = Cov([M_i(t+h) − M_i(t)], [M_j(t+h) − M_j(t)] | F_t). Equation (3) of Proposition 2 again ensures that this equals E([r_i, r_j]_{t+h} − [r_i, r_j]_t | F_t). ∎

Two scenarios highlight the role of the quadratic variation in driving the return volatility process. These important special cases are collected in a corollary which follows immediately from Theorem 1.

COROLLARY 1: Consider an n-dimensional square-integrable arbitrage-free logarithmic price process, as described in Theorem 1. If the mean process, {A(s) − A(t)}_{s∈[t,t+h]}, conditional on information at time t is independent of the return innovation process, {M(u)}_{u∈[t,t+h]}, then the conditional return covariance matrix reduces to the conditional expectation of the quadratic return variation plus the conditional variance of the mean component, i.e., for 0 ≤ t ≤ t+h ≤ T,

Cov(r(t+h,h) | F_t) = E([r,r]_{t+h} − [r,r]_t | F_t) + Γ_A(t+h,h).

If the mean process, {A(s) − A(t)}_{s∈[t,t+h]}, conditional on information at time t is a predetermined function over [t, t+h], then the conditional return covariance matrix equals the conditional expectation of the quadratic return variation process, i.e., for 0 ≤ t ≤ t+h ≤ T,

Cov(r(t+h,h) | F_t) = E([r,r]_{t+h} − [r,r]_t | F_t).    (6)

Under the conditions leading to equation (6), the quadratic variation is the critical ingredient in volatility measurement and forecasting.
This follows as the quadratic variation represents the actual variability of the return innovations, and the conditional covariance matrix is the conditional expectation of this quantity. Moreover, it implies that the time t+h ex-post realized quadratic variation is an unbiased estimator for the return covariance matrix conditional on information at time t.

Although the corollary's strong implications rely upon specific assumptions, these sufficient conditions are not as restrictive as an initial assessment may suggest, and they are satisfied for a wide set of popular models. For example, a constant mean is frequently invoked in daily or weekly return models. Equation (6) further allows for deterministic intra-period variation in the conditional mean, induced by time-of-day or other calendar effects. Of course, equation (6) also accommodates a stochastic mean process as long as it remains a function, over the interval [t, t+h], of variables in the time t information set. Specification (6) does, however, preclude feedback effects from the random intra-period evolution of the system to the instantaneous mean. Although such feedback effects may be present in high-frequency returns, they are likely trivial in magnitude over daily or weekly frequencies, as we argue subsequently. It is also worth stressing that (6) is compatible with the existence of an asymmetric return-volatility relation (sometimes called a leverage effect), which arises from a correlation between the return innovations, measured as deviations from the conditional mean, and the innovations to the volatility process. In other words, the leverage effect is separate from a contemporaneous correlation between the return innovations and the instantaneous mean return.

¹¹ Merton (1982) provides a similar intuitive account of the continuous record h-asymptotics. These limiting results are also closely related to the theory rationalizing the quadratic variation formulas in Proposition 2 and Theorem 1.
Furthermore, as emphasized above, equation (6) does allow for the return innovations over [t-h, t] to impact the conditional mean over [t, t+h] and onwards, so that the intra-period evolution of the system may still impact future expected returns. In fact, this is how potential interaction between risk and return is captured in discrete-time stochastic volatility or ARCH models with leverage effects.

In contrast to equation (6), the first expression in Corollary 1, which involves the conditional variance of the mean component, explicitly accommodates continually evolving random variation in the conditional mean process, although the random mean variation must be independent of the return innovations. Even with this feature present, the quadratic variation is likely an order of magnitude larger than the mean variation, and hence the former remains the critical determinant of the return volatility over shorter horizons. This observation follows from the fact that over horizons of length h, with h small, the variance of the mean return is of order h², while the quadratic variation is of order h. It is an empirical question whether these results are a good guide for volatility measurement at relevant frequencies.¹¹

To illustrate the implications at a daily horizon, consider an asset return with a standard deviation of 1% daily, or 15.8% annually, and a (large) mean return of 0.1%, or about 25% annually. The squared mean return is still only one-hundredth of the variance. The expected daily variation of the mean return is obviously smaller yet, unless the required daily return is assumed to behave truly erratically within the day. In fact, we would generally expect the within-day variance of the expected daily return to be much smaller than the expected daily return itself. Hence, the daily return fluctuations induced by within-day variations in the mean return are almost certainly trivial. For a weekly horizon, similar calculations suggest that the identical conclusion applies.
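The order-of-magnitude claim above is easy to verify numerically. A minimal sketch of the daily calculation (the 1% and 0.1% figures come from the text; the 250-trading-day annualization is a standard convention, not stated in the source):

```python
import math

# Figures from the text: 1% daily return std. dev., 0.1% daily mean return.
daily_std = 0.01
daily_mean = 0.001

# Annualization over ~250 trading days (a standard convention).
annual_std = daily_std * math.sqrt(250)   # ~0.158, i.e., 15.8% annually
annual_mean = daily_mean * 250            # 0.25, i.e., about 25% annually

# The squared mean return versus the return variance.
variance = daily_std ** 2                 # 1e-4, the order-h term
squared_mean = daily_mean ** 2            # 1e-6, the order-h^2 term
ratio = squared_mean / variance           # one-hundredth, as the text states

print(annual_std, annual_mean, ratio)
```

The ratio of one-hundredth is exactly the "squared mean return is still only one-hundredth of the variance" figure in the text.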
Forecasting Methods
The FORECAST Procedure

This section explains the forecasting methods used by PROC FORECAST.

STEPAR Method

In the STEPAR method, PROC FORECAST first fits a time trend model to the series and takes the difference between each value and the estimated trend. (This process is called detrending.) Then, the remaining variation is fit using an autoregressive model. The STEPAR method fits the autoregressive process to the residuals of the trend model using a backwards-stepping method to select parameters. Since the trend and autoregressive parameters are fit in sequence rather than simultaneously, the parameter estimates are not optimal in a statistical sense; however, the estimates are usually close to optimal, and the method is computationally inexpensive.

The STEPAR Algorithm

The STEPAR method consists of the following computational steps:

1. Fit the trend model as specified by the TREND= option using ordinary least-squares regression. This step detrends the data. The default trend model for the STEPAR method is TREND=2, a linear trend model.
2. Take the residuals from step 1 and compute the autocovariances to the number of lags specified by the NLAGS= option.
3. Regress the current values against the lags, using the autocovariances from step 2 in a Yule-Walker framework. Do not bring in any autoregressive parameter that is not significant at the level specified by the SLENTRY= option. (The default is SLENTRY=0.20.) Do not bring in any autoregressive parameter that results in a nonpositive-definite Toeplitz matrix.
4. Find the autoregressive parameter that is least significant. If the significance level is greater than the SLSTAY= value, remove the parameter from the model. (The default is SLSTAY=0.05.) Continue this process until only significant autoregressive parameters remain. If the OUTEST= option is specified, write the estimates to the OUTEST= data set.
5. 
Generate the forecasts using the estimated model and output them to the OUT= data set. Form the confidence limits by combining the trend variances with the autoregressive variances.

Missing values are tolerated in the series; the autocorrelations are estimated from the available data and tapered if necessary. This method requires at least three passes through the data: two passes to fit the model and a third pass to initialize the autoregressive process and write to the output data set.

Default Value of the NLAGS= Option

If the NLAGS= option is not specified, its default value is chosen based on the data frequency specified by the INTERVAL= option and on the number of observations in the input data set, if this can be determined in advance. (PROC FORECAST cannot determine the number of input observations before reading the data when a BY statement or a WHERE statement is used, or if the data are from a tape-format SAS data set or external database. The NLAGS= value must be fixed before the data are processed.)

If the INTERVAL= option is specified, the default NLAGS= value includes lags for up to three years plus one, subject to a maximum of 13 lags or one-third of the number of observations in your data set, whichever is less. If the number of observations in the input data set cannot be determined, the maximum NLAGS= default value is 13. If the INTERVAL= option is not specified, the default is NLAGS=13 or one-third the number of input observations, whichever is less.

If the Toeplitz matrix formed by the autocovariance matrix at a given step is not positive definite, the maximal number of autoregressive lags is reduced. For example, for INTERVAL=QTR, the default is NLAGS=13 (that is, 4×3+1), provided that there are at least 39 observations.

mk:@MSITStore:d:\Program%20Files\SAS\SAS%209.1\core\help\etsug.chm::/etsug.hlp/... 2009-03-29
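The NLAGS= default rules above can be summarized in a few lines. The following sketch is a hypothetical helper, not SAS code; it folds in the floor of 3 that this section also documents, and `periods_per_year` stands in for the frequency implied by INTERVAL=:

```python
def default_nlags(n_obs, periods_per_year=None):
    """Sketch of the documented NLAGS= default: lags for up to three
    years plus one when the data frequency is known, capped at 13 and
    at one-third of the observations, with a floor of 3."""
    if periods_per_year is not None:
        candidate = min(3 * periods_per_year + 1, 13)  # three years plus one
    else:
        candidate = 13                                 # frequency unknown
    return max(3, min(candidate, n_obs // 3))

# INTERVAL=QTR with at least 39 observations gives the documented 13.
print(default_nlags(39, periods_per_year=4))
```

For quarterly data this reproduces the 4×3+1 = 13 example in the text; with fewer observations the one-third cap or the floor of 3 binds instead.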
The NLAGS= option default is always at least 3.

EXPO Method

Exponential smoothing is used when the METHOD=EXPO option is specified. The term exponential smoothing is derived from the computational scheme developed by Brown and others (Brown and Meyers 1961; Brown 1962). Estimates are computed with updating formulas that are developed across time series in a manner similar to smoothing.

The EXPO method fits a trend model such that the most recent data are weighted more heavily than data in the early part of the series. The weight of an observation is a geometric (exponential) function of the number of periods that the observation extends into the past relative to the current period. The weight function is

$w_\tau = \omega (1 - \omega)^{t - \tau}$,

where $\tau$ is the observation number of the past observation, t is the current observation number, and $\omega$ is the weighting constant specified with the WEIGHT= option.

You specify the model with the TREND= option as follows:

- TREND=1 specifies single exponential smoothing (a constant model)
- TREND=2 specifies double exponential smoothing (a linear trend model)
- TREND=3 specifies triple exponential smoothing (a quadratic trend model)

Updating Equations

The single exponential smoothing operation is expressed by the formula

$S_t = \omega x_t + (1 - \omega) S_{t-1}$,

where $S_t$ is the smoothed value at the current period, t is the time index of the current period, and $x_t$ is the current actual value of the series. The smoothed value $S_t$ is the forecast of $x_{t+1}$ and is calculated as the smoothing constant $\omega$ times the value of the series, $x_t$, in the current period plus $(1 - \omega)$ times the previous smoothed value $S_{t-1}$, which is the forecast of $x_t$ computed at time t-1.
Double and triple exponential smoothing are derived by applying exponential smoothing to the smoothed series, obtaining smoothed values as follows:

$S_t^{[2]} = \omega S_t + (1 - \omega) S_{t-1}^{[2]}$
$S_t^{[3]} = \omega S_t^{[2]} + (1 - \omega) S_{t-1}^{[3]}$

Missing values after the start of the series are replaced with one-step-ahead predicted values, and the predicted value is then applied to the smoothing equations.

The polynomial time-trend parameters CONSTANT, LINEAR, and QUAD in the OUTEST= data set are computed from $S_T$, $S_T^{[2]}$, and $S_T^{[3]}$, the final smoothed values at observation T, the last observation used to fit the model. In the OUTEST= data set, the values of $S_T$, $S_T^{[2]}$, and $S_T^{[3]}$ are identified by _TYPE_=S1, _TYPE_=S2, and _TYPE_=S3, respectively.

Smoothing Weights

Exponential smoothing forecasts are forecasts for an integrated moving-average process; however, the weighting parameter is specified by the user rather than estimated from the data. Experience has shown that good values for the WEIGHT= option are between 0.05 and 0.3. As a general rule, smaller smoothing weights are appropriate for series with a slowly changing trend, while larger weights are appropriate for volatile series with a rapidly changing trend. If unspecified, the weight defaults to $1 - 0.8^{1/trend}$, where trend is the value of the TREND= option. This produces defaults of WEIGHT=0.2 for TREND=1, WEIGHT=0.10557 for TREND=2, and WEIGHT=0.07168 for TREND=3.

Confidence Limits

The confidence limits for exponential smoothing forecasts are calculated as they would be for an exponentially weighted time-trend regression, using the simplifying assumption of an infinite number of observations. The variance estimate is computed using the mean square of the unweighted one-step-ahead forecast residuals.
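The smoothing recursion and the default-weight formula can be sketched in a few lines of plain Python. The start-up convention here (seeding with the first observation) is a common simplification; PROC FORECAST instead initializes from a time-trend regression, and the series values are illustrative:

```python
def exp_smooth(x, w):
    """Single exponential smoothing: S_t = w*x_t + (1-w)*S_{t-1}.
    S_t is the one-step-ahead forecast of x_{t+1}."""
    s = x[0]                 # illustrative start-up value
    smoothed = []
    for v in x:
        s = w * v + (1 - w) * s
        smoothed.append(s)
    return smoothed

series = [10.0, 12.0, 11.0, 13.0, 12.5]
single = exp_smooth(series, w=0.2)
double = exp_smooth(single, w=0.2)   # smooth the smoothed series

# The default-weight formula 1 - 0.8**(1/trend) reproduces the
# documented defaults for TREND=1, 2, 3.
defaults = [1 - 0.8 ** (1 / k) for k in (1, 2, 3)]
print(single, defaults)
```

Note how applying `exp_smooth` to its own output gives the double-smoothed series, exactly as the updating equations above describe.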
More detailed descriptions of the forecast computations can be found in Montgomery and Johnson (1976) and Brown (1962).

Exponential Smoothing as an ARIMA Model

The traditional description of exponential smoothing given in the preceding section is standard in most books on forecasting, and so this traditional version is employed by PROC FORECAST. However, the standard exponential smoothing model is, in fact, a special case of an ARIMA model (McKenzie 1984). Single exponential smoothing corresponds to an ARIMA(0,1,1) model; double exponential smoothing corresponds to an ARIMA(0,2,2) model; and triple exponential smoothing corresponds to an ARIMA(0,3,3) model.

The traditional exponential smoothing calculations can be viewed as a simple and computationally inexpensive method of forecasting the equivalent ARIMA model. The exponential smoothing technique was developed in the 1960s, before computers were widely available and before ARIMA modeling methods were developed. If you use exponential smoothing as a forecasting method, you might consider using the ARIMA procedure to forecast the equivalent ARIMA model as an alternative to the traditional version of exponential smoothing used by PROC FORECAST. The advantages of the ARIMA form are:

- The optimal smoothing weight is automatically computed as the estimate of the moving-average parameter of the ARIMA model.
- For double exponential smoothing, the optimal pair of smoothing weights is computed, and for triple exponential smoothing, the optimal three smoothing weights are computed by the ARIMA method. Most implementations of the traditional exponential smoothing method (including PROC FORECAST) use the same smoothing weight for each stage of smoothing.
- The problem of setting the starting smoothed value is automatically handled by the ARIMA method.
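The single-smoothing case of this equivalence is easy to check directly: with moving-average parameter θ = 1 − ω, the ARIMA(0,1,1) one-step forecast recursion reproduces the exponential smoothing forecasts. A sketch (the start-up value and series are illustrative):

```python
def ses_forecasts(x, w, f0):
    """One-step-ahead forecasts from single exponential smoothing."""
    f = [f0]
    for v in x[:-1]:
        f.append(w * v + (1 - w) * f[-1])
    return f

def arima011_forecasts(x, theta, f0):
    """One-step-ahead forecasts from the ARIMA(0,1,1) recursion:
    forecast(t+1) = x_t - theta * (forecast error at time t)."""
    f = [f0]
    for v in x[:-1]:
        f.append(v - theta * (v - f[-1]))
    return f

x = [3.0, 5.0, 4.0, 6.0, 5.5, 7.0]
w = 0.3
ses = ses_forecasts(x, w, f0=x[0])
arima = arima011_forecasts(x, theta=1 - w, f0=x[0])
```

Algebraically, $\hat{x}_{t+1} = x_t - \theta(x_t - \hat{x}_t) = (1-\theta)x_t + \theta\hat{x}_t$, which is the smoothing recursion with $\omega = 1 - \theta$, so the two forecast sequences agree.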
This is done in a statistically optimal way when the maximum likelihood method is used.
- The statistical estimates of the forecast confidence limits have a sounder theoretical basis.

See Chapter 11, "The ARIMA Procedure," for information on forecasting with ARIMA models. The Time Series Forecasting System provides for exponential smoothing models and allows you to either specify or optimize the smoothing weights. See Chapter 34, "Getting Started with Time Series Forecasting," for details.

WINTERS Method

The WINTERS method uses updating equations similar to exponential smoothing to fit parameters for the model

$x_t = (a + bt)\, s(t) + \epsilon_t$,

where a and b are the trend parameters, and the function s(t) selects the seasonal parameter for the season corresponding to time t.

The WINTERS method assumes that the series values are positive. If negative or zero values are found in the series, a warning is printed and the values are treated as missing.

The preceding standard WINTERS model uses a linear trend. However, PROC FORECAST can also fit a version of the WINTERS method that uses a quadratic trend. When TREND=3 is specified for METHOD=WINTERS, PROC FORECAST fits the following model:

$x_t = (a + bt + ct^2)\, s(t) + \epsilon_t$

The quadratic trend version of the Winters method is often unstable, and its use is not recommended. When TREND=1 is specified, the following constant trend version is fit:

$x_t = a\, s(t) + \epsilon_t$

The default for the WINTERS method is TREND=2, which produces the standard linear trend model.

Seasonal Factors

The notation s(t) represents the selection of the seasonal factor used for different time periods. For example, if INTERVAL=DAY and SEASONS=MONTH, there are 12 seasonal factors, one for each month in the year, and the time index t is measured in days. For any observation, t is determined by the ID variable, and s(t) selects the seasonal factor for the month that t falls in.
For example, if t is 9 February 1993, then s(t) is the seasonal parameter for February.

When there are multiple seasons specified, s(t) is the product of the parameters for the seasons. For example, if SEASONS=(MONTH DAY), then s(t) is the product of the seasonal parameter for the month corresponding to period t and the seasonal parameter for the day of the week corresponding to period t. When the SEASONS= option is not specified, the seasonal factors s(t) are not included in the model. See the section "Specifying Seasonality" later in this chapter for more information on specifying multiple seasonal factors.

Updating Equations

This section shows the updating equations for the Winters method. In the following formulas, $x_t$ is the actual value of the series at time t; $a_t$ is the smoothed value of the series at time t; $b_t$ is the smoothed trend at time t; $c_t$ is the smoothed quadratic trend at time t; and $s_{t-1}(t)$ selects the old value of the seasonal factor corresponding to time t before the seasonal factors are updated.

The estimates of the constant, linear, and quadratic trend parameters are updated using the following equations. For TREND=3, the updating equations additionally involve the quadratic term $c_t$ (the displayed formulas are missing from this copy). For TREND=2,

$a_t = \omega_1 \frac{x_t}{s_{t-1}(t)} + (1 - \omega_1)(a_{t-1} + b_{t-1})$
$b_t = \omega_2 (a_t - a_{t-1}) + (1 - \omega_2) b_{t-1}$

For TREND=1,

$a_t = \omega_1 \frac{x_t}{s_{t-1}(t)} + (1 - \omega_1) a_{t-1}$

In this updating system, the trend polynomial is always centered at the current period, so that the intercept parameter of the trend polynomial for predicted values at times after t is always the updated intercept parameter $a_t$. The predicted value for $\tau$ periods ahead is, in the linear-trend case,

$\hat{x}_{t+\tau} = (a_t + b_t \tau)\, s_t(t+\tau)$

The seasonal parameters are updated when the season changes in the data, using the mean of the ratios of the actual to the predicted values for the season.
For example, if SEASONS=MONTH and INTERVAL=DAY, then when the observation for the first of February is encountered, the seasonal parameter for January is updated using the formula

$s_t(t-1) = \omega_3 \cdot \frac{1}{n} \sum_{i \in \text{January}} \frac{x_i}{a_i} + (1 - \omega_3)\, s_{t-1}(t-1)$,

where t is February 1 of the current year, $s_t(t-1)$ is the seasonal parameter for January updated with the data available at time t, $s_{t-1}(t-1)$ is the seasonal parameter for January of the previous year, and n is the number of January observations.

When multiple seasons are used, $s_t(t)$ is a product of seasonal factors. For example, if SEASONS=(MONTH DAY), then $s_t(t)$ is the product of the seasonal factors for the month and for the day of the week: $s_t(t) = sm_t(t)\, sd_t(t)$.

The factor $sm_t(t)$ is updated at the start of each month using a modification of the preceding formula that adjusts for the presence of the other seasonal factor by dividing the summands $x_i / a_i$ by the corresponding day-of-the-week effect $sd_i(i)$. Similarly, the factor $sd_t(t)$ is updated by dividing by the corresponding month effect, as in

$sd_t(t) = \omega_3 \cdot \frac{x_t}{a_t\, sm_t(t)} + (1 - \omega_3)\, sd_{t-1}(t)$,

where $sd_{t-1}(t)$ is the seasonal factor for the same day of the previous week.

Missing values after the start of the series are replaced with one-step-ahead predicted values, and the predicted value is substituted for $x_i$ and applied to the updating equations.

Normalization

The parameters are normalized so that the seasonal factors for each cycle have a mean of 1.0. This normalization is performed after each complete cycle and at the end of the data. Thus, if INTERVAL=MONTH and SEASONS=MONTH are specified, and a series begins with a July value, then the seasonal factors for the series are normalized at each observation for July and at the last observation in the data set. The normalization is performed by dividing each of the seasonal parameters, and multiplying each of the trend parameters, by the mean of the unnormalized seasonal parameters.
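The multiplicative updating scheme can be sketched compactly. Note that this sketch uses the textbook per-period variant that updates the seasonal factor at every observation, whereas PROC FORECAST updates seasonal factors only when the season changes and then normalizes them; the weights, start-up rule, and data are illustrative:

```python
def holt_winters_mult(x, period, w1, w2, w3):
    """Textbook multiplicative Holt-Winters: level a_t, trend b_t,
    and multiplicative seasonal factors, updated every period."""
    level = sum(x[:period]) / period          # crude start-up values
    trend = 0.0
    season = [v / level for v in x[:period]]
    fitted = []
    for t in range(period, len(x)):
        s = season[t % period]
        fitted.append((level + trend) * s)    # one-step-ahead forecast
        prev = level
        level = w1 * x[t] / s + (1 - w1) * (level + trend)
        trend = w2 * (level - prev) + (1 - w2) * trend
        season[t % period] = w3 * x[t] / level + (1 - w3) * s
    return fitted, level, trend, season

# Illustrative series: rising trend times a period-4 seasonal pattern.
factors = [0.8, 1.2, 0.9, 1.1]
x = [(10.0 + 0.5 * t) * factors[t % 4] for t in range(24)]
fitted, level, trend, season = holt_winters_mult(x, 4, 0.3, 0.2, 0.3)
```

On this rising seasonal series, the level tracks the deseasonalized series, the trend estimate is positive, and the seasonal factors stay positive, as the positivity assumption in the WINTERS method requires.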
Smoothing Weights

The weight for updating the seasonal factors, ω3, is given by the third value specified in the WEIGHT= option. If the WEIGHT= option is not used, then ω3 defaults to 0.25; if the WEIGHT= option is used but does not specify a third value, then ω3 defaults to ω2.

The weight for updating the linear and quadratic trend parameters, ω2, is given by the second value specified in the WEIGHT= option; if the WEIGHT= option does not specify a second value, then ω2 defaults to ω1. The updating weight for the constant parameter, ω1, is given by the first value specified in the WEIGHT= option. As a general rule, smaller smoothing weights are appropriate for series with a slowly changing trend, while larger weights are appropriate for volatile series with a rapidly changing trend. If the WEIGHT= option is not used, then ω1 defaults to $1 - 0.8^{1/trend}$, where trend is the value of the TREND= option. This produces defaults of WEIGHT=0.2 for TREND=1, WEIGHT=0.10557 for TREND=2, and WEIGHT=0.07168 for TREND=3.

The Time Series Forecasting System provides for generating forecast models using the Winters method and allows you to specify or optimize the weights. See Chapter 34, "Getting Started with Time Series Forecasting," for details.

Confidence Limits

A method for calculating exact forecast confidence limits for the WINTERS method is not available. Therefore, the approach taken in PROC FORECAST is to assume that the true seasonal factors have small variability about a set of fixed seasonal factors and that the remaining variation of the series is small relative to the mean level of the series. The equations are written in terms of a mean level μ and fixed seasonal factors I(t), each with a small stochastic perturbation (the explicit equations are missing from this copy). Assuming that the perturbations are small, the forecast equations can be linearized, and only first-order terms in the perturbations are kept. In terms of forecasts for $x_t$, this linearized system is equivalent to a seasonal ARIMA model.
Confidence limits for $x_t$ are based on this ARIMA model and converted into confidence limits for the original series using $s_t(t)$ as estimates of I(t).

The exponential smoothing confidence limits are based on an approximation to a weighted regression model, whereas the preceding Winters confidence limits are based on an approximation to an ARIMA model. You can use METHOD=WINTERS without the SEASONS= option to do exponential smoothing and get confidence limits for the EXPO forecasts based on the ARIMA model approximation. These are generally more pessimistic than the weighted regression confidence limits produced by METHOD=EXPO.

ADDWINTERS Method

The ADDWINTERS method is like the WINTERS method except that the seasonal parameters are added to the trend instead of multiplied with the trend. The default TREND=2 model is as follows:

$x_t = a + bt + s(t) + \epsilon_t$

The WINTERS method updating equations and confidence limit calculations described in the preceding section are modified accordingly for the additive version.

Holt Two-Parameter Exponential Smoothing

If the seasonal factors are omitted (that is, if the SEASONS= option is not specified), the WINTERS (and ADDWINTERS) method reduces to the Holt two-parameter version of exponential smoothing. Thus, the WINTERS method is often referred to as the Holt-Winters method.

Double exponential smoothing is a special case of the Holt two-parameter smoother. The double exponential smoothing results can be duplicated with METHOD=WINTERS by omitting the SEASONS= option and appropriately setting the WEIGHT= option. Letting $\omega_1 = \omega(2-\omega)$ and $\omega_2 = \omega/(2-\omega)$, the following statements produce the same forecasts:

proc forecast method=expo trend=2 weight=ω ... ;
proc forecast method=winters trend=2 weight=(ω1, ω2) ...
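With the seasonal factors dropped, the updating system reduces to the two Holt recursions. A minimal sketch (weights and start-up values are illustrative); on an exactly linear series with matching start-up values, the one-step forecasts are exact:

```python
def holt_linear(x, w1, w2, level0, trend0):
    """Holt two-parameter exponential smoothing: the WINTERS
    updating equations with the seasonal factors omitted."""
    level, trend = level0, trend0
    forecasts = []
    for v in x:
        forecasts.append(level + trend)            # one-step-ahead forecast
        prev = level
        level = w1 * v + (1 - w1) * (level + trend)
        trend = w2 * (level - prev) + (1 - w2) * trend
    return forecasts, level, trend

x = [1.0, 2.0, 3.0, 4.0, 5.0]                      # exactly linear series
forecasts, level, trend = holt_linear(x, 0.5, 0.5, level0=0.0, trend0=1.0)
print(forecasts)
```

Because the start-up level and trend match the series exactly, every forecast error is zero and the level and trend estimates stay on the true line, which illustrates why good starting values matter for these methods.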
Although the forecasts are the same, the confidence limits are computed differently.

Choice of Weights for EXPO, WINTERS, and ADDWINTERS Methods

For the EXPO, WINTERS, and ADDWINTERS methods, properly chosen smoothing weights are of critical importance in generating reasonable results. There are several factors to consider in choosing the weights.

The noisier the data, the lower should be the weight given to the most recent observation. Another factor to consider is how quickly the mean of the time series is changing. If the mean of the series is changing rapidly, relatively more weight should be given to the most recent observation. The more stable the series over time, the lower should be the weight given to the most recent observation.

Note that the smoothing weights should be set separately for each series; weights that produce good results for one series may be poor for another series. Since PROC FORECAST does not have a feature to use different weights for different series, when forecasting multiple series with the EXPO, WINTERS, or ADDWINTERS method it may be desirable to use separate PROC FORECAST steps with different WEIGHT= options.

For the Winters method, many combinations of weight values may produce unstable noninvertible models, even though all three weights are between 0 and 1. When the model is noninvertible, the forecasts depend strongly on values in the distant past, and predictions are determined largely by the starting values. Unstable models usually produce poor forecasts. The Winters model may be unstable even if the weights are optimally chosen to minimize the in-sample MSE. Refer to Archibald (1990) for a detailed discussion of the unstable region of the parameter space of the Winters model.

Optimal weights and forecasts for exponential smoothing models can be computed using the ARIMA procedure.
For more information, see "Exponential Smoothing as an ARIMA Model" earlier in this chapter. The ARIMA procedure can also be used to compute optimal weights and forecasts for seasonal ARIMA models similar to the Winters-type methods. In particular, an ARIMA(0,1,1)×(0,1,1)S model may be a good alternative to the additive version of the Winters method. The ARIMA(0,1,1)×(0,1,1)S model fit to the logarithms of the series may be a good alternative to the multiplicative Winters method. See Chapter 11, "The ARIMA Procedure," for information on forecasting with ARIMA models.

The Time Series Forecasting System can be used to automatically select an appropriate smoothing method as well as to optimize the smoothing weights. See Chapter 34, "Getting Started with Time Series Forecasting," for more information.

Starting Values for EXPO, WINTERS, and ADDWINTERS Methods

The exponential smoothing method requires starting values for the smoothed values $S_0$, $S_0^{[2]}$, and $S_0^{[3]}$. The Winters and additive Winters methods require starting values for the trend coefficients and seasonal factors.

By default, starting values for the trend parameters are computed by a time-trend regression over the first few observations for the series. Alternatively, you can specify the starting values for the trend parameters with the ASTART=, BSTART=, and CSTART= options.

The number of observations used in the time-trend regression for starting values depends on the NSTART= option. For METHOD=EXPO, NSTART= beginning values of the series are used, and the coefficients of the time-trend regression are then used to form the initial smoothed values $S_0$, $S_0^{[2]}$, and $S_0^{[3]}$. For METHOD=WINTERS or METHOD=ADDWINTERS, n complete seasonal cycles are used to compute starting values for the trend parameters, where n is the value of the NSTART= option.
For example, for monthly data the seasonal cycle is one year, so NSTART=2 specifies that the first 24 observations of each series are used for the time-trend regression that calculates the starting values.

The starting values for the seasonal factors for the WINTERS and ADDWINTERS methods are computed from seasonal averages over the first few complete seasonal cycles at the beginning of the series. The number of seasonal cycles averaged to compute starting seasonal factors is controlled by the NSSTART= option. For example, for monthly data with SEASONS=12 or SEASONS=MONTH, the first n January values are averaged to get the starting value for the January seasonal parameter, where n is the value of the NSSTART= option.

The $s_0(i)$ seasonal parameters are set to the ratio (for WINTERS) or difference (for ADDWINTERS) of the mean for the season to the overall mean for the observations used to compute seasonal starting values. For example, if METHOD=WINTERS, INTERVAL=DAY, SEASON=(MONTH DAY), and NSTART=2 (the default), the initial seasonal parameter for January is the ratio of the mean value over days in the first two Januarys after the start of the series (that is, after the first nonmissing value) to the mean value for all days read for initialization of the seasonal factors. Likewise, the initial factor for Sundays is the ratio of the mean value for Sundays to the mean of all days read.

For the ASTART=, BSTART=, and CSTART= options, the values specified are associated with the variables in the VAR statement in the order in which the variables are listed (the first value with the first variable, the second value with the second variable, and so on). If there are fewer values than variables, default starting values are used for the later variables.
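The seasonal start-up computation described above is simple to express directly. A sketch (the function name and data are illustrative; the ratio corresponds to WINTERS and the difference to ADDWINTERS):

```python
def starting_seasonals(x, period, ncycles, multiplicative=True):
    """Starting seasonal factors from the first `ncycles` complete
    cycles: the ratio (WINTERS) or difference (ADDWINTERS) of each
    season's mean to the overall mean of those observations."""
    window = x[: period * ncycles]
    overall = sum(window) / len(window)
    factors = []
    for s in range(period):
        vals = [window[c * period + s] for c in range(ncycles)]
        season_mean = sum(vals) / len(vals)
        factors.append(season_mean / overall if multiplicative
                       else season_mean - overall)
    return factors

# Two complete cycles of a period-4 pattern around an overall mean of 10.
x = [8.0, 12.0, 9.0, 11.0, 8.0, 12.0, 9.0, 11.0, 10.0, 10.0]
mult = starting_seasonals(x, period=4, ncycles=2)
add = starting_seasonals(x, period=4, ncycles=2, multiplicative=False)
```

Observations beyond the first `ncycles` complete cycles (the trailing 10.0 values here) are ignored, mirroring how the NSSTART= window limits the data used for initialization.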
If there are more values than variables, the extra values are ignored.

Copyright © 2003 by SAS Institute Inc., Cary, NC, USA. All rights reserved.