Introductory Econometrics for Finance, Chris Brooks
Lecture Notes on Financial Time Series Analysis
Instructor: Xu Zhandong

Reference texts for Financial Time Series Models:
1. Mills, The Econometric Modelling of Financial Time Series, Economic Science Press (Chinese edition).
2. Handbook of Econometrics (selected chapters).
3. Chris Brooks, Introductory Econometrics for Finance, Cambridge University Press.
4. Zhou Guofu, Financial Econometrics: Empirical Asset Pricing, Peking University Press.
5. Andrew Lo et al., The Econometrics of Financial Markets, Shanghai University of Finance and Economics Press (Chinese edition).
6. Hendry, Dynamic Econometrics, Shanghai People's Publishing House (Chinese edition).
7. Franses, Time Series Models for Business and Economic Forecasting, China Renmin University Press (Chinese edition).
8. Nonlinear Econometric Modeling in Time Series Analysis, Cambridge University Press.
9. Hamilton, Time Series Analysis, China Social Sciences Press (Chinese edition).
10. Lu Maozu, Advanced Time Series Econometrics, Shanghai People's Publishing House.
11. Zhang Xiaotong, Econometric Analysis, Economic Science Press.
12. Dong Wenquan and Gao Tiemei, Business Cycle Fluctuations and Forecasting Methods, Jilin University Press.
13. Wang Shaoping, Selected Frontier Theories and Applications in Macroeconometrics, Nankai University Press.
14. Zhang Shiying and Fan Zhi, Cointegration Theory and Volatility Models: Financial Time Series Analysis and Applications, Tsinghua University Press.
15. Ma Wei, Cointegration Theory and Applications, Nankai University Press.
16. NBER working papers.
17. Journal of Finance.
18. China Finance Academic Research Network.

Teaching objectives: (1) master the basic methods of time series analysis; (2) be able to apply time series methods to solve practical problems.
Course plan:
1. Univariate linear stochastic models: ARMA, ARIMA, and unit root tests.
2. Univariate non-linear stochastic models: the ARCH and GARCH family of models.
3. Spectral analysis methods.
4. Chaos models.
5. Multivariate econometric analysis: VAR models, cointegrated processes, and error correction models.
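To make items 1 and 2 of the plan concrete, here is a minimal Python sketch (not part of the original lecture notes): an ADF unit root test, an ARMA fit, and a GARCH(1,1) fit. It assumes the statsmodels and arch packages and a hypothetical data file prices.csv; all names here are ours.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller
from arch import arch_model

# Hypothetical daily closing prices; any clean price series would do.
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)["close"]
ret = 100 * np.log(prices).diff().dropna()   # percentage log returns

# Topic 1: ADF unit root test on log prices (H0: the series has a unit root).
stat, pval, *_ = adfuller(np.log(prices))
print(f"ADF: stat = {stat:.2f}, p = {pval:.3f}")

# Topic 1: ARMA(1,1) for the conditional mean of returns (ARIMA with d = 0).
print(ARIMA(ret, order=(1, 0, 1)).fit().summary())

# Topic 2: GARCH(1,1) for the conditional variance of returns.
print(arch_model(ret, vol="GARCH", p=1, q=1).fit(disp="off").summary())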
Chapter 1: Introduction. Section 1: A brief overview of finance.
1. Finance: the discipline that studies how people allocate resources optimally in an uncertain environment.
Finance has three core problems: the time value of assets, asset pricing theory (the resource allocation system), and risk management theory.
A Literature Review of Information Geometry in Financial Statistics
10901010127 Wu Biliang
1. Introduction
Information geometry plays a significant role in financial statistics. Over the years, financial statistics has contributed greatly to the development of finance; but with China's accession to the WTO and the accelerating pace of financial deepening, the current financial statistics regime faces a series of latent challenges. Without question, financial statistics reform should keep pace with the times and develop into a comprehensive, diversified statistical system. Information geometry is a theoretical framework developed on the basis of differential geometry. It is applied mainly in statistical analysis, control theory, neural networks, quantum mechanics, information theory, and related fields.
2. Foundations of differential geometry
The foundations of differential geometry are the introductory material on the geometry of curves and surfaces; the main topics include integration on Euclidean space, frame fields, Euclidean geometry, surface integrals, shape operators, the geometry of surfaces, Riemannian geometry, spherical structures on surfaces, and related topics. The revised edition expands on several topics, with greater emphasis on topological properties, properties of geodesics, and singularities of vector fields.
The fundamentals of information geometry include the following aspects.
(1) Differentiable manifolds. A differentiable manifold (also called a smooth manifold) is an important class of spaces in topology and geometry: a topological manifold equipped with a differential structure. Differentiable manifolds are the principal objects of study in differential geometry and differential topology; they generalise the notions of curves and surfaces in three-dimensional Euclidean space, can have any dimension, and need not carry notions of distance or metric.
(2) Tangent vectors and tangent spaces. The tangent vector of a curve at a point can be pictured as the tangent line at that point, together with a direction. A tangent vector of a surface can be viewed as a vector in the tangent plane. For a more general manifold M, a tangent vector of M at a point P is the tangent vector at P of a curve in M passing through P. The notion of a tangent vector is geometric: its definition does not depend on the choice of coordinates, so it is a genuinely geometric quantity, and it is among the most basic concepts of differential geometry. The tangent space at a point is the linear space formed by all tangent vectors at that point. Tangent vectors admit several definitions. Intuitively, if the manifold under study is a surface in three-dimensional space, the tangent vectors at each point are the vectors tangent to the surface, and the tangent space is the plane tangent to the surface at that point. Ordinarily, since every manifold can be embedded in a Euclidean space, the tangent space can also be understood as the affine subspace of that Euclidean space tangent to the manifold at the point. A better definition of the tangent space does not depend on such an embedding: for example, a tangent vector can be defined as an equivalence class of curves passing through the point, or as a directional derivative acting on smooth functions at that point in some direction.
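The two coordinate-free definitions just mentioned can be stated precisely. The following LaTeX fragment is our own formalisation (the notation X_p, T_pM is ours, not the reviewed literature's):

% Tangent vectors at a point p of a smooth manifold M, with a chart (U, \varphi) around p.
% (a) Equivalence classes of curves: for smooth curves with \gamma_1(0) = \gamma_2(0) = p,
\gamma_1 \sim \gamma_2 \iff (\varphi \circ \gamma_1)'(0) = (\varphi \circ \gamma_2)'(0),
% and a tangent vector at p is an equivalence class [\gamma].
% (b) Derivations: a tangent vector is a linear map X_p : C^\infty(M) \to \mathbb{R}
% obeying the Leibniz rule
X_p(fg) = f(p)\, X_p(g) + g(p)\, X_p(f),
% realised, for a curve \gamma with \gamma(0) = p, as the directional derivative
X_p(f) = \left. \frac{d}{dt}\, f(\gamma(t)) \right|_{t=0}.
% The tangent space T_pM is the set of all tangent vectors at p; it is a real
% vector space of dimension \dim M.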
CHAPTER 18 SOLUTIONS TO PROBLEMS

18.1 With z_{t1} and z_{t2} now in the model, we should use one lag each as instrumental variables, z_{t-1,1} and z_{t-1,2}. This gives one overidentifying restriction that can be tested.

18.3 For δ ≠ β, y_t − δz_t = (y_t − βz_t) + (β − δ)z_t, which is an I(0) sequence (y_t − βz_t) plus an I(1) sequence. Since an I(1) sequence has a growing variance, it dominates the I(0) part, and the resulting sum is an I(1) sequence.

18.5 Following the hint, we have

y_t − y_{t-1} = βx_t − βx_{t-1} + βx_{t-1} − y_{t-1} + u_t

or

Δy_t = βΔx_t − (y_{t-1} − βx_{t-1}) + u_t.

Next, we plug in Δx_t = γΔx_{t-1} + v_t to get

Δy_t = β(γΔx_{t-1} + v_t) − (y_{t-1} − βx_{t-1}) + u_t
     = βγΔx_{t-1} − (y_{t-1} − βx_{t-1}) + u_t + βv_t
     ≡ γ_1 Δx_{t-1} + δ(y_{t-1} − βx_{t-1}) + e_t,

where γ_1 = βγ, δ = −1, and e_t = u_t + βv_t.

18.7 If unem_t follows a stable AR(1) process, then this is the null model used to test for Granger causality: under the null that gM_t does not Granger cause unem_t, we can write

unem_t = β_0 + β_1 unem_{t-1} + u_t,
E(u_t | unem_{t-1}, gM_{t-1}, unem_{t-2}, gM_{t-2}, ...) = 0,

and |β_1| < 1. Now, it is up to us to choose how many lags of gM to add to this equation. The simplest approach is to add gM_{t-1} and to do a t test. But we could add a second or third lag (and probably not beyond this with annual data) and compute an F test for joint significance of all lags of gM_t.

18.9 Let ê_{n+1} be the forecast error for forecasting y_{n+1}, and let â_{n+1} be the forecast error for forecasting Δy_{n+1}. By definition, ê_{n+1} = y_{n+1} − f̂_n = y_{n+1} − (ĝ_n + y_n) = (y_{n+1} − y_n) − ĝ_n = Δy_{n+1} − ĝ_n = â_{n+1}, where the last equality follows by the definition of the forecast error for Δy_{n+1}.

SOLUTIONS TO COMPUTER EXERCISES

C18.1 (i) The estimated GDL model is

gprice = .0013 + .081 gwage + .640 gprice_{-1}
        (.0003)  (.031)      (.045)
n = 284, R² = .454.

The estimated impact propensity is .081, while the estimated LRP is .081/(1 − .640) = .225. [Figure: estimated lag distribution for the GDL model; omitted.]

(ii) The estimated IP for the FDL model (from Problem 11.5) lies above the estimated IP for the GDL model. Further, the estimated LRP from the GDL model is much lower than that for the FDL model, which we estimated as 1.172. Clearly we cannot think of the GDL model as a good approximation to the FDL model. One reason these are so different can be seen by comparing the estimated lag distributions. With the FDL, the largest lag coefficient is at the ninth lag, which is impossible with the GDL model (where the largest impact is always at lag zero). It could also be that {u_t} in equation (18.8) does not follow an AR(1) process with parameter ρ, which would cause the dynamic regression to produce inconsistent estimators of the lag coefficients.

(iii) When we estimate the RDL from equation (18.16) we obtain

gprice = .0011 + .090 gwage + .619 gprice_{-1} + .055 gwage_{-1}
        (.0003)  (.031)      (.046)             (.032)
n = 284, R² = .460.

The coefficient on gwage_{-1} is not especially significant, but we include it in obtaining the estimated LRP. The estimated IP is .090 while the LRP is (.090 + .055)/(1 − .619) ≈ .381. These are both slightly higher than what we obtained for the GDL, but the LRP is still well below what we obtained for the FDL in Problem 11.5. While this RDL model is more flexible than the GDL model, it imposes a maximum lag coefficient (in absolute value) at lag zero or one. For the estimates given above, the maximum effect is at the first lag.
(See the estimated lag distribution.) [Figure: estimated lag distribution for the RDL model; omitted.]

C18.3 (i) The estimated equation is

pcip_t = 1.80 + .349 pcip_{t-1} + .071 pcip_{t-2} + .067 pcip_{t-3}
        (0.55)  (.043)           (.045)            (.043)
n = 554, R² = .166, σ̂ = 12.15.

When pcip_{t-4} is added, its coefficient is .0043 with a t statistic of about .10.

(ii) In the model

pcip_t = δ_0 + α_1 pcip_{t-1} + α_2 pcip_{t-2} + α_3 pcip_{t-3} + γ_1 pcsp_{t-1} + γ_2 pcsp_{t-2} + γ_3 pcsp_{t-3} + u_t,

the null hypothesis is that pcsp does not Granger cause pcip. This is stated as H_0: γ_1 = γ_2 = γ_3 = 0. The F statistic for joint significance of the three lags of pcsp_t, with 3 and 547 df, is F = 5.37 with p-value = .0012. Therefore, we strongly reject H_0 and conclude that pcsp does Granger cause pcip.

(iii) When we add Δi3_{t-1}, Δi3_{t-2}, and Δi3_{t-3} to the regression from part (ii) and now test the joint significance of pcsp_{t-1}, pcsp_{t-2}, and pcsp_{t-3}, the F statistic is 5.08. With 3 and 544 df in the F distribution, this gives p-value = .0018, and so pcsp Granger causes pcip even conditional on past Δi3.

C18.5 (i) The estimated equation is

hy6_t = .078 + 1.027 hy3_{t-1} − 1.021 Δhy3_t − .085 Δhy3_{t-1} − .104 Δhy3_{t-2}
       (.028)  (0.016)          (0.038)        (.037)           (.037)
n = 121, R² = .982, σ̂ = .123.

The t statistic for H_0: β = 1 is (1.027 − 1)/.016 ≈ 1.69. We do not reject H_0: β = 1 at the 5% level against a two-sided alternative, although we would reject at the 10% level.

(ii) The estimated error correction model is

Δhy6_t = .070 + 1.259 Δhy3_{t-1} − .816 (hy6_{t-1} − hy3_{t-2}) + .283 Δhy3_{t-2} + .127 (hy6_{t-2} − hy3_{t-3})
        (.049)  (.278)            (.256)                        (.272)            (.256)
n = 121, R² = .795.

Neither of the added terms is individually significant. The F test for their joint significance gives F = 1.35, p-value = .264. Therefore, we would omit these terms and stick with the error correction model estimated in (18.39).

C18.7 (i) The estimated linear trend equation using the first 119 observations and excluding the last 12 months is

chnimp_t = 248.58 + 5.15 t
          (53.20)  (0.77)
n = 119, R² = .277, σ̂ = 288.33.

The standard error of the regression is 288.33.

(ii) The estimated AR(1) model excluding the last 12 months is

chnimp_t = 329.18 + .416 chnimp_{t-1}
          (54.71)  (.084)
n = 118, R² = .174, σ̂ = 308.17.

Because σ̂ is lower for the linear trend model, it provides the better in-sample fit.

(iii) Using the last 12 observations for one-step-ahead out-of-sample forecasting gives an RMSE and MAE for the linear trend equation of about 315.5 and 201.9, respectively. For the AR(1) model, the RMSE and MAE are about 388.6 and 246.1, respectively. In this case, the linear trend is the better forecasting model.

(iv) Using again the first 119 observations, the F statistic for joint significance of feb_t, mar_t, ..., dec_t when added to the linear trend model is about 1.15 with p-value ≈ .328. (The df are 11 and 107.) So there is no evidence that seasonality needs to be accounted for in forecasting chnimp.

C18.9 (i) Using the data up through 1989 gives

ŷ_t = 3,186.04 + 116.24 t + .630 y_{t-1}
     (1,163.09) (46.31)    (.148)
n = 30, R² = .994, σ̂ = 223.95.

(Notice how high the R-squared is. However, it is meaningless as a goodness-of-fit measure because {y_t} has a trend, and possibly a unit root.)

(ii) The forecast for 1990 (t = 32) is 3,186.04 + 116.24(32) + .630(17,804.09) ≈ 18,122.30, because y is $17,804.09 in 1989.
The actual value for real per capita disposable income was $17,944.64, and so the forecast error is −$177.66.

(iii) The MAE for the 1990s, using the model estimated in part (i), is about 371.76.

(iv) Without y_{t-1} in the equation, we obtain

ŷ_t = 8,143.11 + 311.26 t
     (103.38)   (5.64)
n = 31, R² = .991, σ̂ = 280.87.

The MAE for the forecasts in the 1990s is about 718.26. This is much higher than for the model with y_{t-1}, so we should use the AR(1) model with a linear time trend.

C18.11 (i) For lsp500, the ADF statistic without a trend is t = −.79; with a trend, the t statistic is −2.20. These are both well above their respective 10% critical values. In addition, the estimated roots are quite close to one. For lip, the ADF statistic is −1.37 without a trend and −2.52 with a trend. Again, these are not close to rejecting even at the 10% level, and the estimated roots are very close to one.

(ii) The simple regression of lsp500 on lip gives

lsp500_t = −2.402 + 1.694 lip_t
          (.095)   (.024)
n = 558, R² = .903.

The t statistic for lip is over 70, and the R-squared is over .90. These are hallmarks of spurious regressions.

(iii) Using the residuals û_t obtained in part (ii), the ADF statistic (with two lagged changes) is −1.57, and the estimated root is over .99. There is no evidence of cointegration. (The 10% critical value is −3.04.)

(iv) After adding a linear time trend to the regression from part (ii), the ADF statistic applied to the residuals is −1.88, and the estimated root is again about .99. Even with a time trend there is no evidence of cointegration.

(v) It appears that lsp500 and lip do not move together in the sense of cointegration, even if we allow them to have unrestricted linear time trends. The analysis does not point to a long-run equilibrium relationship.

C18.13 (i) The DF statistic is about −3.31, which is to the left of the 2.5% critical value (−3.12), and so, using this test, we can reject a unit root at the 2.5% level. (The estimated root is about .81.)

(ii) When two lagged changes are added to the regression in part (i), the t statistic becomes −1.50, and the root is larger (about .915). Now, there is little evidence against a unit root.

(iii) If we add a time trend to the regression in part (ii), the ADF statistic becomes −3.67, and the estimated root is about .57. The 2.5% critical value is −3.66, and so we are back to fairly convincingly rejecting a unit root.

(iv) The best characterization seems to be an I(0) process about a linear trend. In fact, a stable AR(3) about a linear trend is suggested by the regression in part (iii).

(v) For prcfat_t, the ADF statistic without a trend is −4.74 (estimated root = .62), and with a time trend the statistic is −5.29 (estimated root = .54). Here, the evidence is strongly in favor of an I(0) process whether or not we include a trend.
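As an illustration of how the out-of-sample comparison in C18.7 can be carried out, here is a minimal Python sketch (ours, not part of the solutions manual). It assumes statsmodels and a series chnimp holding the monthly import data; the function name one_step_forecasts is our own.

import numpy as np
import statsmodels.api as sm

def one_step_forecasts(y, holdout=12):
    # Re-estimate each model on all data before t and forecast y_t,
    # for each of the last `holdout` observations.
    yv = np.asarray(y, dtype=float)
    n = len(yv)
    f_trend, f_ar1 = [], []
    for t in range(n - holdout, n):
        ytr = yv[:t]                    # training sample: observations 0..t-1
        trend = np.arange(1, t + 1)     # linear time trend 1..t
        # Linear trend model: y_t = a + b*t + u_t
        m1 = sm.OLS(ytr, sm.add_constant(trend)).fit()
        f_trend.append(m1.params[0] + m1.params[1] * (t + 1))
        # AR(1): y_t = a + rho*y_{t-1} + u_t
        m2 = sm.OLS(ytr[1:], sm.add_constant(ytr[:-1])).fit()
        f_ar1.append(m2.params[0] + m2.params[1] * ytr[-1])
    actual = yv[n - holdout:]
    for name, f in (("linear trend", f_trend), ("AR(1)", f_ar1)):
        e = actual - np.asarray(f)
        print(f"{name}: RMSE = {np.sqrt(np.mean(e ** 2)):.1f}, "
              f"MAE = {np.mean(np.abs(e)):.1f}")

# Usage, with the CHNIMP series loaded into an array or pandas Series:
# one_step_forecasts(chnimp)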
Solutions to the Review Questions at the End of Chapter 7

1. (a) Many series in finance and economics in their levels (or log-levels) forms are non-stationary and exhibit stochastic trends. They have a tendency not to revert to a mean level, but they "wander" for prolonged periods in one direction or the other. Examples would be most kinds of asset or goods prices, GDP, unemployment, money supply, etc. Such variables can usually be made stationary by transforming them into their differences or by constructing percentage changes of them.

(b) Non-stationarity can be an important determinant of the properties of a series. Also, if two series are non-stationary, we may experience the problem of "spurious" regression. This occurs when we regress one non-stationary variable on a completely unrelated non-stationary variable, but the regression yields a reasonably high value of R², apparently indicating that the model fits well. Most importantly, therefore, we are not able to perform any hypothesis tests in models which inappropriately use non-stationary data, since the test statistics will no longer follow the distributions which we assumed they would (e.g. a t or F), so any inferences we make are likely to be invalid.

(c) A weakly stationary process was defined in Chapter 5, and has the following characteristics:

1. E(y_t) = μ
2. E[(y_t − μ)(y_t − μ)] = σ² < ∞
3. E[(y_{t1} − μ)(y_{t2} − μ)] = γ_{t2−t1}  for all t1, t2

That is, a stationary process has a constant mean, a constant variance, and a constant covariance structure. A strictly stationary process could be defined by an equation such as

F_{y_{t1}, y_{t2}, ..., y_{tT}}(y_1, ..., y_T) = F_{y_{t1+k}, y_{t2+k}, ..., y_{tT+k}}(y_1, ..., y_T)

for any t1, t2, ..., tT ∈ Z, any k ∈ Z and T = 1, 2, ..., where F denotes the joint distribution function of the set of random variables. It should be evident from the definitions of weak and strict stationarity that the latter is a stronger definition and is a special case of the former: in the former case, only the first two moments of the distribution have to be constant (i.e. the mean, variances and covariances), whilst in the latter case all moments of the distribution (i.e. the whole of the probability distribution) have to be constant. Both weakly stationary and strictly stationary processes will cross their mean value frequently and will not wander a long way from that mean value.

An example of a deterministic trend process was given in Figure 7.5. Such a process will have random variations about a linear (usually upward) trend. An expression for a deterministic trend process y_t could be

y_t = α + βt + u_t

where t = 1, 2, ... is the trend and u_t is a zero-mean white noise disturbance term. This is called deterministic non-stationarity because the source of the non-stationarity is a deterministic straight-line process.

A variable containing a stochastic trend will also not cross its mean value frequently and will wander a long way from its mean value. A stochastically non-stationary process could be a unit root or explosive autoregressive process such as

y_t = φ y_{t-1} + u_t

where φ ≥ 1.

2. (a) The null hypothesis is of a unit root against a one-sided stationary alternative, i.e. we have

H0: y_t ~ I(1)
H1: y_t ~ I(0)

which is also equivalent to

H0: ψ = 0
H1: ψ < 0

(b) The test statistic is given by ψ̂ / SE(ψ̂), which equals −0.02/0.31 = −0.06. Since this is not more negative than the appropriate critical value, we do not reject the null hypothesis.

(c) We therefore conclude that there is at least one unit root in the series (there could be 1, 2, 3 or more).
What we would do now is regress Δ²y_t on Δy_{t-1} and test whether there is a further unit root. The null and alternative hypotheses would now be

H0: Δy_t ~ I(1), i.e. y_t ~ I(2)
H1: Δy_t ~ I(0), i.e. y_t ~ I(1)

If we rejected the null hypothesis, we would conclude that the first differences are stationary, and hence the original series was I(1). If we did not reject at this stage, we would conclude that y_t must be at least I(2), and we would have to test again until we rejected.

(d) We cannot compare the test statistic with one from a t-distribution, since we have non-stationarity under the null hypothesis and hence the test statistic will no longer follow a t-distribution.

3. Using the same regression as above, but on a different set of data, the researcher now obtains the estimate ψ̂ = −0.52 with standard error 0.16.

(a) The test statistic is calculated as above. The value of the test statistic is −0.52/0.16 = −3.25. We therefore reject the null hypothesis, since the test statistic is smaller (more negative) than the critical value.

(b) We conclude that the series is stationary, since we reject the unit root null hypothesis. We need do no further tests since we have already rejected.

(c) The researcher is correct. One possible source of non-whiteness is when the errors are autocorrelated. This will occur if there is autocorrelation in the original dependent variable in the regression (y_t). In practice, we can easily get around this by "augmenting" the test with lags of the dependent variable to "soak up" the autocorrelation. The appropriate number of lags can be determined using the information criteria.

4. (a) If two or more series are cointegrated, in intuitive terms this implies that they have a long-run equilibrium relationship that they may deviate from in the short run, but which will always be returned to in the long run. In the context of spot and futures prices, the fact that these are essentially prices of the same asset but with different delivery and payment dates means that financial theory would suggest that they should be cointegrated. If they were not cointegrated, this would imply that the series did not contain a common stochastic trend and that they could therefore wander apart without bound even in the long run. If the spot and futures prices for a given asset did separate from one another, market forces would work to bring them back to follow their long-run relationship given by the cost-of-carry formula.

The Engle-Granger approach to cointegration involves first ensuring that the variables are individually unit root processes (note that the test is often conducted on the logs of the spot and futures prices rather than on the price series themselves). Then a regression of one of the series on the other (i.e. regressing spot on futures prices, or futures on spot prices) would be conducted and the residuals from that regression collected. These residuals would then be subjected to a Dickey-Fuller or augmented Dickey-Fuller test. If the null hypothesis of a unit root in the DF test regression residuals is not rejected, it would be concluded that a stationary combination of the non-stationary variables has not been found and thus that there is no cointegration. On the other hand, if the null is rejected, it would be concluded that a stationary combination of the non-stationary variables has been found and thus that the variables are cointegrated.

Forming an error correction model (ECM) following the Engle-Granger approach is a two-stage process.
The first stage is (assuming that the original series are non-stationary) to determine whether the variables are cointegrated. If they are not, there would obviously be no sense in forming an ECM, and the appropriate response would be to form a model in first differences only. If the variables are cointegrated, the second stage of the process involves forming the error correction model which, in the context of spot and futures prices, could be of the form given in equation (7.57) on page 345.

(b) There are many other examples that one could draw from financial or economic theory of situations where cointegration would be expected to be present and where its absence could imply a permanent disequilibrium. It is usually the presence of market forces and investors continually looking for arbitrage opportunities that would lead us to expect cointegration to exist. Good illustrations include equity prices and dividends, or price levels in a set of countries and the exchange rates between them. The latter is embodied in the purchasing power parity (PPP) theory, which suggests that a representative basket of goods and services should, when converted into a common currency, cost the same wherever in the world it is purchased. In the context of PPP, one may expect cointegration since, again, its absence would imply that relative prices and the exchange rate could wander apart without bound in the long run. This would imply that the general price of goods and services in one country could get permanently out of line with those of other countries, when converted into a common currency. This would not be expected to happen, since people would spot a profitable opportunity to buy the goods in the country where they were cheaper and to sell them in the country where they were more expensive until the prices were forced back into line. There is some evidence against PPP, however, and one explanation is that transactions costs, including transportation costs, currency conversion costs, differential tax rates and restrictions on imports, stop full adjustment from taking place. Services are also much less portable than goods, and everybody knows that everything costs twice as much in the UK as anywhere else in the world.
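To make the two-step procedure of part (a) concrete, here is a minimal Python sketch (ours, not Brooks's code), assuming statsmodels and hypothetical pandas Series ls and lf for log spot and log futures prices; all variable names are ours.

import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Step 0: confirm each series is individually I(1): the ADF test should
# fail to reject in levels but reject in first differences.
for name, s in (("log spot", ls), ("log futures", lf)):
    stat, pval, *_ = adfuller(s)
    print(f"ADF, {name} (levels): stat = {stat:.2f}, p = {pval:.3f}")

# Step 1: cointegrating regression, then a DF/ADF test on its residuals.
step1 = sm.OLS(ls, sm.add_constant(lf)).fit()
u_hat = step1.resid
stat, *_ = adfuller(u_hat)
print(f"ADF on residuals: {stat:.2f}")   # compare with Engle-Granger critical
                                         # values, not the standard DF tables

# Step 2 (only if cointegrated): estimate the error correction model
#   d(ls_t) = b0 + b1 * d(lf_t) + lambda * u_hat_{t-1} + e_t
X = pd.concat([lf.diff().rename("d_lf"),
               u_hat.shift(1).rename("u_lag")], axis=1).dropna()
ecm = sm.OLS(ls.diff().loc[X.index], sm.add_constant(X)).fit()
print(ecm.summary())

Note that statsmodels also bundles the residual-based first-stage test as statsmodels.tsa.stattools.coint, which returns the test statistic together with the appropriate Engle-Granger critical values.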
5. (a) The Johansen test is computed in the following way. Suppose we have p variables that we think might be cointegrated. First, ensure that all the variables are of the same order of non-stationarity, and in fact are I(1), since it is very unlikely that variables will be of a higher order of integration. Stack the variables that are to be tested for cointegration into a p-dimensional vector, called, say, y_t. Then construct a p × 1 vector of first differences, Δy_t, and form and estimate the following VAR:

Δy_t = Π y_{t-k} + Γ_1 Δy_{t-1} + Γ_2 Δy_{t-2} + ... + Γ_{k-1} Δy_{t-(k-1)} + u_t

Then test the rank of the matrix Π. If Π is of zero rank (i.e. all the eigenvalues are not significantly different from zero), there is no cointegration; otherwise, the rank will give the number of cointegrating vectors. (You could also go into a bit more detail on how the eigenvalues are used to obtain the rank.)

(b) Repeating the table given in the question, but adding the null and alternative hypotheses in each case, and letting r denote the number of cointegrating vectors: considering each row in the table in turn, and looking at the first one first, the test statistic is greater than the critical value, so we reject the null hypothesis that there are no cointegrating vectors. The same is true of the second row (that is, we reject the null hypothesis of one cointegrating vector in favour of the alternative that there are two). Looking now at the third row, we cannot reject (at the 5% level) the null hypothesis that there are two cointegrating vectors, and this is our conclusion. There are two independent linear combinations of the variables that will be stationary.

(c) Johansen's method allows the testing of hypotheses by considering them effectively as restrictions on the cointegrating vector. The first thing to note is that all linear combinations of the cointegrating vectors are also cointegrating vectors. Therefore, if there are many cointegrating vectors in the unrestricted case and if the restrictions are relatively simple, it may be possible to satisfy the restrictions without causing the eigenvalues of the estimated coefficient matrix to change at all. However, as the restrictions become more complex, "renormalisation" will no longer be sufficient to satisfy them, so that imposing them will cause the eigenvalues of the restricted coefficient matrix to differ from those of the unrestricted coefficient matrix. If the restriction(s) implied by the hypothesis is (are) nearly already present in the data, then the eigenvectors will not change significantly when the restriction is imposed. If, on the other hand, the restriction on the data is severe, then the eigenvalues will change significantly compared with the case when no restrictions were imposed.

The test statistic for testing the validity of these restrictions is given by

−T Σ_{i=r+1}^{p} [ln(1 − λ*_i) − ln(1 − λ_i)] ~ χ²(p − r)

where
λ*_i are the characteristic roots (eigenvalues) of the restricted model,
λ_i are the characteristic roots (eigenvalues) of the unrestricted model,
r is the number of non-zero characteristic roots (eigenvalues) in the unrestricted model, and
p is the number of variables in the system.

If the restrictions are supported by the data, the eigenvalues will not change much when the restrictions are imposed, and so the test statistic will be small.

(d) There are many applications that could be considered, and tests for PPP, for cointegration between international bond markets, and tests of the expectations hypothesis were presented in Sections 7.9, 7.10 and 7.11 respectively. These are not repeated here.

(e) Both Johansen statistics can be thought of as being based on an examination of the eigenvalues of the long-run coefficient or Π matrix. In both cases, the g eigenvalues (for a system containing g variables) are placed in descending order: λ1 ≥ λ2 ≥ ... ≥ λg. The maximal eigenvalue (i.e. the λmax) statistic is based on an examination of each eigenvalue separately, while the trace statistic is based on a joint examination of the g − r smallest eigenvalues. If the test statistic is greater than the critical value from Johansen's tables, reject the null hypothesis that there are r cointegrating vectors in favour of the alternative that there are r + 1 (for λmax) or more than r (for λtrace). The testing is conducted in a sequence and, under the null, r = 0, 1, ..., g − 1, so that the hypotheses for λtrace and λmax are as follows:

Null hypothesis (both tests)    Trace alternative    Max alternative
H0: r = 0                       H1: 0 < r ≤ g        H1: r = 1
H0: r = 1                       H1: 1 < r ≤ g        H1: r = 2
H0: r = 2                       H1: 2 < r ≤ g        H1: r = 3
...                             ...                  ...
H0: r = g − 1                   H1: r = g            H1: r = g

Thus the trace test starts by examining all eigenvalues together to test H0: r = 0, and if this is not rejected, this is the end, and the conclusion would be that there is no cointegration.
If this hypothesis is rejected, the largest eigenvalue would be dropped and a joint test conducted using all of the eigenvalues except the largest to test H0: r = 1. If this hypothesis is not rejected, the conclusion would be that there is one cointegrating vector, while if it is rejected, the second largest eigenvalue would also be dropped and the test statistic recomputed using the remaining g − 2 eigenvalues, and so on. The testing sequence stops when the null hypothesis is not rejected.

The maximal eigenvalue test follows exactly the same testing sequence with the same null hypotheses as the trace test, but the max test considers only one eigenvalue at a time. The null hypothesis that r = 0 is tested using the largest eigenvalue. If this null is rejected, the null that r = 1 is examined using the second largest eigenvalue, and so on.

6. (a) The operation of the Johansen test has been described in the book, and also in question 5, part (a) above. If the rank of the Π matrix is zero, this implies that there is no cointegration, or no common stochastic trends between the series. A finding that the rank of Π is one or two would imply that there were one or two linearly independent cointegrating vectors, or combinations of the series that would be stationary, respectively. A finding that the rank of Π is 3 would imply that the matrix is of full rank. Since the maximum number of cointegrating vectors is g − 1, where g is the number of variables in the system, this does not imply that there are 3 cointegrating vectors. In fact, the implication of a rank of 3 would be that the original series were stationary; provided that unit root tests had been conducted on each series, this would have effectively been ruled out.

(b) The first test, of H0: r = 0, is conducted using the first row of the table. Clearly, the test statistic is greater than the critical value, so the null hypothesis is rejected. Considering the second row, the same is true, so the null of r = 1 is also rejected. Considering now H0: r = 2, the test statistic is smaller than the critical value, so the null is not rejected. We conclude that there are 2 cointegrating vectors, or in other words 2 linearly independent combinations of the non-stationary variables that are stationary.

7. The fundamental difference between the Engle-Granger and Johansen approaches is that the former is a single-equation methodology, whereas Johansen is a systems technique involving the estimation of more than one equation. The two approaches have been described in detail in Chapter 7 and in the answers to the questions above, and will therefore not be covered again. The main (arguably only) advantage of the Engle-Granger approach is its simplicity and its intuitive interpretability. However, it has a number of disadvantages that have been described in detail in Chapter 7, including its inability to detect more than one cointegrating relationship and the impossibility of validly testing hypotheses about the cointegrating vector.
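For comparison with the Engle-Granger sketch above, here is a minimal Johansen sketch in Python (ours, not Brooks's), assuming statsmodels and a hypothetical DataFrame data whose columns are the g candidate I(1) series.

from statsmodels.tsa.vector_ar.vecm import coint_johansen

# det_order=0 includes a constant; k_ar_diff is the number of lagged
# differences in the VECM (k − 1 in the notation of question 5(a)).
res = coint_johansen(data, det_order=0, k_ar_diff=2)

# res.lr1 holds the trace statistics, res.lr2 the max-eigenvalue statistics;
# res.cvt / res.cvm hold the critical values (columns: 10%, 5%, 1%),
# reported sequentially for H0: r = 0, 1, ..., g-1 as in the table above.
for r in range(len(res.lr1)):
    print(f"H0: r = {r}:  trace = {res.lr1[r]:7.2f} (5% cv = {res.cvt[r, 1]:6.2f})"
          f"   max-eig = {res.lr2[r]:7.2f} (5% cv = {res.cvm[r, 1]:6.2f})")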
Studying Finance in the UK: An Overview and Classification of Finance Programmes

Finance programmes abroad are set up along two lines. First, a primarily microeconomic orientation: the study of investment, financing, and related decisions of individual firms. Second, the study of macro-level finance, similar to the way the subject is treated domestically in China.

Specialisations: depending on the specialisation, finance programmes at UK universities are usually housed in the business school, the economics department, or the mathematics department. When consulting subject rankings, three fields need to be considered: accounting and finance, economics, and business.

Finance programmes can be subdivided into: finance, corporate finance, finance and investment, international finance, banking and finance, finance and management, accounting and finance, risk management, real estate finance and investment, finance and economics, and financial engineering.

Finance (general): a comprehensive introduction to the various subfields of finance.
Taking the University of Manchester as an example, the finance programme's curriculum is structured as follows.

Semester 1 core courses: Introductory Research Methods for Accounting and Finance; Essentials of Finance; Derivative Securities.
One elective from: Portfolio Investment; International Macroeconomics and Global Capital Markets; Foundations of Finance Theory.
Semester 2 courses: Financial Econometrics; Advanced Empirical Finance; Corporate Finance.
One elective from: International Finance; Financial Statement Analysis; Real Options in Corporate Finance; Mergers and Acquisitions: Economic and Financial Aspects.
Dissertation.

Corporate finance: addresses problems of corporate governance centred on corporate financial management, financing, and governance structure, applying financial instruments and methods of various kinds for risk management and wealth creation.
Part 1: Regression Analysis with Cross-Sectional Data
Chapter 1: The Nature of Econometrics and Economic Data
1.1 What Is Econometrics?
1.2 Steps in Empirical Economic Analysis
1.3 The Structure of Economic Data
1.4 Causality and the Notion of Ceteris Paribus in Econometric Analysis
Summary
Key Terms
Problems
Computer Exercises
Chapter 2: The Simple Regression Model
Chapter 3: Multiple Regression Analysis: Estimation
Chapter 4: Multiple Regression Analysis: Inference
Chapter 5: Multiple Regression Analysis: OLS Asymptotics
Chapter 6: Multiple Regression Analysis: Further Issues
Chapter 7: Multiple Regression Analysis with Qualitative Information: Binary (or Dummy) Variables
Chapter 8: Heteroskedasticity
Chapter 9: More on Specification and Data Issues
Part 2: Regression Analysis with Time Series Data
Chapter 10: Basic Regression Analysis with Time Series Data
Chapter 11: Further Issues in Using OLS with Time Series Data
Chapter 12: Serial Correlation and Heteroskedasticity in Time Series Regressions
Part 3: Advanced Topics
Chapter 13: Pooling Cross Sections Across Time: Simple Panel Data Methods
Chapter 14: Advanced Panel Data Methods
Chapter 15: Instrumental Variables Estimation and Two Stage Least Squares
Chapter 16: Simultaneous Equations Models
Chapter 17: Limited Dependent Variable Models and Sample Selection Corrections
Chapter 18: Advanced Time Series Topics
Chapter 19: Carrying Out an Empirical Project
Appendix A: Basic Mathematical Tools
Appendix B: Fundamentals of Probability
Appendix C: Fundamentals of Mathematical Statistics
Appendix D: Summary of Matrix Algebra
Appendix E: The Linear Regression Model in Matrix Form
Appendix F: Answers to Chapter Questions
Appendix G: Statistical Tables