Notes on the Current-Value Hamiltonian
Robert Lucas, "On the Mechanics of Economic Development"* (1988)

I. About the Author

Robert E. Lucas, Jr. was born in Yakima, Washington, in 1937. He graduated from Roosevelt High School in Seattle in 1955 and took his undergraduate degree at the University of Chicago in 1959, earning a BA in History. He received his PhD in Economics from Chicago in 1964. From 1963 he taught at the Carnegie Institute of Technology (now Carnegie Mellon University), where his overall view of economic dynamics gradually took shape. His landmark paper "Expectations and the Neutrality of Money", completed in 1970 and published in 1972, made monetary neutrality one of the themes of his Nobel lecture. Lucas returned to Chicago to teach in 1974 and became the John Dewey Distinguished Service Professor there in 1980. In 1995 he was awarded the Nobel Memorial Prize in Economic Sciences for his contributions to "the application and development of the rational expectations hypothesis".

Lucas's foremost theoretical contribution was to found and lead a new school of macroeconomics, the rational expectations school (also known as new classical macroeconomics), championing and developing the use of rational expectations in macroeconomic research and deepening our understanding of economic policy. He also made distinguished contributions to business cycle theory, macroeconomic model building, econometric methods, dynamic economic analysis, and the analysis of international capital flows. His major works include Rational Expectations and Econometric Practice (with T. J. Sargent, University of Minnesota Press, 1981), Studies in Business-Cycle Theory (MIT Press, 1981), Models of Business Cycles (Wiley-Blackwell, 1991), and Recursive Methods in Economic Dynamics (Harvard University Press, 1989).

II. Abstract of the Article

Lucas's "On the Mechanics of Economic Development", published in the Journal of Monetary Economics in 1988, is regarded as the classic article of his human-capital theory of endogenous growth.
Part 3. The Essentials of Dynamic Optimisation

In macroeconomics the majority of problems involve optimisation over time. Typically a representative agent chooses optimal magnitudes of choice variables from an initial time until infinitely far into the future. There are a number of methods for solving these problems. In discrete time the problem can often be solved using a Lagrangean function. In other cases, however, it becomes necessary to use the more sophisticated techniques of Optimal Control Theory or Dynamic Programming. This handout provides an introduction to optimal control theory.

Special Aspects of Optimisation over Time

• Stock-flow variable relationship. All dynamic problems have a stock-flow structure. Mathematically, the flow variables are referred to as control variables and the stock variables as state variables. Not surprisingly, the control variables are used to affect (or steer) the state variables. For example, in any one period the amount of investment and the amount of money growth are flow variables that affect the stock of output and the level of prices, which are state variables.

• The objective function is additively separable. This assumption makes the problem analytically tractable. In essence it allows us to separate the dynamic problem into a sequence of one-period optimisation problems that are separate in the objective function. Don't be confused: the optimisation problems themselves are not separate, because of the stock-flow relationships, but the elements of the objective function are. To be more precise, the objective function is expressed as a sum of functions (in integral or sigma form), each of which depends only on the variables of that period. For example, utility in a given period is independent of utility in the previous period.
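To make additive separability concrete, here is a small numerical sketch (Python; the horizon, discount factor, and stock size are illustrative choices of mine, not from the handout). The objective $\sum_t \beta^t \ln c_t$ is a sum of one-period terms, so changing consumption in a single period changes only that period's term; the periods are linked only through the stock.

```python
import math

beta, T, S0 = 0.9, 20, 10.0   # discount factor, last period, total stock (illustrative)

def objective(path):
    # Additively separable objective: a sum of one-period terms beta**t * ln(c_t),
    # each depending only on that period's consumption.
    return sum(beta**t * math.log(c) for t, c in enumerate(path))

# A path spreading the stock evenly over periods 0..T...
flat = [S0 / (T + 1)] * (T + 1)

# ...and a path that differs in period 3 only (feasibility aside: the point
# here is the structure of the objective, not the constraint).
bumped = list(flat)
bumped[3] *= 2.0

# Only the t = 3 term of the sum changes, by beta**3 * ln(2).
print(objective(bumped) - objective(flat), beta**3 * math.log(2.0))
```

The same decomposition is what lets the Lagrangean and Hamiltonian methods below attach one multiplier per period to the stock-flow constraint while treating the objective term by term.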
1. The Lagrangean Technique

We can apply the Lagrangean technique in the usual way.

Notation: $y_t$ = state variable(s), $\mu_t$ = control variable(s).

The control and state variables are related according to some dynamic equation,

$$y_{t+1} - y_t = f(t, y_t, \mu_t) \qquad (1)$$

Choosing $\mu_t$ allows us to alter the change in $y_t$. If (1) is a production function, we choose $\mu_t$ = investment to alter $y_{t+1} - y_t$, the change in output over the period. Why does time enter on its own? It represents the trend growth rate of output.

We might also have constraints that apply in each single period, such as

$$G(t, y_t, \mu_t) \le 0 \qquad (2)$$

The objective function in discrete time is of the form

$$\sum_{t=0}^{T} F(t, y_t, \mu_t) \qquad (3)$$

Attaching a multiplier $\lambda_t$ to each period's dynamic constraint gives the Lagrangean

$$L = \sum_{t=0}^{T} F(t, y_t, \mu_t) + \sum_{t=0}^{T} \lambda_t \bigl[ f(t, y_t, \mu_t) - (y_{t+1} - y_t) \bigr]$$

and the first order conditions with respect to $\mu_t$ and (for interior periods) $y_t$ are

$$\frac{\partial F}{\partial \mu_t} + \lambda_t \frac{\partial f}{\partial \mu_t} = 0, \qquad \frac{\partial F}{\partial y_t} + \lambda_t\left(\frac{\partial f}{\partial y_t} + 1\right) - \lambda_{t-1} = 0$$

2. Optimal Control Theory

Suppose that our objective is to maximise the discounted utility from the use of an exhaustible resource over a given time interval. In order to optimise we would have to choose the optimal rate of extraction. That is, we would solve the following problem:

$$\max_E \int_0^T U(E)\, e^{-\rho t}\, dt$$

subject to

$$\frac{dS}{dt} = -E(t), \qquad S(0) = S_0, \qquad S(T) \text{ free}$$

where $S(t)$ denotes the stock of a raw material and $E(t)$ the rate of extraction. By choosing the optimal rate of extraction we choose the optimal stock of the resource at each point in time, and so maximise utility. The rate of extraction is called the control variable and the stock of the raw material the state variable. By finding the optimal path for the control variable we can find the optimal path for the state variable. This is how optimal control theory works.

The relationship between the stock and the extraction rate is defined by a differential equation (otherwise it would not be a dynamic problem), called the equation of motion. The last two conditions are boundary conditions: the first tells us the current stock; the second says we are free to choose the stock at the end of the period. If utility is always increased by using the raw material, the terminal stock will optimally be run down to zero.
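Before developing the general solution method, a numerical sketch of this extraction problem may help (the maximum principle in the next section delivers the answer formally). Assume for illustration that $U(E) = \ln E$, so utility always rises with extraction and the stock is exhausted. The optimal plan equalises discounted marginal utility across time, $e^{-\rho t}U'(E(t)) = \lambda$ with $\lambda$ constant, giving $E(t) = e^{-\rho t}/\lambda$; the constant is pinned down by $\int_0^T E\,dt = S_0$. All parameter values below are mine.

```python
import math

rho, T, S0 = 0.05, 30.0, 100.0   # discount rate, horizon, initial stock (illustrative)

# Candidate optimum: equalised discounted marginal utility with U(E) = ln(E)
# implies E(t) = exp(-rho*t) / lam; exhausting the stock pins down lam.
lam = (1 - math.exp(-rho * T)) / (rho * S0)

def E_opt(t):
    return math.exp(-rho * t) / lam

def E_flat(t):
    return S0 / T                 # constant-rate extraction of the same stock

def integrate(g, n=100_000):
    # midpoint rule on [0, T]
    h = T / n
    return sum(g((i + 0.5) * h) * h for i in range(n))

def discounted_utility(E):
    return integrate(lambda t: math.exp(-rho * t) * math.log(E(t)))

# Both paths use the whole stock...
print(integrate(E_opt), integrate(E_flat))
# ...but the declining path yields higher discounted utility,
# because later marginal utility is discounted more heavily.
print(discounted_utility(E_opt), discounted_utility(E_flat))
```

The comparison against the flat path is exactly the perturbation logic used in the heuristic proof below: any feasible deviation from the candidate path lowers the value of the integral.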
Notice that the time period is fixed. This is called a fixed terminal time problem.

The Maximum Principle

In general our prototype problem is to solve

$$\max_u V = \int_0^T F(t, y, u)\, dt$$

subject to

$$\dot y = f(t, y, u), \qquad y(0) = y_0$$

To find the first order conditions that define the extreme values we apply a set of conditions known as the maximum principle.

Step 1. Form the Hamiltonian function, defined as

$$H(t, y, u, \lambda) = F(t, y, u) + \lambda(t)\, f(t, y, u)$$

Step 2. Find

$$\max_u H(t, y, u, \lambda)$$

or, if as usual you are looking for an interior solution, apply the weaker condition

$$\frac{\partial H}{\partial u} = 0$$

along with

$$\frac{\partial H}{\partial y} = -\dot\lambda, \qquad \frac{\partial H}{\partial \lambda} = \dot y, \qquad \lambda(T) = 0$$

Step 3. Analyse these conditions.

Heuristic Proof of the Maximum Principle

In this section we derive the maximum principle, the set of first order conditions that characterises extreme values of the problem under consideration. The basic problem is as above:

$$\max_u V = \int_0^T F(t, y, u)\, dt, \qquad \dot y = f(t, y, u), \qquad y(0) = y_0$$

To derive the maximum principle we attempt to solve the problem using the Calculus of Variations. Essentially the approach is as follows. The dynamic problem is to find the optimal time path for $y(t)$, although note that we use $u(t)$ to steer $y(t)$. It ought to be obvious that

$$\frac{\partial V}{\partial u(t)} = 0$$

will not do. This simply finds the best choice in any one period without regard to any future periods; think of the trade-off between consumption and saving. We need to choose the path of the control (and hence the state) variable that gives us the highest value of the integral subject to the constraints. So we need to optimise in every time period, given the linkages across periods and the constraints. The Calculus of Variations is a way to transform this into a static optimisation problem.

To do this, let $u^*(t)$ denote the optimal path of the control variable and consider each possible path as a variation about the optimal path:

$$u(t) = u^*(t) + \varepsilon P(t) \qquad (3)$$

Here $\varepsilon$ is a small number (in the mathematical sense) and $P(t)$ is a perturbing curve.
This simply means that all paths can be written as variations about the optimal path. Since we can write the control path this way, we can also (indeed must) write the path of the state variable and the boundary points in the same way:

$$y(t) = y^*(t) + \varepsilon q(t) \qquad (4)$$

$$T = T^* + \varepsilon\, \Delta T \qquad (5)$$

$$y_T = y_T^* + \varepsilon\, \Delta y_T \qquad (6)$$

The trick is that all of the choices that define the integral path are now functions of $\varepsilon$. As $\varepsilon$ varies we vary the whole path, including the endpoints, so this device essentially allows us to solve the dynamic problem as a static problem in $\varepsilon$. That is, to find the optimum (extreme value) path we choose the value of $\varepsilon$ that satisfies

$$\frac{\partial V}{\partial \varepsilon} = 0 \qquad (7)$$

given (3) to (6). Since every variable has been written as a function of $\varepsilon$, (7) is the only necessary condition for an optimum that we need. When this condition is applied it yields the various conditions that are referred to as the maximum principle.

In order to show this we first rewrite the problem in a way that allows us to include the Hamiltonian function:

$$\max_u V = \int_0^T \Bigl[ F(t, y, u) + \lambda(t)\bigl( f(t, y, u) - \dot y \bigr) \Bigr]\, dt$$

We can do this because the term multiplying $\lambda(t)$ is always zero provided the equation of motion is satisfied. Equivalently,

$$\max_u V = \int_0^T \bigl[ H(t, y, u, \lambda) - \lambda \dot y \bigr]\, dt \qquad (1)$$

Integrating the second term in the integral by parts¹ we obtain

$$\max_u V = \int_0^T \bigl[ H(t, y, u, \lambda) + \dot\lambda\, y \bigr]\, dt + \lambda(0)\, y_0 - \lambda(T)\, y_T \qquad (2)$$

Now we apply the necessary condition (7) given (3) to (6). Recall that to differentiate an integral with respect to a parameter we use Leibniz's rule (page 9). After simplification this yields

$$\frac{\partial V}{\partial \varepsilon} = \int_0^T \left\{ \left[ \frac{\partial H}{\partial y} + \dot\lambda \right] q(t) + \frac{\partial H}{\partial u}\, P(t) \right\} dt + \bigl[H\bigr]_{t=T}\, \Delta T - \lambda(T)\, \Delta y_T = 0 \qquad (3)$$

The three components of this expression provide the conditions defining the optimum.
In particular,

$$\int_0^T \left\{ \left[ \frac{\partial H}{\partial y} + \dot\lambda \right] q(t) + \frac{\partial H}{\partial u}\, P(t) \right\} dt = 0$$

requires that

$$\frac{\partial H}{\partial y} = -\dot\lambda \qquad\text{and}\qquad \frac{\partial H}{\partial u} = 0$$

which is a key part of the maximum principle.

¹ Just let $\int_0^T \lambda \dot y\, dt = \bigl[\lambda y\bigr]_0^T - \int_0^T \dot\lambda\, y\, dt$.

The Transversality Condition

To derive the transversality condition we analyse the two remaining terms,

$$\bigl[H\bigr]_{t=T}\, \Delta T - \lambda(T)\, \Delta y_T = 0$$

For our prototype problem (fixed terminal time) we must have $\Delta T = 0$. Therefore the transversality condition is simply that

$$\lambda(T) = 0$$

The first two conditions always apply for 'interior' solutions, but the transversality condition has to be defined by the problem at hand. For more on this see Chiang, pages 181-184.

The Current Value Hamiltonian

It is very common in economics to encounter problems whose objective function includes the discount factor $e^{-\rho t}$. It is usually easier to solve these problems using the current value Hamiltonian. For example, an optimal consumption problem may have an objective function looking something like

$$\int_0^\infty U(C(t))\, e^{-\rho t}\, dt$$

where $\rho$ represents the rate of time discount. In general the Hamiltonian for such problems will be of the form

$$H(t, y, u, \lambda) = F(t, y, u)\, e^{-\rho t} + \lambda(t)\, f(t, y, u)$$

The current value Hamiltonian is defined by

$$H_{CV} = F(t, y, u) + m(t)\, f(t, y, u) \qquad (1)$$

where $m(t) = \lambda(t)\, e^{\rho t}$. The advantage of the current value Hamiltonian is that the system defined by the first order conditions is usually easier to solve. Using (1), the original Hamiltonian can be written as

$$H = H_{CV}\, e^{-\rho t} = F(t, y, u)\, e^{-\rho t} + m(t)\, f(t, y, u)\, e^{-\rho t} \qquad (2)$$

The Maximum Conditions

With regard to (2), the first two conditions are essentially unchanged. That is,

$$\frac{\partial H_{CV}}{\partial u} = 0 \qquad\text{and}\qquad \frac{\partial H_{CV}}{\partial m} = \dot y \qquad (3)$$

The third condition is also essentially the same, since

$$\frac{\partial H_{CV}}{\partial y} = e^{\rho t}\, \frac{\partial H}{\partial y} = -e^{\rho t}\, \dot\lambda$$

However, it is usual to write this condition in terms of $\dot m$.
Since

$$\dot\lambda = \dot m\, e^{-\rho t} - \rho\, m\, e^{-\rho t}$$

we can write the third condition as

$$\dot m = -\frac{\partial H_{CV}}{\partial y} + \rho\, m \qquad (4)$$

The endpoint condition can similarly be stated in terms of $m$: since $\lambda = m\, e^{-\rho t}$, the condition $\lambda(T) = 0$ means that

$$m(T)\, e^{-\rho T} = 0 \qquad (5)$$

The Ramsey Model of Optimal Saving

In the macroeconomics class you have the following problem. Choose consumption to maximise

$$U = B \int_0^\infty e^{-\beta t}\, u(c(t))\, dt, \qquad \beta = \rho - \eta - (1 - \theta)\, g,$$

subject to the constraint

$$\dot k(t) = f(k(t)) - c(t) - (\eta + g)\, k(t)$$

In this example the control variable is $c(t)$ and the state variable is $k(t)$. The Hamiltonian is

$$H = B e^{-\beta t}\, u(c(t)) + \lambda(t)\bigl[ f(k(t)) - c(t) - (\eta + g)\, k(t) \bigr]$$

The basic conditions give us

$$\frac{\partial H}{\partial c(t)} = B e^{-\beta t}\, u'(c(t)) - \lambda(t) = 0 \qquad (1)$$

$$\dot\lambda(t) = -\frac{\partial H}{\partial k(t)} = -\lambda(t)\bigl[ f'(k(t)) - (\eta + g) \bigr] \qquad (2)$$

plus the equation of motion,

$$\dot k(t) = f(k(t)) - c(t) - (\eta + g)\, k(t) \qquad (3)$$

Now we must solve these. Differentiate the right-hand side of (1) with respect to time; this gives $\dot\lambda(t)$, which can then be eliminated from (2). The combined condition can then be rewritten as

$$\dot c(t) = -\frac{u'(c(t))}{u''(c(t))}\, \bigl[ r(t) - \rho - \theta g \bigr]$$

where $r(t) = f'(k(t))$. This is the Euler equation. If you assume the instantaneous utility function is CRRA, as in class, and calculate the derivatives, you should get the same expression.
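As a concrete check: with CRRA utility $u(c) = c^{1-\theta}/(1-\theta)$ we have $-u'(c)/u''(c) = c/\theta$, so the Euler equation becomes $\dot c / c = [f'(k) - \rho - \theta g]/\theta$. The sketch below (the Cobb-Douglas production function and all parameter values are my illustrative choices, not the handout's) verifies that consumption growth stops exactly where $f'(k^*) = \rho + \theta g$, and that setting $c^* = f(k^*) - (\eta + g)k^*$ then makes $\dot k = 0$ as well:

```python
alpha, theta, rho, eta, g = 0.33, 2.0, 0.04, 0.01, 0.02   # illustrative parameters

def f(k):        # intensive-form production function (an assumed example)
    return k ** alpha

def f_prime(k):  # marginal product of capital
    return alpha * k ** (alpha - 1)

def c_dot(c, k):
    # Euler equation with CRRA utility: c_dot / c = (f'(k) - rho - theta*g) / theta
    return c * (f_prime(k) - rho - theta * g) / theta

def k_dot(c, k):
    # Equation of motion (3): k_dot = f(k) - c - (eta + g) * k
    return f(k) - c - (eta + g) * k

# Steady state: consumption growth stops where f'(k*) = rho + theta*g,
# and c* is whatever consumption level keeps the capital stock constant.
k_star = (alpha / (rho + theta * g)) ** (1 / (1 - alpha))
c_star = f(k_star) - (eta + g) * k_star

print(c_dot(c_star, k_star), k_dot(c_star, k_star))   # both approximately 0
```

Forward-simulating these two differential equations from other starting points would trace out the familiar saddle-path phase diagram around $(k^*, c^*)$.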