Econ 5140 Macroeconomic Analysis, Fall 2014
Jenny Xu, HKUST

Calvo Model and New Keynesian Framework for Monetary Policy Analysis

Introduction

• In the 1970s, 1980s, and early 1990s, the standard model used for most monetary policy analysis combined the assumption of nominal rigidity with a simple structure linking the quantity of money to aggregate spending.

• This linkage usually ran directly through a quantity theory equation in which nominal demand was equal to the nominal money supply, often with a random disturbance included, or through a traditional textbook IS-LM model.

• While the theoretical foundations of these models were weak, the approach proved remarkably useful in addressing a wide range of monetary policy topics.

• More recently, attention has been placed on ensuring that the model structure is consistent with the underlying behavior of optimizing economic agents.

• The standard approach today builds a dynamic, stochastic, general equilibrium framework based on optimizing behavior, combined with some form of nominal wage and/or price rigidity.

• Early examples of models with these properties include those of Yun (1996), Goodfriend and King (1997), Rotemberg and Woodford (1995, 1997), and McCallum and Nelson (1999).

• This lecture shows how a basic MIU model, combined with the assumption of monopolistically competitive goods markets and price stickiness, can form the basis for a simple linear macroeconomic model that is useful for policy analysis.

Calvo's model

• An alternative model of staggered price adjustment is due to Calvo (1983). He assumed that firms adjust their prices infrequently and that opportunities to adjust arrive as an exogenous Poisson process.

• Each period, there is a constant probability 1 − ω that a firm can adjust its price; the expected time between price adjustments is 1/(1 − ω).

• Because these adjustment opportunities occur randomly, the interval between price changes for an individual firm is a random variable.

• Following Rotemberg (1987), suppose the representative firm i sets its price to minimize a quadratic loss function that depends on the difference between the firm's actual price in period t, p_{it}, and its optimal price, p*_t. This latter price might denote the profit-maximizing price for firm i in the absence of any restrictions or costs associated with price adjustment.

• If the firm can adjust at time t, it will set its price to minimize

\frac{1}{2} E_t \sum_{j=0}^{\infty} \beta^j \left( p_{it+j} - p^*_{t+j} \right)^2    (1)

subject to the assumed process for determining when the firm will next be able to adjust.

• If only the terms in (1) involving the price set at time t are written out, they are

\frac{1}{2} E_t \sum_{j=0}^{\infty} (\omega\beta)^j \left( p_{it} - p^*_{t+j} \right)^2    (2)

where ω^j is the probability that the firm has not adjusted after j periods, so that the price set at t still holds in t + j.

• The first-order condition for the optimal choice of p_{it} requires that

p_{it} \sum_{j=0}^{\infty} \omega^j \beta^j - \sum_{j=0}^{\infty} \omega^j \beta^j E_t p^*_{t+j} = 0    (3)

• Rearranging, and letting x_t denote the optimal price set by all firms adjusting their price,

x_t = (1 - \omega\beta) \sum_{j=0}^{\infty} \omega^j \beta^j E_t p^*_{t+j}    (4)

The price set by the firm at time t is a weighted average of current and expected future values of the target price p*.

• Equation (4) can be rewritten as

x_t = (1 - \omega\beta) p^*_t + \omega\beta E_t x_{t+1}    (5)

• If the price target p* depends on the aggregate price level and output, we can replace p*_t with p_t + γy_t + ε_t, where ε_t is a random disturbance capturing other determinants of p*. In general, the firm's optimal price will be shown to be a function of its marginal cost, which, in turn, can be related to a measure of output.

• With a large number of firms, a fraction 1 − ω will actually adjust their prices each period, and the aggregate price level can be expressed as p_t = (1 − ω)x_t + ωp_{t−1}.
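A quick numerical check (not part of the original notes) is sketched below: it verifies that the recursive form (5) reproduces the discounted-sum form (4) when expectations are dropped, i.e. along an arbitrary perfect-foresight path for p*_t. The parameter values and the p* path are assumed purely for illustration.

```python
import numpy as np

# Verify that the sum form (4) and the recursion (5) agree for the reset price x_t.
# beta, omega and the p* path are arbitrary illustrative choices.
beta, omega = 0.99, 0.75
T = 2000
p_star = 0.01 * np.arange(T) + 0.5 * np.sin(0.1 * np.arange(T))

def x_from_sum(t):
    """Equation (4): x_t = (1 - omega*beta) * sum_j (omega*beta)^j * p*_{t+j}."""
    j = np.arange(T - t)
    return (1 - omega * beta) * np.sum((omega * beta) ** j * p_star[t:])

x = np.array([x_from_sum(t) for t in range(T)])

# Equation (5): x_t = (1 - omega*beta) * p*_t + omega*beta * x_{t+1}
residual = x[:-1] - ((1 - omega * beta) * p_star[:-1] + omega * beta * x[1:])
print(np.max(np.abs(residual)))   # zero up to floating-point error
```

Because both forms are truncated at the same horizon, the residual is zero to machine precision; with an infinite horizon the two representations are identical by construction.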
• We then have the following two equations describing the evolution of x_t and p_t:

x_t = (1 - \omega\beta)(p_t + \gamma y_t + \epsilon_t) + \omega\beta E_t x_{t+1}    (6)

p_t = (1 - \omega) x_t + \omega p_{t-1}    (7)

• Aggregate inflation can then be derived as

\pi_t = \beta E_t \pi_{t+1} + \frac{(1 - \omega)(1 - \omega\beta)}{\omega} (\gamma y_t + \epsilon_t)    (8)

Discussion and Comparison

• This expression is quite similar to the one we derived from Taylor's model: current inflation depends on expected inflation and current output.

• One difference is that the coefficient in front of E_t π_{t+1} is now β rather than 1. This is because in the Taylor model we did not discount the real wage.

• Another difference, noted by Kiley (2002), is that the Taylor-type staggered adjustment model displays less persistence than the Calvo-type model when both are calibrated to the same frequency of price changes.

• In Taylor (1979, 1980), all wages have been adjusted after two periods; no wage remains fixed for more than two periods.

• In Calvo (1983), if we assume ω = 1/2, the expected time between price changes is two periods, so on average prices are adjusted every two periods.

• However, many prices remain fixed for more than two periods. For example, ω³ = 0.125 of all prices remain fixed for at least three periods. In Calvo's model there is a tail of the distribution of prices that have remained fixed for many periods.

• One attractive aspect of Calvo's model is that it shows how the coefficient on output in the inflation equation depends on the frequency with which prices are adjusted.

• A rise in ω means that the average time between price changes for an individual firm increases; output movements then have a smaller impact on current inflation, holding expected future inflation constant.
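The role of ω just described can be made concrete with a few numbers (a small illustrative sketch, not part of the notes; the values of ω and β are assumptions): the expected price duration 1/(1 − ω), the share of prices still fixed after three periods ω³, and the coefficient (1 − ω)(1 − ωβ)/ω multiplying (γy_t + ε_t) in the inflation equation (8).

```python
# How the degree of price stickiness omega maps into price-spell durations
# and into the output coefficient of the inflation equation (8).
beta = 0.99
for omega in (0.5, 0.66, 0.75):
    expected_duration = 1.0 / (1.0 - omega)            # mean time between price adjustments
    still_fixed_3 = omega ** 3                         # share of prices fixed for at least 3 periods
    slope = (1 - omega) * (1 - omega * beta) / omega   # coefficient on (gamma*y_t + eps_t) in (8)
    print(f"omega={omega:.2f}  duration={expected_duration:.2f}  "
          f"fixed >= 3 periods: {still_fixed_3:.3f}  slope={slope:.3f}")
```

With ω = 1/2 the expected duration is two periods and ω³ = 0.125, matching the figures quoted above; raising ω lowers the coefficient on output, as stated in the last bullet.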
A New Keynesian Model for Monetary Policy Analysis

• The model is consistent with a general equilibrium model in which all agents face well-defined decision problems and behave optimally.

• Three key modifications will be made here.

– First, endogenous variations in the capital stock are ignored. This follows McCallum and Nelson (1999), who argue that little is lost for the purpose of short-run business cycle analysis by assuming an exogenous process for the capital stock. At least for the US, there is little relationship between the capital stock and output dynamics at business cycle frequencies. Cogley and Nason (1995) show that the response of investment and the capital stock to productivity shocks contributes little to dynamics in RBC models.

– The second key modification is to incorporate differentiated goods whose individual prices are set by monopolistically competitive firms facing Calvo-type price stickiness.

– Third, monetary policy is represented by a rule for setting the nominal rate of interest. The nominal quantity of money is then endogenously determined to achieve the desired nominal interest rate.

• These modifications yield a framework often referred to as new Keynesian. The resulting version of the MIU model can be linked directly to the more traditional aggregate supply-demand (AS-IS-LM) model. The new Keynesian model will then be used in later lectures to explore a variety of monetary policy issues.

The basic model

• The model consists of households that supply labor, purchase goods for consumption, and hold money and bonds, and firms that hire labor and produce and sell differentiated products in monopolistically competitive goods markets.

• Each firm sets the price of the good it produces, but not all firms reset their price in each period. Households and firms behave optimally: households maximize the expected present value of utility, and firms maximize profits.

• There is also a central bank that controls the nominal rate of interest. The central bank, in contrast to households and firms, is not assumed to behave optimally.

Households

• The preferences of the representative household are defined over a composite consumption good C_t, real money balances M_t/P_t, and leisure 1 − N_t, where N_t is the time devoted to market employment. Households maximize the expected present discounted value of utility:

E_t \sum_{i=0}^{\infty} \beta^i \left[ \frac{C_{t+i}^{1-\sigma}}{1-\sigma} + \frac{\gamma}{1-b} \left( \frac{M_{t+i}}{P_{t+i}} \right)^{1-b} - \chi \frac{N_{t+i}^{1+\eta}}{1+\eta} \right]    (9)

• There is a continuum of firms of measure 1, and firm j produces good c_j. The composite consumption good that enters the household's utility function is defined as

C_t = \left[ \int_0^1 c_{jt}^{\frac{\theta-1}{\theta}} \, dj \right]^{\frac{\theta}{\theta-1}}, \quad \theta > 1    (10)

The parameter θ governs the price elasticity of demand for the individual goods.

• The household's decision problem can be dealt with in two stages.

– First, regardless of the level of C_t the household decides on, it will always be optimal to purchase the combination of individual goods that minimizes the cost of achieving this level of the composite good.

– Second, given the cost of achieving any given level of C_t, the household chooses C_t, N_t, and M_t optimally.

• Dealing first with the problem of minimizing the cost of buying C_t, the household's decision problem is to

\min_{c_{jt}} \int_0^1 p_{jt} c_{jt} \, dj    (11)

subject to

\left[ \int_0^1 c_{jt}^{\frac{\theta-1}{\theta}} \, dj \right]^{\frac{\theta}{\theta-1}} \ge C_t    (12)

where p_{jt} is the price of good j.

• Letting ψ_t be the Lagrangian multiplier on the constraint, the first-order condition for good j is

p_{jt} - \psi_t \left[ \int_0^1 c_{jt}^{\frac{\theta-1}{\theta}} \, dj \right]^{\frac{1}{\theta-1}} c_{jt}^{-\frac{1}{\theta}} = 0    (13)

Rearranging, c_{jt} = (p_{jt}/\psi_t)^{-\theta} C_t. From the definition of the composite level of consumption, we can solve for ψ_t and c_{jt}.

– The Lagrangian multiplier is the appropriately aggregated price index for consumption:

\psi_t = \left[ \int_0^1 p_{jt}^{1-\theta} \, dj \right]^{\frac{1}{1-\theta}} \equiv P_t    (14)

– The demand for good j can then be written as

c_{jt} = \left( \frac{p_{jt}}{P_t} \right)^{-\theta} C_t    (15)

The price elasticity of demand for good j is equal to θ. As θ → ∞, the individual goods become closer and closer substitutes, and as a consequence, individual firms have less market power.

• Given the definition of the aggregate price index P_t, the budget constraint of the household is, in real terms,

C_t + \frac{M_t}{P_t} + \frac{B_t}{P_t} = \frac{W_t}{P_t} N_t + \frac{M_{t-1}}{P_t} + (1 + i_{t-1}) \frac{B_{t-1}}{P_t} + \Pi_t    (16)

where M_t (B_t) is the household's nominal holding of money (one-period bonds). Bonds pay a nominal rate of interest i. Real profits received from firms are equal to Π_t.

• In the second stage of the household's decision problem, consumption, labor supply, money, and bond holdings are chosen to maximize expected utility subject to the budget constraint.

• This leads to the following conditions, which, in addition to the budget constraint, must hold in equilibrium:

C_t^{-\sigma} = \beta (1 + i_t) E_t \left[ \frac{P_t}{P_{t+1}} C_{t+1}^{-\sigma} \right]    (17)

\frac{\gamma (M_t / P_t)^{-b}}{C_t^{-\sigma}} = \frac{i_t}{1 + i_t}    (18)

\frac{\chi N_t^{\eta}}{C_t^{-\sigma}} = \frac{W_t}{P_t}    (19)

These conditions represent

– the Euler condition for the optimal intertemporal allocation of consumption,

– the intratemporal optimality condition setting the MRS between money and consumption equal to the opportunity cost of holding money,

– and the intratemporal optimality condition setting the MRS between leisure and consumption equal to the real wage.
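The first-stage results (10)-(15) can be checked numerically; the sketch below is illustrative only (the price dispersion, the value of θ, and the grid approximation of the continuum of goods are all assumptions). Building the demands (15) from the price index (14) should deliver exactly C_t units of the composite good at total expenditure P_t C_t.

```python
import numpy as np

# Approximate the continuum of goods j in [0, 1] by a fine grid and check
# that the demand functions (15) and the price index (14) are consistent.
theta = 6.0
n = 100_000
p = 1.0 + 0.3 * np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, n))   # assumed dispersed prices p_j

P = np.mean(p ** (1 - theta)) ** (1 / (1 - theta))               # price index, equation (14)
C = 1.0                                                          # target level of the composite good
c = (p / P) ** (-theta) * C                                      # demand curve, equation (15)

composite = np.mean(c ** ((theta - 1) / theta)) ** (theta / (theta - 1))   # equation (10)
expenditure = np.mean(p * c)
print(composite, C)          # the demands deliver exactly C_t of the composite good ...
print(expenditure, P * C)    # ... at minimized cost P_t * C_t
```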
Firms

• Firms maximize profits subject to three constraints. The first is the production function summarizing the available technology. For simplicity, we have ignored capital, so output is a function solely of labor input N_{jt} and an aggregate productivity disturbance Z_t:

c_{jt} = Z_t N_{jt}, \quad E(Z_t) = 1    (20)

where constant returns to scale have been assumed. The second constraint on the firm is the demand curve each faces, given by (15). The third constraint is that each period some firms are not able to adjust their price.

• The specific model of price stickiness used here is due to Calvo (1983). Each period, the firms that adjust their price are randomly selected: a fraction 1 − ω of all firms adjust while the remaining fraction ω do not. The parameter ω is a measure of the degree of nominal rigidity.

• Before analyzing the firm's pricing decision, consider its cost minimization problem, which involves minimizing W_t N_{jt} subject to producing c_{jt} = Z_t N_{jt}. This problem can be written, in real terms, as

\min_{N_{jt}} \left( \frac{W_t}{P_t} \right) N_{jt} + \varphi_t (c_{jt} - Z_t N_{jt})    (21)

where φ_t is equal to the firm's real marginal cost. The first-order condition implies

\varphi_t = \frac{W_t / P_t}{Z_t}    (22)

• The firm's pricing decision problem then involves picking p_{jt} to maximize

E_t \sum_{i=0}^{\infty} \omega^i \Delta_{i,t+i} \left[ \left( \frac{p_{jt}}{P_{t+i}} \right) c_{jt+i} - \varphi_{t+i} c_{jt+i} \right]    (23)

where the discount factor Δ_{i,t+i} is given by β^i (C_{t+i}/C_t)^{−σ}. Using the demand curve (15) to eliminate c_{jt}, this objective function can be written as

E_t \sum_{i=0}^{\infty} \omega^i \Delta_{i,t+i} \left[ \left( \frac{p_{jt}}{P_{t+i}} \right)^{1-\theta} - \varphi_{t+i} \left( \frac{p_{jt}}{P_{t+i}} \right)^{-\theta} \right] C_{t+i}    (24)

While individual firms produce differentiated products, they all have the same production technology and face demand curves with constant and equal demand elasticities.

• All firms adjusting in period t face the same problem, so all adjusting firms will set the same price. Let P*_t be the optimal price chosen by all firms adjusting at time t. The first-order condition for the optimal choice of P*_t is

\frac{P^*_t}{P_t} = \frac{\theta}{\theta-1} \, \frac{ E_t \sum_{i=0}^{\infty} \omega^i \beta^i C_{t+i}^{1-\sigma} \varphi_{t+i} (P_{t+i}/P_t)^{\theta} }{ E_t \sum_{i=0}^{\infty} \omega^i \beta^i C_{t+i}^{1-\sigma} (P_{t+i}/P_t)^{\theta-1} }    (25)

• Consider the case in which all firms are able to adjust their prices every period (ω = 0); then (25) reduces to

\frac{P^*_t}{P_t} = \frac{\theta}{\theta-1} \varphi_t = \mu \varphi_t    (26)

Each firm sets its price P*_t equal to a markup μ > 1 over its nominal marginal cost P_t φ_t. Because price exceeds marginal cost, output will be inefficiently low.

• When prices are flexible, all firms charge the same price. In this case, P*_t = P_t and φ_t = 1/μ. Using the definition of real marginal cost, this means

\frac{W_t}{P_t} = \frac{Z_t}{\mu}    (27)

in a flexible-price equilibrium.

• However, the real wage must also equal the marginal rate of substitution between leisure and consumption to be consistent with household optimization. This condition implies that

\frac{W_t}{P_t} = \frac{Z_t}{\mu} = \frac{\chi N_t^{\eta}}{C_t^{-\sigma}}    (28)

• Let x̂_t denote the percentage deviation of a variable X_t around its steady state, and let the superscript f denote the flexible-price equilibrium. Approximating (28) around the steady state yields ηn̂^f_t + σĉ^f_t = ẑ_t. From the production function, ŷ^f_t = n̂^f_t + ẑ_t, and because output is equal to consumption in equilibrium, ŷ^f_t = ĉ^f_t.

• Combining these conditions, flexible-price equilibrium output ŷ^f_t can be expressed as

\hat{y}^f_t = \frac{1+\eta}{\sigma+\eta} \hat{z}_t    (29)

When prices are sticky, output can differ from the flexible-price equilibrium level.

• The average price in period t satisfies

P_t^{1-\theta} = (1-\omega)(P^*_t)^{1-\theta} + \omega P_{t-1}^{1-\theta}    (30)

• Equations (25) and (30) can be approximated around a zero-average-inflation, steady-state equilibrium to obtain an expression for aggregate inflation of the form

\pi_t = \beta E_t \pi_{t+1} + \tilde{\kappa} \hat{\varphi}_t    (31)

where κ̃ = (1 − ω)(1 − βω)/ω is an increasing function of the fraction of firms able to adjust each period, and φ̂_t is real marginal cost, expressed as a percentage deviation around its steady-state value.

• Equation (31) is often referred to as the new Keynesian Phillips curve. Unlike more traditional Phillips curve equations, the new Keynesian Phillips curve implies that real marginal cost is the correct driving variable for the inflation process. It also implies that the inflation process is forward-looking, with current inflation a function of expected future inflation.
• Solving (31) forward,

\pi_t = \tilde{\kappa} \sum_{i=0}^{\infty} \beta^i E_t \hat{\varphi}_{t+i}    (32)

which shows that inflation is a function of the present discounted value of current and future real marginal costs. This derivation reveals how κ̃, the impact of real marginal cost on inflation, depends on the structural parameters β and ω.

• We can show that inflation is also related to an output gap measure. We rewrite

\hat{\varphi}_t = \gamma (\hat{y}_t - \hat{y}^f_t)    (33)

where γ = σ + η. This implies that the inflation adjustment equation becomes

\pi_t = \beta E_t \pi_{t+1} + \kappa x_t    (34)

where κ = γκ̃ = γ(1 − ω)(1 − βω)/ω and x_t = ŷ_t − ŷ^f_t is the gap between actual output and flexible-price equilibrium output.

• Equation (34) relates output, in the form of the deviation around the level of output that would occur in the absence of nominal price rigidities, to inflation. It forms one of the two key components of an optimizing model that can be used for monetary analysis. The other component is a linearized version of the household's Euler condition.

Equilibrium

• We now have all the components of a simple general equilibrium model that is consistent with optimizing behavior on the part of households and firms. Because consumption is equal to output in this model, (17), (25), and (30) provide equilibrium conditions that determine output, the price set by firms adjusting their price, and the aggregate price level once the behavior of the nominal rate of interest is specified. With the nominal interest rate treated as the monetary policy instrument, (18) simply determines the nominal quantity of money in equilibrium.

• Equation (17) can be approximated around the zero-inflation steady state as

\hat{y}_t = E_t \hat{y}_{t+1} - \frac{1}{\sigma} (\hat{\imath}_t - E_t \pi_{t+1})    (35)

• Expressing this in terms of the output gap x_t = ŷ_t − ŷ^f_t,

x_t = E_t x_{t+1} - \frac{1}{\sigma} (\hat{\imath}_t - E_t \pi_{t+1}) + u_t    (36)

where u_t = E_t ŷ^f_{t+1} − ŷ^f_t depends only on the exogenous productivity disturbance. Combining (36) and (34) gives a simple two-equation, forward-looking, rational-expectations model for inflation and the output gap measure x_t. For convenience, (34) is repeated here:

\pi_t = \beta E_t \pi_{t+1} + \kappa x_t    (37)

• This two-equation model represents the equilibrium conditions for a well-specified general equilibrium model. Equations (35) and (36) represent the demand side of the economy (an expectational, forward-looking IS curve), while the new Keynesian Phillips curve (37) corresponds to the supply side.

• The model can be closed by assuming that the central bank implements monetary policy through control of the nominal interest rate.

– The linearized version of (18) can then be used to find the equilibrium nominal money supply.

– Alternatively, if the central bank implements monetary policy by setting a path for the nominal supply of money, (36) and (37), together with the linearized version of (18), determine x_t, π_t, and î_t.

• If a policy rule for the nominal interest rate is added to the model, this must be done with care to ensure that the policy rule does not render the system unstable or introduce multiple equilibria. For example, suppose monetary policy is represented by the following rule for î_t:

\hat{\imath}_t = \rho \hat{\imath}_{t-1} + v_t    (38)

With this policy rule, there exist multiple bounded equilibria and the equilibrium is locally indeterminate. Stationary sunspot equilibria are possible.

• Bullard and Mitra (2002) show that a unique stationary equilibrium exists when the interest rate instead responds to inflation, î_t = δπ_t + v_t, as long as δ > 1. Setting δ > 1 is referred to as the Taylor principle, because John Taylor was the first to stress the importance of interest rate rules that call for responding more than one-for-one to changes in inflation.

• Suppose that, instead of reacting solely to inflation, the central bank responds to both inflation and the output gap according to

\hat{\imath}_t = \delta_{\pi} \pi_t + \delta_x x_t + v_t    (39)

This type of policy rule is called a Taylor rule (Taylor 1993), and variants of it have been shown to provide a reasonable empirical description of the policy behavior of many central banks (Clarida, Gali, and Gertler 2000). With this policy rule, stability now depends on both of the policy parameters δ_π and δ_x.
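This stability claim can be checked numerically. The sketch below (not from the notes; all parameter values are assumed) writes the system (36)-(37), closed with the Taylor rule (39), in the form A E_t z_{t+1} = B z_t for z_t = (x_t, π_t)'. Since both x_t and π_t are non-predetermined, a unique bounded equilibrium requires both eigenvalues of A^{-1}B to lie outside the unit circle.

```python
import numpy as np

# Determinacy check for the two-equation model (36)-(37) under the Taylor rule (39),
# i_t = delta_pi * pi_t + delta_x * x_t + v_t (shocks omitted, assumed parameter values).
beta, sigma, kappa = 0.99, 1.0, 0.17

def eigen_moduli(delta_pi, delta_x):
    # A * E_t[z_{t+1}] = B * z_t with z = (x, pi)'
    A = np.array([[1.0, 1.0 / sigma],
                  [0.0, beta]])
    B = np.array([[1.0 + delta_x / sigma, delta_pi / sigma],
                  [-kappa, 1.0]])
    return np.abs(np.linalg.eigvals(np.linalg.solve(A, B)))

print(eigen_moduli(1.5, 0.5))   # both moduli > 1: unique stationary equilibrium
print(eigen_moduli(0.8, 0.0))   # one modulus < 1: indeterminacy (weak response to inflation)
```

For δ_π = 1.5, δ_x = 0.5 both moduli exceed one, while δ_π = 0.8, δ_x = 0 leaves one eigenvalue inside the unit circle, in line with the Taylor principle discussed above.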
The monetary transmission mechanism

• The model consisting of (36) and (37) assumes that the impact of monetary policy on output and inflation operates through the real rate of interest.

• As long as the central bank is able to affect the real interest rate through its control of the nominal interest rate, monetary policy can affect real output.

• The basic interest rate transmission mechanism for monetary policy could be extended to include effects on investment spending if capital were reintroduced into the model (Christiano, Eichenbaum, and Evans 2001). Increases in the real interest rate would reduce the demand for capital and lead to a fall in investment spending.

• In addition to these interest rate channels, monetary policy is often thought to affect the economy either indirectly through credit or directly through the quantity of money.

– Measures of money and bank credit move together; this is called the credit channel of the monetary transmission process.

– It is sometimes argued that changes in the money supply have direct effects on aggregate demand that are independent of the interest rate channels that operate on consumption. Real money holdings represent part of household wealth; an increase in real balances should induce an increase in consumption spending through a wealth effect. This channel is often called the Pigou effect.
Tidal Locking

Tidal locking, also known as gravitational locking, occurs when the gravitational forces between two objects cause one to always face the other. This phenomenon is most commonly seen in the relationship between a planet and its moon. The most well-known example of tidal locking is the Moon, which always shows the same face to the Earth. This means that the Moon rotates on its axis in the same amount of time it takes to orbit the Earth.

Tidal locking occurs because of the gravitational interaction between the two objects. As the moon orbits the Earth, the gravitational force exerted by the Earth causes the moon to bulge slightly. This bulge creates a gravitational force that slows down the moon's rotation until it eventually becomes locked into the same period of rotation and orbit. This process takes millions of years to occur.

The same process occurs with exoplanets and their host stars. In fact, many exoplanets have been discovered to be tidally locked to their host stars. This means that one side of the planet always faces the star, while the other side is in perpetual darkness. This has significant implications for the climate and habitability of these exoplanets.

Tidal locking can have a profound impact on the climate of a tidally locked planet. The side that faces the star will be much hotter than the side that faces away. This drastic temperature difference can lead to extreme weather patterns and winds. Additionally, the lack of a day-night cycle can also have implications for the planet's atmosphere and weather systems.

In terms of habitability, a tidally locked planet may have regions that are more temperate and suitable for life. The region known as the terminator, which is the boundary between the light and dark sides of the planet, may have conditions that are conducive to life. However, the extreme conditions on the hot and cold sides of the planet may make it difficult for life to thrive.

In conclusion, tidal locking is a fascinating phenomenon that has significant implications for the climate and habitability of a planet. Understanding the processes that lead to tidal locking can provide valuable insights into the potential habitability of exoplanets and the conditions for life beyond our solar system.
Perseverance is a vital quality that can lead to the creation of legends. It is the unwavering commitment to a goal despite the obstacles and setbacks one may encounter. Here are some key points to consider when discussing the role of perseverance in achieving legendary success:

1. Definition of Perseverance: Perseverance is the act of continuing in a course of action in spite of difficulty or opposition. It is the ability to keep going even when the odds are stacked against you.

2. Historical Examples: Throughout history, many legends have been created by individuals who demonstrated exceptional perseverance. For example, Thomas Edison is often cited for his persistence in inventing the light bulb, despite numerous failed attempts.

3. Overcoming Failure: Perseverance is closely linked to the ability to learn from failure. It is not the absence of failure but the resilience to continue after failure that defines a person's perseverance.

4. Growth Mindset: A growth mindset is essential for perseverance. It is the belief that abilities and intelligence can be developed through dedication and hard work. This mindset encourages individuals to embrace challenges and persist in the face of setbacks.

5. Setting Goals: Perseverance is often fueled by clear and achievable goals. Setting realistic yet challenging objectives can provide the motivation needed to push through difficult times.

6. Developing Resilience: Resilience is the inner strength that helps individuals bounce back from adversity. It is a key component of perseverance and is developed through experiences that test one's resolve.

7. Support Systems: Having a support system in place can significantly enhance one's perseverance. This can include mentors, friends, and family who provide encouragement and guidance.

8. Self-Discipline: Perseverance requires self-discipline, which is the ability to control one's emotions and actions in order to achieve long-term goals.

9. Celebrating Small Victories: Recognizing and celebrating small achievements along the way can help maintain momentum and build confidence in one's ability to persevere.

10. Adaptability: Perseverance often involves adapting to changing circumstances and finding new ways to approach a problem when the initial plan does not work.

11. Inspiration: Drawing inspiration from others who have demonstrated perseverance can be a powerful motivator. Stories of individuals who have overcome significant challenges can provide a blueprint for one's own journey.

12. Legacy: The desire to create a lasting legacy can be a driving force behind perseverance. Many individuals push through challenges to ensure that their work will have a lasting impact.

In conclusion, perseverance is a cornerstone of legendary achievements. It is the relentless pursuit of a vision, the courage to face adversity, and the determination to keep moving forward, even when the path is unclear. By cultivating perseverance, individuals can transform their dreams into reality and create a legacy that inspires future generations.
Perseverance is a quality that is often celebrated in literature and life as a key to success. It is the unwavering determination to continue in the face of obstacles and challenges. Here are some points to consider when writing an essay on the topic of persistence:

1. Definition of Perseverance: Start by defining what perseverance means. It is the continuous effort to achieve a goal despite difficulties, delays, or even repeated failures.

2. Importance of Perseverance: Discuss why perseverance is important. It is a critical trait for personal growth and success in various aspects of life, including education, career, and personal relationships.

3. Historical Examples: Provide examples of individuals who have demonstrated perseverance. Think of historical figures like Thomas Edison, who failed a thousand times before inventing the light bulb, or Abraham Lincoln, who faced numerous setbacks before becoming the President of the United States.

4. Overcoming Failure: Perseverance often involves learning from failure. Discuss how failure can be a stepping stone to success when approached with a persistent mindset.

5. Strategies for Perseverance: Offer practical tips on how to cultivate perseverance. This could include setting realistic goals, breaking down larger tasks into smaller steps, seeking support from others, and maintaining a positive attitude.

6. The Role of Passion: Explain how passion fuels perseverance. When one is deeply invested in a goal, they are more likely to persist through challenges.

7. Coping with Setbacks: Discuss the psychological aspects of perseverance, such as resilience and the ability to cope with setbacks without giving up.

8. The Power of Habit: Perseverance can become a habit. When one consistently pushes through difficulties, it becomes easier to do so in the future.

9. The Impact on Others: Consider the influence of a persevering individual on those around them. Perseverance can inspire others and create a culture of determination and success.

10. Conclusion: Summarize the importance of perseverance and reiterate its role in achieving long-term goals. Encourage readers to adopt a persevering attitude in their own lives.

Remember to use a variety of sentence structures and vocabulary to make your essay engaging and to avoid repetition. Provide clear transitions between paragraphs to ensure a smooth flow of ideas. Lastly, proofread your work to correct any grammatical errors and enhance the overall quality of your writing.
The Tidal Lock

The concept of tidal locking is a fascinating phenomenon that has captivated the minds of scientists and astronomers for centuries. This unique process occurs when a celestial body's rotational period is equal to its orbital period around another body, resulting in one side of the object permanently facing the other. This phenomenon is particularly prevalent in binary star systems and planets with moons, where the gravitational interactions between the bodies lead to this remarkable synchronization.

One of the most well-known examples of tidal locking is the relationship between the Earth and the Moon. The Moon's rotation period is exactly the same as its orbital period around the Earth, causing the same side of the Moon to always face our planet. This tidal lock has had a profound impact on the Earth-Moon system, shaping the dynamics and evolution of both bodies.

The tidal locking process is driven by the gravitational forces exerted by the larger body on the smaller one. As the smaller body orbits the larger one, the uneven distribution of mass within the smaller body creates a gravitational imbalance. This imbalance causes a slight bulge on the side of the smaller body closest to the larger one, and a smaller bulge on the opposite side. These bulges, known as tidal bulges, create a torque that acts to slow down the smaller body's rotation until it matches its orbital period.

Over time, as the smaller body's rotation slows down, the tidal bulges become more pronounced, further reinforcing the tidal locking process. This feedback loop continues until the smaller body's rotation period is exactly equal to its orbital period, resulting in the permanent synchronization of the two bodies.

The consequences of tidal locking are far-reaching and have significant implications for the habitability and evolution of celestial bodies. In the case of the Earth-Moon system, the tidal locking has led to the stabilization of the Earth's axial tilt, which is crucial for the maintenance of a stable climate and the development of complex life. The Moon's gravitational influence also plays a crucial role in the generation of tides, which have shaped the coastlines and influenced the evolution of marine life on Earth.

Beyond the Earth-Moon system, tidal locking is observed in many other celestial bodies throughout the universe. For example, many of the moons of Jupiter and Saturn are tidally locked to their parent planets, and some exoplanets in binary star systems are also believed to be tidally locked to their host stars.

The study of tidal locking has also led to important insights into the formation and evolution of planetary systems. By understanding the dynamics of tidal locking, scientists can better model the long-term stability and habitability of exoplanetary systems, as well as the potential for the development of life on other worlds.

In conclusion, the tidal locking phenomenon is a remarkable example of the complex and intricate processes that shape the universe around us. From the Earth-Moon system to the most distant exoplanets, this fundamental principle of gravitational interactions continues to captivate and inspire scientists, offering new insights into the nature of our cosmos and the potential for life beyond our own planet.
Title: Crafting Engaging Openings in English Journalism
In journalism, the opening of an article is paramount. It serves as the gateway to the reader's engagement, setting the tone, capturing attention, and succinctly presenting the essence of the story. Crafting an effective opening requires finesse, creativity, and a keen understanding of the audience. Let's delve into the art of composing compelling openings in English journalism.

One effective approach to begin an article is by employing a captivating anecdote or a vivid description. This technique immediately draws the reader into the narrative, evoking curiosity and intrigue. For instance, "Amidst the bustling streets of New York City, a lone saxophonist's melancholic melody echoed through the urban canyon, capturing the essence of a city in perpetual motion."

Another strategy is to pose a thought-provoking question that stimulates critical thinking and encourages readers to contemplate the subject matter. For example, "What if our smartphones could predict our every move? This tantalizing prospect is no longer confined to the realms of science fiction but is rapidly becoming a reality in the age of artificial intelligence."

Moreover, employing a startling statistic or a surprising fact can instantly grab attention and underscore the significance of the article's topic. Consider this opening gambit: "Every year, over 8 million tons of plastic waste find their way into the world's oceans, posing a grave threat to marine life and ecosystems. Yet, amidst this environmental crisis, there lies a beacon of hope..."

Furthermore, leveraging a quotation from a notable figure or an expert in the field can lend credibility and authority to the article while providing a compelling perspective. For instance, "As Nobel laureate Albert Einstein once remarked, 'Imagination is more important than knowledge.' In today's rapidly evolving landscape of technological innovation, his words resonate more than ever..."

Additionally, beginning with a rhetorical flourish or a rhetorical question can infuse the opening with rhetorical power, engaging the reader's emotions and intellect. For instance, "What does it mean to be truly free? Is it merely the absence of physical constraints, or does it encompass a deeper sense of autonomy and self-determination?"

In conclusion, the opening of a journalistic piece plays a pivotal role in captivating readers' attention and setting the stage for the ensuing narrative. Whether through the use of anecdotes, questions, statistics, quotations, or rhetorical devices, crafting an engaging opening requires skillful storytelling and a profound understanding of the audience. By mastering the art of composing compelling openings, journalists can effectively draw readers into their articles and leave a lasting impact on their audience.
When writing an essay in English about "My Model", it's important to consider the context in which the term "model" is being used. Here are a few different approaches you might take, depending on the specific meaning of "model" in your essay:

1. A Role Model:
Begin by introducing who your role model is and why they are important to you. Discuss the qualities and achievements of your role model that you admire. Explain how their actions or life story have influenced your own life or goals.

Example Paragraph:
My role model is Malala Yousafzai, a Pakistani activist for female education and the youngest Nobel Prize laureate. Her courage and determination to fight for girls' education rights in the face of adversity have deeply inspired me. Malala's story has taught me the importance of standing up for what I believe in, even when it is difficult.

2. A Fashion Model:
Describe the physical attributes and style of the model. Discuss the impact they have had on the fashion industry or their unique contributions to it. Explain why you find their work or presence in the industry notable.

Example Paragraph:
Kendall Jenner is a fashion model who has made a significant impact on the industry with her unique style and presence. Her tall and slender physique, combined with her ability to carry off diverse looks, has made her a favorite among designers and fashion enthusiasts alike. I admire her for her versatility and the way she uses her platform to promote body positivity.

3. A Model in Science or Technology:
Introduce the model as a theoretical framework or a practical tool used in a specific field. Explain the principles behind the model and how it is applied. Discuss the benefits or limitations of the model and its implications in the real world.

Example Paragraph:
The Standard Model in physics is a theoretical framework that describes three of the four known fundamental forces (excluding gravity) and classifies all known elementary particles. It has been instrumental in understanding the behavior of subatomic particles and predicting the existence of new particles, such as the Higgs boson. However, the model's inability to incorporate gravity or dark matter has led to ongoing research for a more comprehensive theory.

4. A Model in Business or Economics:
Introduce the business or economic model and its purpose. Explain how the model works and the strategies it employs. Discuss the success or challenges associated with the model and its potential for future growth.

Example Paragraph:
The subscription-based business model has become increasingly popular in recent years, particularly in the software industry. Companies like Adobe have transitioned from selling packaged software to offering services on a subscription basis, allowing for continuous revenue streams and a more predictable income. This model has been successful in fostering customer loyalty and providing a steady income, although it requires ongoing innovation to maintain customer interest.

5. A Model in Art or Design:
Describe the aesthetic or functional qualities of the model. Discuss the creative process or design principles that inform the model. Explain the cultural or historical significance of the model and its influence on contemporary art or design.

Example Paragraph:
The Eames Lounge Chair, designed by Charles and Ray Eames, is a model of modern furniture that has become an icon of mid-century design. Its elegant form, made from molded plywood and leather, exemplifies the designers' commitment to blending comfort with aesthetics. The chair's timeless appeal has made it a staple in both residential and commercial
settings, influencing countless furniture designs that followed.

Remember to structure your essay with a clear introduction, body paragraphs that develop your points, and a conclusion that summarizes your main points. Use specific examples and evidence to support your claims, and ensure your writing is clear, concise, and engaging.
ORIGINAL PAPER

Long-term time-dependent stochastic modelling of extreme waves

Erik Vanem

E. Vanem
Statistics Division, Department of Mathematics, University of Oslo, P.O. Box 1053, Blindern, 0316 Oslo, Norway
e-mail: erikvan@math.uio.no

Published online: 5 August 2010
© The Author(s) 2010. This article is published with open access at Springerlink.com

Stoch Environ Res Risk Assess (2011) 25:185–209
DOI 10.1007/s00477-010-0431-y

Abstract This paper presents a literature survey on time-dependent statistical modelling of extreme waves and sea states. The focus is twofold: on statistical modelling of extreme waves and on space- and time-dependent statistical modelling. The first part will consist of a literature review of statistical modelling of extreme waves and wave parameters, most notably on the modelling of extreme significant wave height. The second part will focus on statistical modelling of time- and space-dependent variables in a more general sense, with emphasis on the methodology and models used in other relevant application areas. It was found that limited effort has been put on developing statistical models for waves incorporating spatial and long-term temporal variability, and it is suggested that model improvements could be achieved by adopting approaches from other application areas. In particular, Bayesian hierarchical space–time models were identified as promising tools for spatio-temporal modelling of extreme waves. Finally, a review of projections of future extreme wave climate is presented.

Keywords Extreme waves · Stochastic modelling · Spatiotemporal modelling · Climate change · Risk assessment

1 Introduction

According to casualty statistics, one of the major causes of ship losses is bad weather (Guedes Soares et al. 2001), which stresses the importance of taking extreme sea state conditions adequately into account in ship design. Therefore, a correct and thorough understanding of meteorological and oceanographic conditions, most notably the extreme values of relevant wave and wind parameters, is of paramount importance to maritime safety. Thus, there is a need for appropriate statistical models to describe these phenomena.

When designing ships and other marine and offshore structures, relevant safety regulations and design standards should be based on the best available knowledge. Meteorological data for the last 50 years are available, and this is often assumed to be representative also for the current situation. However, ships and other marine structures are designed for lifetimes of several decades, and design codes and standards should be based on knowledge about the operating environment throughout the expected lifetime of the structure, several decades into the future. Such knowledge will also be crucial for any risk assessment of maritime transportation or offshore operations.

According to the IPCC Fourth Assessment Report (IPCC 2007), the globe is currently experiencing climate change and the Earth is warming. It is also very likely that human activities and emission of greenhouse gasses are mainly responsible for the recent rise of global temperatures. Projections of future climate indicate that it is very likely that frequencies and intensities of extreme weather events will increase (IPCC 2007). Model projections also show a poleward shift of the storm tracks, with more extreme wave heights in those regions.

Thus, it is increasingly evident that climate change is a reality. An overwhelming majority of researchers and scientists agree on this, and it is reasonable to assume that the averages and extremes of sea states are changing and cannot be considered stationary.
Hence, it is no longer sufficient to base design codes on stationary wave parameters without any consideration of how these are expected to change in the future. There is a need for time-dependent statistical models that can take the time-dependency of the integrated wave parameters into account, and also adequately model the uncertainties involved, in order to predict realistic operating environments throughout the lifetime of ships and marine structures.

This paper aims at providing a comprehensive, up-to-date review of statistical models proposed for modelling long-term variability in extreme waves and sea states, as well as a review of alternative approaches from other areas of application. The paper is organized as follows: Sect. 2 outlines alternative sources of wave data, Sect. 3 comprises a review of statistical models for extreme waves, Sect. 4 presents a review of relevant spatio-temporal statistical models from other areas of application, Sect. 5 reviews projections of future wave climate and Sect. 6 concludes with some recommendations for further research. An abbreviated version of this work was presented at the OMAE conference this year (Vanem 2010).

Efforts have been made to include all relevant and important work to make this literature survey as complete as possible, and this has resulted in a rather voluminous list of references at the end of the paper. Notwithstanding, due to the enormous amount of literature in this field, some important works might inevitably have been omitted. This is unintended, and it should be noted that important contributions to the discussion herein might exist of which I have not been aware. Nevertheless, it is believed that this literature study contains a fair review of relevant literature, and as such it gives a good indication of the state-of-the-art within the field and may serve as a basis for further research on stochastic modelling of extreme waves and sea states.

1.1 Integrated sea state parameters

The state of the sea changes constantly, and it is therefore neither very practical nor very useful to describe the sea for an instantaneous point in time. Therefore, sea states are normally described by different averages and extreme values for a certain period of time, often referred to as integrated sea state parameters. Typically, such integrated parameters include the significant wave height,1 mean wave period, mean main wave direction, spread of the wave direction and mean swell. Such integrated wave parameters represent averages over a defined period of time, typically in the order of 20–30 min.

Integrated wave parameters, which are averages over different periods of time, will have their own averages and extremes. Of particular interest may be the m-year return value of the significant wave height, SWH_m, which is defined as the value of H_S that is exceeded on average once every m years. In ship design, SWH_20 has traditionally been of particular interest, since ships are normally designed for a lifetime of 20 years. The modelling of such extreme values, for example for the significant wave height, is therefore of interest.

It is also of interest to investigate how such average wave parameters vary over time. In particular, long-term variations (i.e. how these parameters will vary in the next 50–100 years) will be an important basis for design of marine and offshore structures with expected lifetimes in the range of several decades, and also for maritime risk analyses. This is of particular importance at times where climate change indicates that the future is not well represented by today's situation (i.e. where an increase in extreme weather and sea state is expected).

1 Significant wave height, denoted SWH, H_S or H_1/3, is often defined as the average wave height, from trough to crest, of the one-third largest waves that are observed during the period.
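As a toy illustration of an integrated sea state parameter (not from the paper; the wave record is synthetic and the Rayleigh scale is an arbitrary assumption), the snippet below computes the significant wave height H_1/3 of a record as the mean of the highest one-third of the individual wave heights, in line with the definition in footnote 1.

```python
import numpy as np

# Significant wave height H_1/3 from a synthetic record of individual wave heights.
rng = np.random.default_rng(1)
wave_heights = rng.rayleigh(scale=1.2, size=300)      # assumed trough-to-crest heights [m]

largest_third = np.sort(wave_heights)[::-1][: len(wave_heights) // 3]
h_significant = largest_third.mean()
print(f"H_1/3 = {h_significant:.2f} m over a record of {len(wave_heights)} waves")
```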
1.2 Waves as stochastic processes

Although the dynamics of the sea and the mechanisms underlying the generation of waves on the sea surface inevitably follow the laws of physics, and therefore, in principle, the sea state could be described deterministically, in reality this is not possible due to the complexity of the system. Hence, the description of waves and the sea must be done probabilistically. The sea is a dynamic system that is influenced by innumerable factors, and an infinite number of interrelated parameters would be needed in order to provide an exact description of the sea at any given point in time. It is simply not possible to know all and every one of these parameters. The unknown parameters introduce uncertainties into any description of the system, and an exact description of the sea is therefore not feasible. Thus, the problem of describing the sea turns into a statistical problem, and probabilistic models are needed in order to represent waves on the sea surface and to provide a better understanding of the maritime environment in which ships operate. In this regard, stochastic models would seem to be the most appropriate approach to describe extreme waves. Also, the fact that the sea state is normally described through different average and extreme properties, as discussed briefly above, indicates that statistical tools are appropriate to model waves and sea states. A comprehensive overview of statistical techniques, methodologies, theories and tools used in climatic analyses is presented in von Storch and Zwiers (1999).

Stochastic modelling of ocean waves can be performed on two very different time scales. In the short-term models, the parameters of most concern are those for individual waves, such as individual wave height, wave length and period, etc. The times involved in such models are normally in the order from a few seconds to a couple of hours. The long-term models mainly refer to the description of spectral parameters, and the times that are involved normally span over many years. It is the latter time scales that are of main interest in the present work, considering modelling of possible long-term trends due to climate change.

1.3 Predicting the impact of climate change on extreme sea states

The state of the oceans and the characteristics of the waves are influenced by innumerable external factors, and the most influential boundary conditions are related to the atmosphere and the global and local climate in general. Atmospheric pressure, wind, temperature, precipitation, solar radiation and heat, tidal movements, the rotation of the earth and movements of the seabed (e.g. from earthquakes or volcanic activities) are examples of external factors that jointly influence the generation of waves on the sea surface. In one sense, some of the average and extreme properties of the sea state can be regarded as stationary if the overall average boundary conditions do not change.
That is, in spite of the continuous variations of sea states over time, the averages such as seasonal average wave heights and return periods for extreme waves can be considered as stationary if the average boundary conditions (e.g. average atmospheric pressure, average wind, average temperatures, etc.) remain stationary. However, in recent years it has become increasingly apparent that the climate system overall is not stationary and that the climate will change in the near future; in fact, it has been observed that the climate is already undergoing a change, with a global long-term trend towards higher temperatures and more frequent and intense severe weather events, although local and regional trends may differ from this global trend. These climate changes, man-made or not, will thus change the overall boundary conditions for the sea, and the assumption that the average sea states can be regarded as stationary ceases to be valid.

In order to predict future trends in sea state parameters in the non-stationary case, one may therefore start with predicting the trends in the boundary conditions such as temperature, atmospheric pressure and wind. Assuming that a significant part of the climate change is man-made and can be ascribed to the increasing emission of greenhouse gases, most notably CO2, and aerosols, predictions of climate change can be made based on various emission scenarios or forcing scenarios (Nakićenović et al. 2000). These forcing scenarios can then be fed into climate models to predict global trends in meteorological variables, which can again be used to predict trends in average and extreme properties of sea waves. However, most wave models are deterministic and not able to handle the inherent uncertainties involved in a rigorous manner.

Estimates of future H_S return values are difficult since there are no projections of future H_S fields. However, projections of sea level pressure provided by climate models are reasonably reliable, and it is known that the H_S fields are highly correlated with sea level pressure fields. Therefore, one approach could be to model H_S fields by regressing on projected sea level pressure fields, as was done in Wang et al. (2004). Other covariates may also be used to predict changes in extreme wave climate from projected changes in the overall climate, and the utilization of such dependencies may prove important in modelling long-term trends in extreme waves.

2 Wave data and data sources

As in all statistical modelling, a crucial prerequisite for any sensible modelling and reliable analysis is the availability of statistical data. For example, if models describing the spatio-temporal variability of extreme waves are to be developed, wave data with sufficient spatio-temporal resolution are needed. Furthermore, the lack of adequate coverage in the data will restrict the scope of the statistical models that can be used.

Wave data can be obtained from buoys, laser measurements, satellite images, shipborne wave recorders, or be generated by numerical wave models. Of these, buoy measurements are most reliable, but the spatial coverage is limited. For regions where buoy data are not available, satellite data may be an alternative for estimation of wave heights (Krogstad and Barstow 1999; Panchang et al. 1999), and there are different satellites that collect such data.
Examples of satellite missions are the European Remote Sensing satellites (ERS-1 and ERS-2), the Topex/Poseidon mission and the Jason-1 and -2 missions.

Wave parameters derived from satellite altimeter data were demonstrated to be in reasonable agreement with buoy measurements by the end of last century (Hwang et al. 1998). More recently, further validation of wave heights measured from altimeters has been performed, and the agreement with buoy data is generally good (Queffeulou 2004; Durrant et al. 2009). However, corrections due to biases may be required, and both negative and positive biases for the significant wave height have been reported, indicating that corrections are region-dependent (Meath et al. 2008). Sea state parameters such as significant wave height derived from synthetic aperture radar images taken from satellites were addressed in Lehner et al. (2007).

Ship observations are another source of wave data which covers areas where buoy wave measurements are not available. The Voluntary Observing Ship (VOS) scheme has been in operation for almost 150 years and has a large set of voluntarily collected data. However, due to the fact that ships tend to avoid extreme weather whenever possible, extreme wave events are likely to be under-represented in ship observations, and hence such data are not ideally suited to model extreme wave events (DelBalzo et al. 2003; Olsen et al. 2006).

Recently, a novel wave acquisition stereo system (WASS), based on a variational image sensor and video observational technology for reconstructing the 4D dynamics of ocean waves, was developed (Fedele et al. 2009). The spatial and temporal data provided by this system would be rich in statistical content compared to buoy data, but the availability of such data is still limited.

In general, measurements of wave parameters are more scarce than meteorological data such as wind and pressure fields, which are collected more systematically and cover a wider area. An alternative is therefore to use output from wave models that use meteorological data as input, rather than to use wave data that are measured directly.

Wave models are normally used for forecast or hindcast of sea states (Guedes Soares et al. 2002). Forecasts typically predict sea states up to 3–5 days ahead. Hindcast modelling can be used to calibrate the models after precise meteorological measurements have been collected. It can also be used as a basis for design, but it is stressed that quality control is necessary and possible errors and biases should be identified and corrected (Bitner-Gregersen and de Valk 2008).

Currently, data are available from various reanalysis projects (Caires et al. 2004). For example, 40 years of meteorological data are available from the NCEP/NCAR reanalysis project (Kalnay et al. 1996) that could be used to run wave models (Swail and Cox 2000; Cox and Swail 2001). A more recent reanalysis project, ERA-40 (Uppala et al. 2005), was carried out by the European Centre for Medium-Range Weather Forecasts (ECMWF) and covers a 45-year period from 1957 to 2002. The data contain six-hourly fields of global wave parameters such as significant wave height, mean wave direction and mean wave period, as well as mean sea level pressure and wind fields and other meteorological parameters. A large part of this reanalysis data is freely available for download from their website for research purposes.2 It has been reported that the ERA-40 dataset contains some inhomogeneities in time and that it underestimates high wave heights (Sterl and Caires 2005), but corrected datasets for the significant wave height have been produced (Caires and Sterl 2005).

2 Data available from http://data-portal.ecmwf.int/data/d/era40_daily/.
Hence, a new 45-year global six-hourly dataset of significant wave height has been created, and the corrected data show clear improvements compared to the original data. In Caires and Swail (2004) it is stated that this dataset can be obtained freely from the authors for scientific purposes.

3 Review of statistical models for extreme waves

In order to model long-term trends in the intensity and frequency of occurrence of extreme wave events or extreme sea states due to climate change, appropriate models must be used. There are numerous stochastic wave models proposed in the literature, but most of these are developed for other purposes than predicting such long-term trends. Models used for wave forecasting, for example in operational simulation of safety of ships and offshore structures, typically have a short-term perspective and cannot be used to investigate long-term trends. Also, many wave models assume stationary or cyclic time series, which would not be the case if climate change is a reality.

There are different approaches to estimating the extreme wave heights at a certain location based on available wave data, and some of the most widely used are the initial distribution method, the annual maxima method, the peak-over-threshold method and the MEan Number of Up-crossings (MENU) method. The initial distribution method uses data (measured or calculated) of all wave heights, and the extreme wave height of a certain return period is estimated as the quantile h_p of the wave height distribution F(h) with probability p. The annual maxima approach uses only the annual (or block) maxima, and the extreme wave height will have one of the three limit distributions referred to as the family of the generalized extreme value distribution. The peak-over-threshold approach uses data with wave heights greater than a certain threshold, and thus allows for an increased number of samples compared to the annual maxima approach. Waves exceeding this threshold would then be modelled according to the generalized Pareto distribution. However, the peaks-over-threshold method has demonstrated a clear dependence on the threshold and is therefore not very reliable. The MENU method determines the return period of an extreme wave of a certain wave height by requiring that the expected or mean number of up-crossings of this wave height will be one for that time interval.

Another approach useful in extreme event modelling is the use of quantile functions, an alternative way of defining a probability distribution (Gilchrist 2000). The quantile function, Q, is a function of the cumulative probability of a distribution and is simply the inverse of the cumulative density function: Q(p) = F^{-1}(p) and F(x) = Q^{-1}(x). This function can then be used in frequency analysis to find useful estimates of the quantiles of relevant return periods T of extreme events in the upper tail of the frequency distribution, Q_T = Q(1 - 1/T).
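To make the annual-maxima and quantile-function ideas concrete, the following sketch (synthetic data, not from the paper) fits a generalized extreme value distribution to simulated annual maxima of significant wave height with scipy and reads off the 20-year return value as the (1 - 1/20) quantile, i.e. Q_T = Q(1 - 1/T) with T = 20.

```python
import numpy as np
from scipy.stats import genextreme

# Annual-maxima approach on synthetic data: fit a GEV distribution to yearly
# maxima of significant wave height and estimate the 20-year return value.
rng = np.random.default_rng(7)
annual_max_hs = 8.0 + rng.gumbel(loc=0.0, scale=1.1, size=45)   # 45 years of annual maxima [m]

shape, loc, scale = genextreme.fit(annual_max_hs)
swh20 = genextreme.ppf(1.0 - 1.0 / 20.0, shape, loc=loc, scale=scale)
print(f"Estimated 20-year return value SWH_20 = {swh20:.1f} m")
```

A peaks-over-threshold variant would instead fit scipy.stats.genpareto to exceedances over a high threshold, subject to the threshold-sensitivity caveat noted above.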
extreme events (see e.g. Rychlik 2000).

In the following, a brief review of some wave models proposed in the literature will be given. This includes a brief description of some short-term and stationary wave models as well as a more comprehensive review of proposed approaches to modelling long-term trends due to global climatic changes. An introduction to stochastic analysis of ocean waves can be found in Ochi (1998) and Trulsen (2006), albeit the latter with a particular emphasis on freak or rogue waves.

3.1 Short-term stochastic wave models

Waves are generated by wind action, and wave predictions are often based on knowledge of the generating wind and wind-wave relationships. Most wave models for operational wave forecasting are based on the energy balance equation; there is a general consensus that this describes the fundamental principle for wave predictions, and significant progress has been made in recent decades (Janssen 2008). Currently, the third-generation wave model WAM is one of the most widely used models for wave forecasting (The WAMDI Group 1988; Komen et al. 1994), computing the wave spectrum from physical first principles. Other widely used wave models are Wave Watch and SWAN, and a number of other models exist as well (The Wise Group et al. 2007). However, wave generation is basically an uncertain and random process which makes it difficult to model deterministically, and in Deo et al. (2001) and Bazargan et al. (2007) approaches using neural networks were proposed as an alternative to deterministic wave forecasting models.

There are a number of short-term, statistical wave models for modelling individual waves and for predicting and forecasting sea states in the not too distant future. Most of the models for individual waves are based on Gaussian approaches, but other types of stochastic wave models have also been proposed to account for observed asymmetries (e.g. adding random correction terms to a Gaussian model (Machado and Rychlik 2003), or models based on Lagrangian approaches (Lindgren 2006; Aberg and Lindgren 2008)). Asymptotic models for the distribution of maxima of Gaussian processes over a certain period of time exist, and under certain assumptions the maximum values are asymptotically distributed according to the Gumbel distribution. However, as noted in Rydén (2006), care should be taken when using this approximation for the modelling of maxima of wave crests. A similar concern was expressed in Coles et al. (2003), albeit not related to waves.

Given the short-term perspective of these types of models, they cannot be used to describe long-term trends due to climate change, nor to formulate design criteria for ships and offshore structures, even though they are important for maritime safety during operation. Improved weather and wave forecasts will of course improve safety at sea, but the main interest in the present study is in long-term trends in ocean wave climate and the effect these will have on maritime safety and on the design of marine structures. Therefore, short-term wave models will not be considered further herein.

3.1.1 Significant wave height as a function of wind speed

The significant wave height for a fully developed sea, sometimes referred to as the equilibrium sea approximation, given a fixed wind speed, has been modelled as a function of the wind speed in different ways, for example as $H_S \propto U^{5/2}$ or $H_S \propto U^2$ (Kinsman 1965). This makes it possible to make short-term predictions of the significant wave height under the assumptions of a constant wind speed and unlimited fetch and duration. For developing sea conditions, with limited fetch or limited wind duration, the significant wave height as a function of wind speed, U (m/s), and respectively fetch X (km) and duration D (h), has been modelled in different ways, for example as $H_S \propto X^{1/2}U$ and $H_S \propto D^{5/7}U^{9/7}$ (Özger and Şen 2007).

However, it is observed that the equilibrium wind sea approximation is seldom valid, and an alternative model for predicting the significant wave height for wind waves, $H_S$, from the wind speed $U_{10}$ at a reference height of 10 m was proposed in Andreas and Wang (2007), using a different, yet simple parametrization. Eighteen years of hourly data of significant wave height and wind speed for 12 different buoys were used in order to estimate the model, which can be written in the following form:

$H_S = C(D)\, I(U_{10} \le 4\ \mathrm{m/s}) + \left[ a(D) U_{10}^2 + b(D) \right] I(U_{10} > 4\ \mathrm{m/s})$    (1)

Here D denotes the water depth and C, a and b are depth-dependent parameters. Based on comparison with measurements it was concluded that this model is reliable for wind speeds up to at least $U_{10} = 25$ m/s.

It is outside the scope of the present literature survey to review all models for predicting wave heights from wind speed or other meteorological data. Such models are an integral part of the various wave models available for wave forecasting, but cannot be used directly to model long-term variations in wave height. However, given adequate long-term wind forecasts, such relationships between wind speed and wave height may be exploited in simulating long-term wave data for long-term predictions of wave climate.

3.2 Stationary models

A thorough survey of stochastic models for wind and sea state time series is presented in Monbet et al. (2007). Only time series at the scale of the sea state are considered, without modelling events at the scale of individual waves, and only at given geographical points. One section of Monbet et al. (2007) discusses how to model non-stationarity such as trends and seasonal components in time series, but for the main part of the paper it is assumed that the studied processes are stationary. The models are classified in three groups: models based on Gaussian approximations, other non-parametric models and other parametric models. In the following, the main characteristics of these different types of wave models are highlighted.

Even though ocean wave time series cannot normally be assumed to be Gaussian, it may be possible to transform these time series into time series with Gaussian marginal distributions when they have a continuous state space (Monbet et al. 2007). The transformed time series can then be simulated using existing techniques for simulating Gaussian processes. If $\{Y_t\}$ is a stationary process in $\mathbb{R}^d$, assume that there exists a transformation $f: \mathbb{R}^d \to \mathbb{R}^d$ and a stationary Gaussian process $\{X_t\}$ so that $Y_t = f(X_t)$. Such a procedure consists of determining the transformation function f, generating realizations of the process $\{X_t\}$, and then transforming the generated samples of $\{X_t\}$ into samples of $\{Y_t\}$ using f. A number of such models for the significant wave height have been proposed in the literature (e.g. Cunha and Guedes Soares (1999) and Walton and Borgman (1990) for the univariate time series of significant wave height, $H_s$; Guedes Soares and Cunha (2000) and Monbet and Prevosto (2001) for the bivariate time series of significant wave height and mean wave period, $(H_s, T)$; and DelBalzo et al. (2003) for the multivariate time series of significant wave height, mean wave period and mean wave direction). However, it is noted that the duration statistics of transformed Gaussian processes have been demonstrated not to fit data too well, even though the occurrence probability is correctly modelled (Jenkins 2002).

Multimodal wave models for combined seas (e.g. with wind-sea and swell components) have also been discussed in the literature (see e.g. Torsethaugen 1993; Torsethaugen and Haver 2004; Ewans et al. 2006), but these are generally not required to describe the severe sea states where extremes occur (Bitner-Gregersen and Toffoli 2009).

A few non-parametric methods for simulating wave parameters have been proposed, as reported in Monbet et al. (2007). One may for example assume that the observed time series are Markov chains and use non-parametric methods such as nearest-neighbour resampling to estimate transition kernels. In Caires and Sterl (2005), a non-parametric regression method was proposed to correct the outputs of meteorological models. A continuous-space, discrete-time Markov model for the trivariate time series of wind speed, significant wave height and spectral peak period was presented in Monbet and Marteau (2001). However, one major drawback of non-parametric methods is the lack of descriptive power.

An approach based on copulas for multivariate modelling of oceanographic variables, accounting for dependencies between the variables, was proposed in de Waal and van Gelder (2005) and applied to the joint bivariate description of extreme wave heights and wave periods.

Parametric models for wave time series include various linear autoregressive models, nonlinear autoregressive models, finite-state-space Markov chain models and circular time series models. A modified Weibull model was proposed in Muraleedharan et al. (2007) for modelling of significant and maximum wave height. For short-term modelling of wave parameters, different approaches based on artificial neural networks (see e.g. Deo et al. 2001; Mandal and Prabaharan 2006; Arena and Puca 2004; Makarynskyy et al. 2005) and data mining techniques (Mahjoobi and Etemad-Shahidi 2008; Mahjoobi and Mosabbeb 2009) have been applied successfully. A non-linear threshold autoregressive model for the significant wave height was proposed in Scotto and Guedes Soares (2000).

3.3 Non-stationary models

Many statistical models for extreme waves assume the stationarity of extreme values, but some non-stationary models have been proposed in the literature. In the following, some non-stationary models for extreme waves previously presented in the literature will be reviewed. A review of classical methods for asymptotic extreme value analysis used in extreme wave predictions is presented in Soukissian and Kalantzi (2006).

3.3.1 Microscopic models

A number of statistical models have been presented in the literature where the focus has been to use sophisticated statistical methods to estimate extreme values at certain specific geographical points (e.g. based on data measurements at that location).
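As a concrete illustration of the non-stationary extreme-value idea discussed in this section, the following minimal sketch fits a GEV model with a linear trend in the location parameter to annual-maximum significant wave heights and evaluates a time-dependent 100-year return level, Q_T = Q(1 - 1/T). The data here are synthetic, the trend form and all variable names are assumptions made only for the example (the GEV shape parameter is expressed in scipy's convention), and the sketch is not taken from any of the surveyed papers.

```python
# Sketch: non-stationary GEV fit (linear trend in location) to synthetic
# annual-maximum significant wave heights, plus a 100-year return level.
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(1)
years = np.arange(1960, 2003)                      # hypothetical 43-year record
t = (years - years[0]) / 10.0                      # decades since start of record
true_loc = 9.0 + 0.15 * t                          # weak upward trend (synthetic truth)
h_max = genextreme.rvs(c=-0.1, loc=true_loc, scale=1.2, random_state=rng)

def neg_log_lik(params):
    """Negative log-likelihood of a GEV with location mu0 + mu1 * t."""
    mu0, mu1, log_sigma, c = params
    sigma = np.exp(log_sigma)                      # keep the scale positive
    return -np.sum(genextreme.logpdf(h_max, c=c, loc=mu0 + mu1 * t, scale=sigma))

res = minimize(neg_log_lik, x0=[np.mean(h_max), 0.0, 0.0, -0.1], method="Nelder-Mead")
mu0, mu1, log_sigma, c = res.x

T = 100.0                                          # return period in years
for label, ti in [("start of record", t[0]), ("end of record", t[-1])]:
    q = genextreme.ppf(1.0 - 1.0 / T, c=c, loc=mu0 + mu1 * ti, scale=np.exp(log_sigma))
    print(f"{label}: estimated {T:.0f}-year significant wave height = {q:.2f} m")
```

Comparing the return level at the start and at the end of the record gives a simple indication of how a trend in the extremes would propagate into design values.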
Network impacts of a road capacity reduction:Empirical analysisand model predictionsDavid Watling a ,⇑,David Milne a ,Stephen Clark baInstitute for Transport Studies,University of Leeds,Woodhouse Lane,Leeds LS29JT,UK b Leeds City Council,Leonardo Building,2Rossington Street,Leeds LS28HD,UKa r t i c l e i n f o Article history:Received 24May 2010Received in revised form 15July 2011Accepted 7September 2011Keywords:Traffic assignment Network models Equilibrium Route choice Day-to-day variabilitya b s t r a c tIn spite of their widespread use in policy design and evaluation,relatively little evidencehas been reported on how well traffic equilibrium models predict real network impacts.Here we present what we believe to be the first paper that together analyses the explicitimpacts on observed route choice of an actual network intervention and compares thiswith the before-and-after predictions of a network equilibrium model.The analysis isbased on the findings of an empirical study of the travel time and route choice impactsof a road capacity reduction.Time-stamped,partial licence plates were recorded across aseries of locations,over a period of days both with and without the capacity reduction,and the data were ‘matched’between locations using special-purpose statistical methods.Hypothesis tests were used to identify statistically significant changes in travel times androute choice,between the periods of days with and without the capacity reduction.A trafficnetwork equilibrium model was then independently applied to the same scenarios,and itspredictions compared with the empirical findings.From a comparison of route choice pat-terns,a particularly influential spatial effect was revealed of the parameter specifying therelative values of distance and travel time assumed in the generalised cost equations.When this parameter was ‘fitted’to the data without the capacity reduction,the networkmodel broadly predicted the route choice impacts of the capacity reduction,but with othervalues it was seen to perform poorly.The paper concludes by discussing the wider practicaland research implications of the study’s findings.Ó2011Elsevier Ltd.All rights reserved.1.IntroductionIt is well known that altering the localised characteristics of a road network,such as a planned change in road capacity,will tend to have both direct and indirect effects.The direct effects are imparted on the road itself,in terms of how it can deal with a given demand flow entering the link,with an impact on travel times to traverse the link at a given demand flow level.The indirect effects arise due to drivers changing their travel decisions,such as choice of route,in response to the altered travel times.There are many practical circumstances in which it is desirable to forecast these direct and indirect impacts in the context of a systematic change in road capacity.For example,in the case of proposed road widening or junction improvements,there is typically a need to justify econom-ically the required investment in terms of the benefits that will likely accrue.There are also several examples in which it is relevant to examine the impacts of road capacity reduction .For example,if one proposes to reallocate road space between alternative modes,such as increased bus and cycle lane provision or a pedestrianisation scheme,then typically a range of alternative designs exist which may differ in their ability to accommodate efficiently the new traffic and routing patterns.0965-8564/$-see front matter Ó2011Elsevier Ltd.All rights 
reserved.doi:10.1016/j.tra.2011.09.010⇑Corresponding author.Tel.:+441133436612;fax:+441133435334.E-mail address:d.p.watling@ (D.Watling).168 D.Watling et al./Transportation Research Part A46(2012)167–189Through mathematical modelling,the alternative designs may be tested in a simulated environment and the most efficient selected for implementation.Even after a particular design is selected,mathematical models may be used to adjust signal timings to optimise the use of the transport system.Road capacity may also be affected periodically by maintenance to essential services(e.g.water,electricity)or to the road itself,and often this can lead to restricted access over a period of days and weeks.In such cases,planning authorities may use modelling to devise suitable diversionary advice for drivers,and to plan any temporary changes to traffic signals or priorities.Berdica(2002)and Taylor et al.(2006)suggest more of a pro-ac-tive approach,proposing that models should be used to test networks for potential vulnerability,before any reduction mate-rialises,identifying links which if reduced in capacity over an extended period1would have a substantial impact on system performance.There are therefore practical requirements for a suitable network model of travel time and route choice impacts of capac-ity changes.The dominant method that has emerged for this purpose over the last decades is clearly the network equilibrium approach,as proposed by Beckmann et al.(1956)and developed in several directions since.The basis of using this approach is the proposition of what are believed to be‘rational’models of behaviour and other system components(e.g.link perfor-mance functions),with site-specific data used to tailor such models to particular case studies.Cross-sectional forecasts of network performance at specific road capacity states may then be made,such that at the time of any‘snapshot’forecast, drivers’route choices are in some kind of individually-optimum state.In this state,drivers cannot improve their route selec-tion by a unilateral change of route,at the snapshot travel time levels.The accepted practice is to‘validate’such models on a case-by-case basis,by ensuring that the model—when supplied with a particular set of parameters,input network data and input origin–destination demand data—reproduces current mea-sured mean link trafficflows and mean journey times,on a sample of links,to some degree of accuracy(see for example,the practical guidelines in TMIP(1997)and Highways Agency(2002)).This kind of aggregate level,cross-sectional validation to existing conditions persists across a range of network modelling paradigms,ranging from static and dynamic equilibrium (Florian and Nguyen,1976;Leonard and Tough,1979;Stephenson and Teply,1984;Matzoros et al.,1987;Janson et al., 1986;Janson,1991)to micro-simulation approaches(Laird et al.,1999;Ben-Akiva et al.,2000;Keenan,2005).While such an approach is plausible,it leaves many questions unanswered,and we would particularly highlight two: 1.The process of calibration and validation of a network equilibrium model may typically occur in a cycle.That is to say,having initially calibrated a model using the base data sources,if the subsequent validation reveals substantial discrep-ancies in some part of the network,it is then natural to adjust the model parameters(including perhaps even the OD matrix elements)until the model outputs better reflect the validation data.2In this process,then,we allow the adjustment of potentially a large number of network parameters 
and input data in order to replicate the validation data,yet these data themselves are highly aggregate,existing only at the link level.To be clear here,we are talking about a level of coarseness even greater than that in aggregate choice models,since we cannot even infer from link-level data the aggregate shares on alternative routes or OD movements.The question that arises is then:how many different combinations of parameters and input data values might lead to a similar link-level validation,and even if we knew the answer to this question,how might we choose between these alternative combinations?In practice,this issue is typically neglected,meaning that the‘valida-tion’is a rather weak test of the model.2.Since the data are cross-sectional in time(i.e.the aim is to reproduce current base conditions in equilibrium),then in spiteof the large efforts required in data collection,no empirical evidence is routinely collected regarding the model’s main purpose,namely its ability to predict changes in behaviour and network performance under changes to the network/ demand.This issue is exacerbated by the aggregation concerns in point1:the‘ambiguity’in choosing appropriate param-eter values to satisfy the aggregate,link-level,base validation strengthens the need to independently verify that,with the selected parameter values,the model responds reliably to changes.Although such problems–offitting equilibrium models to cross-sectional data–have long been recognised by practitioners and academics(see,e.g.,Goodwin,1998), the approach described above remains the state-of-practice.Having identified these two problems,how might we go about addressing them?One approach to thefirst problem would be to return to the underlying formulation of the network model,and instead require a model definition that permits analysis by statistical inference techniques(see for example,Nakayama et al.,2009).In this way,we may potentially exploit more information in the variability of the link-level data,with well-defined notions(such as maximum likelihood)allowing a systematic basis for selection between alternative parameter value combinations.However,this approach is still using rather limited data and it is natural not just to question the model but also the data that we use to calibrate and validate it.Yet this is not altogether straightforward to resolve.As Mahmassani and Jou(2000) remarked:‘A major difficulty...is obtaining observations of actual trip-maker behaviour,at the desired level of richness, simultaneously with measurements of prevailing conditions’.For this reason,several authors have turned to simulated gaming environments and/or stated preference techniques to elicit information on drivers’route choice behaviour(e.g. 1Clearly,more sporadic and less predictable reductions in capacity may also occur,such as in the case of breakdowns and accidents,and environmental factors such as severe weather,floods or landslides(see for example,Iida,1999),but the responses to such cases are outside the scope of the present paper. 
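To make the equilibrium notion concrete, the sketch below computes a Wardrop user equilibrium for a single origin-destination pair served by two parallel routes with BPR-type link cost functions: at the resulting flow split, no driver can reduce their travel time by unilaterally switching route. The cost-function parameters, capacities, free-flow times and demand are hypothetical; this is only an illustration of the equilibrium condition, not the network model applied in the study described here.

```python
# Minimal two-route Wardrop user-equilibrium sketch with BPR cost functions.
def bpr(free_flow_time, flow, capacity, alpha=0.15, beta=4.0):
    """Standard BPR travel-time function."""
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

def two_route_equilibrium(demand, t0_a, cap_a, t0_b, cap_b, tol=1e-9):
    """Flow split (x on route A, demand - x on route B) such that either both
    used routes have equal travel time, or only the cheaper route is used."""
    if bpr(t0_a, demand, cap_a) <= bpr(t0_b, 0.0, cap_b):
        return demand, 0.0                      # A cheaper even when fully loaded
    if bpr(t0_b, demand, cap_b) <= bpr(t0_a, 0.0, cap_a):
        return 0.0, demand                      # B cheaper even when fully loaded
    lo, hi = 0.0, demand
    while hi - lo > tol:                        # bisection on the equal-cost condition
        x = 0.5 * (lo + hi)
        if bpr(t0_a, x, cap_a) < bpr(t0_b, demand - x, cap_b):
            lo = x                              # route A still cheaper: shift flow to A
        else:
            hi = x
    return x, demand - x

x_a, x_b = two_route_equilibrium(demand=2000.0, t0_a=10.0, cap_a=1000.0,
                                 t0_b=15.0, cap_b=1800.0)
print(f"Route A: {x_a:7.1f} veh/h, travel time {bpr(10.0, x_a, 1000.0):.2f} min")
print(f"Route B: {x_b:7.1f} veh/h, travel time {bpr(15.0, x_b, 1800.0):.2f} min")
```

The point of the illustration is that the equilibrium flows are determined jointly by the demand and the cost parameters, which is why many different parameter combinations can reproduce the same aggregate link-level observations.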
2Some authors have suggested more systematic,bi-level type optimization processes for thisfitting process(e.g.Xu et al.,2004),but this has no material effect on the essential points above.D.Watling et al./Transportation Research Part A46(2012)167–189169 Mahmassani and Herman,1990;Iida et al.,1992;Khattak et al.,1993;Vaughn et al.,1995;Wardman et al.,1997;Jou,2001; Chen et al.,2001).This provides potentially rich information for calibrating complex behavioural models,but has the obvious limitation that it is based on imagined rather than real route choice situations.Aside from its common focus on hypothetical decision situations,this latter body of work also signifies a subtle change of emphasis in the treatment of the overall network calibration problem.Rather than viewing the network equilibrium calibra-tion process as a whole,the focus is on particular components of the model;in the cases above,the focus is on that compo-nent concerned with how drivers make route decisions.If we are prepared to make such a component-wise analysis,then certainly there exists abundant empirical evidence in the literature,with a history across a number of decades of research into issues such as the factors affecting drivers’route choice(e.g.Wachs,1967;Huchingson et al.,1977;Abu-Eisheh and Mannering,1987;Duffell and Kalombaris,1988;Antonisse et al.,1989;Bekhor et al.,2002;Liu et al.,2004),the nature of travel time variability(e.g.Smeed and Jeffcoate,1971;Montgomery and May,1987;May et al.,1989;McLeod et al., 1993),and the factors affecting trafficflow variability(Bonsall et al.,1984;Huff and Hanson,1986;Ribeiro,1994;Rakha and Van Aerde,1995;Fox et al.,1998).While these works provide useful evidence for the network equilibrium calibration problem,they do not provide a frame-work in which we can judge the overall‘fit’of a particular network model in the light of uncertainty,ambient variation and systematic changes in network attributes,be they related to the OD demand,the route choice process,travel times or the network data.Moreover,such data does nothing to address the second point made above,namely the question of how to validate the model forecasts under systematic changes to its inputs.The studies of Mannering et al.(1994)and Emmerink et al.(1996)are distinctive in this context in that they address some of the empirical concerns expressed in the context of travel information impacts,but their work stops at the stage of the empirical analysis,without a link being made to net-work prediction models.The focus of the present paper therefore is both to present thefindings of an empirical study and to link this empirical evidence to network forecasting models.More recently,Zhu et al.(2010)analysed several sources of data for evidence of the traffic and behavioural impacts of the I-35W bridge collapse in Minneapolis.Most pertinent to the present paper is their location-specific analysis of linkflows at 24locations;by computing the root mean square difference inflows between successive weeks,and comparing the trend for 2006with that for2007(the latter with the bridge collapse),they observed an apparent transient impact of the bridge col-lapse.They also showed there was no statistically-significant evidence of a difference in the pattern offlows in the period September–November2007(a period starting6weeks after the bridge collapse),when compared with the corresponding period in2006.They suggested that this was indicative of the length of a‘re-equilibration process’in a conceptual sense, though did not explicitly 
compare their empiricalfindings with those of a network equilibrium model.The structure of the remainder of the paper is as follows.In Section2we describe the process of selecting the real-life problem to analyse,together with the details and rationale behind the survey design.Following this,Section3describes the statistical techniques used to extract information on travel times and routing patterns from the survey data.Statistical inference is then considered in Section4,with the aim of detecting statistically significant explanatory factors.In Section5 comparisons are made between the observed network data and those predicted by a network equilibrium model.Finally,in Section6the conclusions of the study are highlighted,and recommendations made for both practice and future research.2.Experimental designThe ultimate objective of the study was to compare actual data with the output of a traffic network equilibrium model, specifically in terms of how well the equilibrium model was able to correctly forecast the impact of a systematic change ap-plied to the network.While a wealth of surveillance data on linkflows and travel times is routinely collected by many local and national agencies,we did not believe that such data would be sufficiently informative for our purposes.The reason is that while such data can often be disaggregated down to small time step resolutions,the data remains aggregate in terms of what it informs about driver response,since it does not provide the opportunity to explicitly trace vehicles(even in aggre-gate form)across more than one location.This has the effect that observed differences in linkflows might be attributed to many potential causes:it is especially difficult to separate out,say,ambient daily variation in the trip demand matrix from systematic changes in route choice,since both may give rise to similar impacts on observed linkflow patterns across re-corded sites.While methods do exist for reconstructing OD and network route patterns from observed link data(e.g.Yang et al.,1994),these are typically based on the premise of a valid network equilibrium model:in this case then,the data would not be able to give independent information on the validity of the network equilibrium approach.For these reasons it was decided to design and implement a purpose-built survey.However,it would not be efficient to extensively monitor a network in order to wait for something to happen,and therefore we required advance notification of some planned intervention.For this reason we chose to study the impact of urban maintenance work affecting the roads,which UK local government authorities organise on an annual basis as part of their‘Local Transport Plan’.The city council of York,a historic city in the north of England,agreed to inform us of their plans and to assist in the subsequent data collection exercise.Based on the interventions planned by York CC,the list of candidate studies was narrowed by considering factors such as its propensity to induce significant re-routing and its impact on the peak periods.Effectively the motivation here was to identify interventions that were likely to have a large impact on delays,since route choice impacts would then likely be more significant and more easily distinguished from ambient variability.This was notably at odds with the objectives of York CC,170 D.Watling et al./Transportation Research Part A46(2012)167–189in that they wished to minimise disruption,and so where possible York CC planned interventions to take place at times of day and 
of the year where impacts were minimised;therefore our own requirement greatly reduced the candidate set of studies to monitor.A further consideration in study selection was its timing in the year for scheduling before/after surveys so to avoid confounding effects of known significant‘seasonal’demand changes,e.g.the impact of the change between school semesters and holidays.A further consideration was York’s role as a major tourist attraction,which is also known to have a seasonal trend.However,the impact on car traffic is relatively small due to the strong promotion of public trans-port and restrictions on car travel and parking in the historic centre.We felt that we further mitigated such impacts by sub-sequently choosing to survey in the morning peak,at a time before most tourist attractions are open.Aside from the question of which intervention to survey was the issue of what data to collect.Within the resources of the project,we considered several options.We rejected stated preference survey methods as,although they provide a link to personal/socio-economic drivers,we wanted to compare actual behaviour with a network model;if the stated preference data conflicted with the network model,it would not be clear which we should question most.For revealed preference data, options considered included(i)self-completion diaries(Mahmassani and Jou,2000),(ii)automatic tracking through GPS(Jan et al.,2000;Quiroga et al.,2000;Taylor et al.,2000),and(iii)licence plate surveys(Schaefer,1988).Regarding self-comple-tion surveys,from our own interview experiments with self-completion questionnaires it was evident that travellersfind it relatively difficult to recall and describe complex choice options such as a route through an urban network,giving the po-tential for significant errors to be introduced.The automatic tracking option was believed to be the most attractive in this respect,in its potential to accurately map a given individual’s journey,but the negative side would be the potential sample size,as we would need to purchase/hire and distribute the devices;even with a large budget,it is not straightforward to identify in advance the target users,nor to guarantee their cooperation.Licence plate surveys,it was believed,offered the potential for compromise between sample size and data resolution: while we could not track routes to the same resolution as GPS,by judicious location of surveyors we had the opportunity to track vehicles across more than one location,thus providing route-like information.With time-stamped licence plates, the matched data would also provide journey time information.The negative side of this approach is the well-known poten-tial for significant recording errors if large sample rates are required.Our aim was to avoid this by recording only partial licence plates,and employing statistical methods to remove the impact of‘spurious matches’,i.e.where two different vehi-cles with the same partial licence plate occur at different locations.Moreover,extensive simulation experiments(Watling,1994)had previously shown that these latter statistical methods were effective in recovering the underlying movements and travel times,even if only a relatively small part of the licence plate were recorded,in spite of giving a large potential for spurious matching.We believed that such an approach reduced the opportunity for recorder error to such a level to suggest that a100%sample rate of vehicles passing may be feasible.This was tested in a pilot study conducted by the project team,with 
dictaphones used to record a100%sample of time-stamped, partial licence plates.Independent,duplicate observers were employed at the same location to compare error rates;the same study was also conducted with full licence plates.The study indicated that100%surveys with dictaphones would be feasible in moderate trafficflow,but only if partial licence plate data were used in order to control observation errors; for higherflow rates or to obtain full number plate data,video surveys should be considered.Other important practical les-sons learned from the pilot included the need for clarity in terms of vehicle types to survey(e.g.whether to include motor-cycles and taxis),and of the phonetic alphabet used by surveyors to avoid transcription ambiguities.Based on the twin considerations above of planned interventions and survey approach,several candidate studies were identified.For a candidate study,detailed design issues involved identifying:likely affected movements and alternative routes(using local knowledge of York CC,together with an existing network model of the city),in order to determine the number and location of survey sites;feasible viewpoints,based on site visits;the timing of surveys,e.g.visibility issues in the dark,winter evening peak period;the peak duration from automatic trafficflow data;and specific survey days,in view of public/school holidays.Our budget led us to survey the majority of licence plate sites manually(partial plates by audio-tape or,in lowflows,pen and paper),with video surveys limited to a small number of high-flow sites.From this combination of techniques,100%sampling rate was feasible at each site.Surveys took place in the morning peak due both to visibility considerations and to minimise conflicts with tourist/special event traffic.From automatic traffic count data it was decided to survey the period7:45–9:15as the main morning peak period.This design process led to the identification of two studies:2.1.Lendal Bridge study(Fig.1)Lendal Bridge,a critical part of York’s inner ring road,was scheduled to be closed for maintenance from September2000 for a duration of several weeks.To avoid school holidays,the‘before’surveys were scheduled for June and early September.It was decided to focus on investigating a significant southwest-to-northeast movement of traffic,the river providing a natural barrier which suggested surveying the six river crossing points(C,J,H,K,L,M in Fig.1).In total,13locations were identified for survey,in an attempt to capture traffic on both sides of the river as well as a crossing.2.2.Fishergate study(Fig.2)The partial closure(capacity reduction)of the street known as Fishergate,again part of York’s inner ring road,was scheduled for July2001to allow repairs to a collapsed sewer.Survey locations were chosen in order to intercept clockwiseFig.1.Intervention and survey locations for Lendal Bridge study.around the inner ring road,this being the direction of the partial closure.A particular aim wasFulford Road(site E in Fig.2),the main radial affected,with F and K monitoring local diversion I,J to capture wider-area diversion.studies,the plan was to survey the selected locations in the morning peak over a period of approximately covering the three periods before,during and after the intervention,with the days selected so holidays or special events.Fig.2.Intervention and survey locations for Fishergate study.In the Lendal Bridge study,while the‘before’surveys proceeded as planned,the bridge’s actualfirst day of closure on Sep-tember11th2000also 
marked the beginning of the UK fuel protests(BBC,2000a;Lyons and Chaterjee,2002).Trafficflows were considerably affected by the scarcity of fuel,with congestion extremely low in thefirst week of closure,to the extent that any changes could not be attributed to the bridge closure;neither had our design anticipated how to survey the impacts of the fuel shortages.We thus re-arranged our surveys to monitor more closely the planned re-opening of the bridge.Unfor-tunately these surveys were hampered by a second unanticipated event,namely the wettest autumn in the UK for270years and the highest level offlooding in York since records began(BBC,2000b).Theflooding closed much of the centre of York to road traffic,including our study area,as the roads were impassable,and therefore we abandoned the planned‘after’surveys. As a result of these events,the useable data we had(not affected by the fuel protests orflooding)consisted offive‘before’days and one‘during’day.In the Fishergate study,fortunately no extreme events occurred,allowing six‘before’and seven‘during’days to be sur-veyed,together with one additional day in the‘during’period when the works were temporarily removed.However,the works over-ran into the long summer school holidays,when it is well-known that there is a substantial seasonal effect of much lowerflows and congestion levels.We did not believe it possible to meaningfully isolate the impact of the link fully re-opening while controlling for such an effect,and so our plans for‘after re-opening’surveys were abandoned.3.Estimation of vehicle movements and travel timesThe data resulting from the surveys described in Section2is in the form of(for each day and each study)a set of time-stamped,partial licence plates,observed at a number of locations across the network.Since the data include only partial plates,they cannot simply be matched across observation points to yield reliable estimates of vehicle movements,since there is ambiguity in whether the same partial plate observed at different locations was truly caused by the same vehicle. 
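A rough feel for how quickly spurious matches arise can be obtained from a birthday-problem calculation: if only a few characters of each plate are recorded, the probability that at least two distinct vehicles observed in a period share the same partial plate grows rapidly with flow. The plate format (three alphanumeric characters) and the vehicle counts below are hypothetical and chosen only for illustration.

```python
# Back-of-the-envelope estimate of how often distinct vehicles share a partial plate.
import math

def collision_probability(n_vehicles, n_partial_codes):
    """P(at least two of n_vehicles share a partial plate), assuming the
    partial codes are uniformly distributed (birthday-problem calculation)."""
    log_p_no_collision = sum(
        math.log1p(-i / n_partial_codes) for i in range(n_vehicles)
    )
    return 1.0 - math.exp(log_p_no_collision)

codes = 36 ** 3   # e.g. recording three trailing alphanumeric characters
for vehicles_per_period in (50, 200, 1000):
    p = collision_probability(vehicles_per_period, codes)
    print(f"{vehicles_per_period:5d} vehicles, {codes} possible partial plates: "
          f"P(shared partial plate) = {p:.3f}")
```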
Indeed, since the observed system is 'open' (in the sense that not all points of entry, exit, generation and attraction are monitored), the question is not just which of several potential matches to accept, but also whether there is any match at all. That is to say, an apparent match between data at two observation points could be caused by two separate vehicles that passed no other observation point. The first stage of analysis therefore applied a series of specially-designed statistical techniques to reconstruct the vehicle movements and point-to-point travel time distributions from the observed data, allowing for all such ambiguities in the data. Although the detailed derivations of each method are not given here, since they may be found in the references provided, it is necessary to understand some of the characteristics of each method in order to interpret the results subsequently provided. Furthermore, since some of the basic techniques required modification relative to the published descriptions, it is also necessary to understand some of the theoretical basis in order to explain these adaptations.

3.1. Graphical method for estimating point-to-point travel time distributions

The preliminary technique applied to each data set was the graphical method described in Watling and Maher (1988). This method is derived for analysing partial registration plate data for unidirectional movement between a pair of observation stations (referred to as an 'origin' and a 'destination'). Thus, in the data studied here, it must be independently applied to given pairs of observation stations, without regard for the interdependencies between observation station pairs. On the other hand, it makes no assumption that the system is 'closed'; there may be vehicles that pass the origin but do not pass the destination, and vice versa.

While limited to considering only two-point surveys, the attraction of the graphical technique is that it is a non-parametric method, with no assumptions made about the arrival time distributions at the observation points (they may in particular be non-uniform), and no assumptions made about the journey time probability density. It is therefore very suitable as a first means of investigative analysis for such data. The method begins by forming all pairs of possible matches in the data, of which some will be genuine matches (the pair of observations were due to a single vehicle) and the remainder spurious matches. Thus, for example, if there are three origin observations and two destination observations of a particular partial registration number, then six possible matches may be formed, of which clearly no more than two can be genuine (and possibly only one or zero are genuine). A scatter plot may then be drawn, for each possible match, of the observation time at the origin versus that at the destination. The characteristic pattern of such a plot is as shown in Fig. 4a, with a dense 'line' of points (which will primarily be the genuine matches) superimposed upon a scatter of points over the whole region (which will primarily be the spurious matches). If we were to assume uniform arrival rates at the observation stations, then the spurious matches would be uniformly distributed over this plot; however, we shall avoid making such a restrictive assumption. The method then makes a coarse estimate of the total number of genuine matches across the whole of this plot. As part of this analysis we assume knowledge of, for any randomly selected vehicle, the probabilities

$h_k = \Pr(\text{vehicle is of the } k\text{th type of partial registration plate}), \quad k = 1, 2, \ldots, m, \qquad \text{where } \sum_{k=1}^{m} h_k = 1.$
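The sketch below illustrates only the first step of the graphical method described above: enumerating every plausible (origin, destination) pairing of observations that share a partial plate and recording the implied travel times, from which the scatter of genuine and spurious matches can be plotted. The data structures, example observations and the maximum-travel-time cut-off are assumptions made for illustration, not the authors' implementation.

```python
# Form all candidate matches between time-stamped partial-plate observations.
from collections import defaultdict

# (partial_plate, timestamp in seconds) observed at the origin and at the destination
origin_obs = [("A12", 100.0), ("A12", 340.0), ("K93", 130.0), ("Z07", 150.0)]
dest_obs = [("A12", 460.0), ("K93", 255.0), ("A12", 210.0), ("Q55", 300.0)]

def candidate_matches(origin, destination, max_travel_time=600.0):
    """All plausible pairings of same-plate observations: destination time after
    origin time and within an assumed maximum travel time."""
    by_plate = defaultdict(list)
    for plate, t in origin:
        by_plate[plate].append(t)
    pairs = []
    for plate, t_dest in destination:
        for t_orig in by_plate.get(plate, []):
            travel = t_dest - t_orig
            if 0.0 < travel <= max_travel_time:
                pairs.append((plate, t_orig, t_dest, travel))
    return pairs

for plate, t_o, t_d, travel in candidate_matches(origin_obs, dest_obs):
    print(f"plate {plate}: origin {t_o:.0f} s -> destination {t_d:.0f} s, travel {travel:.0f} s")
# A scatter plot of t_orig against t_dest for these pairs would show the dense
# 'line' of genuine matches superimposed on the scatter of spurious ones.
```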
MODELING THE IMPACT OF DELAY SPIKES ON TCP PERFORMANCE ON A LOW BANDWIDTH GPRS WIRELESS LINKPirkko Kuusela and Pasi LassilaNetworking Laboratory,Helsinki University of Technology,P.O.Box3000,FIN02015HUT,FinlandEmail:Pirkko.Kuusela,ssila@hut.fiABSTRACTWe model the goodput of a single TCP source on a low bandwidth lossless GPRS link experiencing sudden in-creases in RTT,i.e.,delay spikes.Such spikes trigger spurious timeouts that reduce the TCP goodput.Renewal reward theory is used to derive a straightforward expression for TCP goodput that depends on the bandwidth limitation,RTT and delay spike properties through average spike duration and distribution of the spike intervals. The basic model is for i.i.d.spike intervals,and correlated spike intervals are modelled by using a modulating background Markov chain.Also a simple deterministic p-formula is given.Validation by ns2simulations shows excellent agreement and good accuracy even when modeling assumptions are mildly violated,e.g.,regarding our lossless assumption.1IntroductionTCP is the most widely used transport protocol for re-liable data transmission over the Internet.In addition to offering a reliable packet delivery service,TCP also includes functionality for controlling the packet send-ing rate to avoid congesting the network.However,this functionality has been originally designed based on the characteristics of thefixed network.There the basic op-erational principle of TCP rate control is roughly that TCP increases its rate as long it is receiving acknowl-edgements correctly from the receiver,and as soon as a packet loss is detected,it is interpreted to indicate that there is congestion in the network and hence the send-ing rate should be reduced considerably.As a result of a loss the sending rate is either halved or it can even be reduced back to an initial small value,as happens in the case when the loss is detected via a coarse grained time-out timer.To set the value of the timeout timer TCP relies on an adaptive estimation algorithm of the RTT (Round Trip Time).The underlying assumption in the estimation is that changes in the RTT due to random fluctuations of the traffic in the Internet can be tracked but sudden unexpected considerable increases are inter-preted as a sign of congestion and hence the timer will rightly expire,triggering TCP’s Go-Back-N retransmis-sion scheme,and the sending rate is initialized according to the configured initial window size.When considering the operation of TCP over a wire-less link,the delays in the observed RTTs of TCP can be highly variable.Moreover,the pattern of variability can be such that the measured RTTs contain very sharp spikes which can be even an order of magnitude larger than the typical measured RTT[6].The RTT estima-tion algorithm of TCP can not track such sudden con-siderable increases in the measured RTTs.These sud-den increases in RTTs are called delay spikes and po-tential reasons for their occurrence can be for example [8]:handovers typically result in delay spikes of several seconds occurring at a time scale of minutes in urban en-vironment;link layer error recovery(reliable link layer protocols are usually used in modern cellular systems) may also cause delay spikes,especially when the radio channel conditions change abruptly due to the mobile’s movement,e.g.,when entering a tunnel;scheduling of radio resources between circuit switched calls and data (as in GPRS)can cause delay spikes.These delay spikes trigger so called spurious timeouts and result in unnec-essary 
retransmissions and congestion control actions on the part of TCP,as the packets are not lost,they are sim-ply delayed.To enable TCP to handle RTT spikes,some experimental algorithms have been proposed,namely the Eifel algorithm[8],[10],or F-RTO algorithm[13],but even they can not completely remove the effect of delay spikes,and hence evaluating their impact is necessary. In this paper,we provide a simple modeling frame-work to study the impact of such spurious timeout events on the long run(steady state)goodput of TCP,i.e.,on the amount of successfully sent traffic per time unit.Addi-tionally,we consider a low bandwidth link(typical band-width of a GPRS link is40kbps[8])such that a realiza-tion of the evolution of TCP sending rate is determined by the following:An RTT spike creates a period of si-lence during which there is a(spurious)timeout.Then the source starts increasing its sending rate exponentially (slow start)until it reaches a window size corresponding to the maximum sending rate of the physical link given the RTT.The source keeps sending at the maximum rate until there is an RTT spike again and the process re-peats.In modeling,thefirst observation is that thesespurious timeouts are caused by random events occurring in the mobile’s environment,which are also independent of TCP’s packet sending characteristics.Therefore,we model these events as an outside disturbance with var-ious levels of generality regarding the inter-occurrence time of the RTT spikes which trigger spurious timeouts. First the case of i.i.d.times between RTT spikes is con-sidered in the framework of renewal reward processes. Then a generalization by using Markov renewal reward theory is given,where the distributions between RTT spikes can be modulated by a background discrete time Markov chain(thus producing correlated times between the RTT spikes).Additionally,a simple explicit formula for the goodput is derived which assumes that there is a per packet probability that an RTT spike occurs,i.e., no explicit assumptions are made on the distributions be-tween RTT spikes.The models are validated through ns2 simulations.Also,the impact of different distributions on the performance is investigated.The paper is organized as follows.The derivation of the models is in Section2.Model validation and other numerical results are given in Section3,and Section4 contains our conclusions and suggestions for future re-search.1.1Related workA lot of research has been done in modeling the steady state throughput of TCP infixed networks with a given packet loss process.The simple‘square-root-p’formula was derived,e.g.,in[11].Using more complex assump-tions on the nature of the packet loss process more re-fined formulas have also been derived,see e.g.,[12]and [4].Notably,in[4]the loss process can be an arbitrary point process and the throughput of the congestion con-trol phase can be shown to depend on the correlations such that any positive correlations between loss events actually improves the throughput(deterministic loss pro-cess being the worst case).In TCP models for wireless channels,the per packet loss process is typically presented by a two-state Markov process,representing the wireless channel’s alteration between good/bad states,resulting in correlated packet losses,see,e.g.,[2],[9],[15].However,these models do not include spurious timeouts(RTT delay spikes). 
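For readers unfamiliar with the two-state Markov ('good'/'bad' channel) loss processes used in the wireless TCP models cited above, the following sketch generates such a correlated per-packet loss trace and shows that losses cluster in bursts. The transition and loss probabilities are hypothetical and do not correspond to any specific model in the cited papers.

```python
# Two-state (good/bad) Markov channel producing correlated per-packet losses.
import random

def markov_loss_trace(n_packets, p_good_to_bad=0.02, p_bad_to_good=0.25,
                      loss_good=0.001, loss_bad=0.3, seed=7):
    """Per-packet loss indicators (True = lost) from a good/bad Markov channel."""
    rng = random.Random(seed)
    state_bad, trace = False, []
    for _ in range(n_packets):
        p_loss = loss_bad if state_bad else loss_good
        trace.append(rng.random() < p_loss)
        if state_bad:
            state_bad = rng.random() >= p_bad_to_good   # stay bad unless it recovers
        else:
            state_bad = rng.random() < p_good_to_bad    # occasionally enter bad state
    return trace

trace = markov_loss_trace(100_000)
overall = sum(trace) / len(trace)
after_loss = [nxt for prev, nxt in zip(trace, trace[1:]) if prev]
print(f"overall loss rate:              {overall:.4f}")
print(f"P(loss | previous packet lost): {sum(after_loss) / len(after_loss):.4f}")
```

The conditional loss probability after a loss is far higher than the overall loss rate, which is the correlation effect these channel models are meant to capture.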
Work related to modeling spurious timeouts has been done in [5], where a model for TCP Reno experiencing spurious timeouts is given which very closely follows the operation of the actual TCP Reno protocol. The model is based on an extension of the well-known model in [12] and considers a large bandwidth delay product with no sending rate limitations (we include the sending rate limit). The model takes packet losses into account, but the effect of delay spikes is modelled only through the mean spike length and the mean time between spikes, whereas our model allows us to distinguish between different distributions and non-i.i.d. processes. In [3] the measured RTT process is modelled with a semi-Markov process, and the behavior of TCP is deduced from the dynamics of the model, including the occurrence of spurious timeouts and spurious duplicate ACKs (packets can also be reordered due to delay spikes caused by handoffs). That model does not consider the impact of congestion losses.

In comparison to [5] and [3], by using a more abstract representation of how TCP's goodput accumulates (and not attempting to model the protocol itself), we obtain a model that is straightforward to utilize and which allows quite general RTT spike processes.

2 Stochastic TCP model

We model the TCP goodput of one persistent source on a low bandwidth lossless wireless link. Typically, TCP models are discrete and consider the behavior per packet or per RTT round. Our abstraction is a continuous-time model describing the actual TCP sending rate that produces goodput.

In this scenario, the goodput of TCP is determined by an external stochastic process that generates the RTT spikes that lead to TCP timeouts. The intervals between the spikes are random, and the duration of a spike is modelled through its mean duration. The evolution of TCP's (successful) sending rate is described by (a) a silence, lasting for the spike duration, due to the RTT spike, (b) an exponential increase in the sending rate (the slow start phase), and (c) reaching an upper bound on the sending rate due to the low bandwidth of the link. Note that the congestion window can actually increase to a quite large value during the time the source is limited by the physical link rate. There can even be 'congestion losses' and corresponding window halvings, but these are ignored in our modeling since we assume that it is the physical link rate that limits the packet sending rate.

Remark 1: During the RTT spike the TCP protocol is likely to perform an exponential back-off. If the RTT spike duration is short (less than 10 seconds), it is convenient to model the time when no packets get through as the duration of the RTT spike. In this case the times when TCP attempts to send packets are close to the time when the link becomes capable of delivering packets again. If the RTT spike duration is long (15 s or more), the TCP exponential back-off determines when sending becomes possible.

2.1 Renewal Model

To calculate the long-term average goodput of the connection we put our model into the framework of renewal models [14]. The instants of the successive RTT spikes are taken as the renewal times, and the length of a renewal period is the interval between consecutive spikes. The time derivative of TCP's actual sending rate during a renewal period is modelled according to the phases described earlier; equation (1) implies that the rate doubles for every RTT (slow start). The notation introduced above is illustrated in Figure 1. The solution of (1) gives the sending rate profile over a cycle. Consider the number of packets sent during a renewal cycle as the reward; it is given by the three-case piecewise expression (3), according to whether the cycle ends during the silence, during the slow start phase, or after the maximum rate has been reached.

Next we briefly summarize the renewal reward theorem and state the necessary assumptions on the lengths of the renewal cycles, i.e., the time intervals between RTT spikes. We assume that the cycle lengths are i.i.d. random variables with finite mean and a probability density function, and we let the associated counting process be the renewal process of this sequence; the total reward up to time t is the sum of the cycle rewards accumulated by that time. The renewal reward theorem states that the time-averaged total reward converges to the cycle-averaged reward, i.e., to the common expected cycle reward divided by the expected cycle length. Hence, from now on, we omit the cycle index from the notation. Thus, to get the long-term goodput of the TCP source it suffices to integrate (3) over the distribution of the intervals between delay spikes; the TCP goodput is then the expected cycle reward divided by the expected cycle length.

The parameters in the goodput formula are the RTT, the bandwidth limitation, the RTT spike duration and the distribution of the spike intervals. The expression for the expected reward simplifies considerably if we assume that the intervals between RTT spikes follow the exponential distribution, in which case the goodput takes a simple closed form.

Figure 1: Notation for the renewal model.
Figure 2: Notation for the Markov renewal model.

If the RTT spike intervals have one probability density function in state 0 and another in state 1 of a modulating background Markov chain, the goodput formula generalizes accordingly (the Markov renewal model). For the simple P-model of Section 2.3, the derivation counts the packets sent during the slow start ramp and, if the interval between spikes is long enough, the packets sent at the maximum rate in the remaining time; adding the packets sent in these two phases and taking into account the duration of the RTT spike, SD, in the total cycle time yields the simple TCP goodput formula. (Patches are available from http://www.cs.helsinki.fi/u/gurtov/ns/.)

3 Model validation and numerical results

Here, 'P-model' refers to the simple goodput formula of Section 2.3 and 'Renewal model' refers to the renewal model of Section 2.1. In the models, we also take the RTT here to include the packet transmission time (in addition to the propagation delay), i.e., RTT = link propagation delay + packet transmission time.
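The goodput expressions above are stated for general spike-interval distributions; as an illustration, the sketch below estimates the same renewal-reward quantity by simulation under the stated modelling assumptions: i.i.d. (here exponential) intervals between spikes, a silence of fixed duration after each spike, and then a slow-start ramp that doubles the rate every RTT, starting from an assumed initial rate of one packet per RTT and capped at the link rate. The RTT and maximum-rate values follow the networking parameters quoted in Section 3.2 (40 kbps, 576-byte packets, 0.6 s propagation delay); the initial-rate assumption, the spike duration and the discretisation step are illustrative choices, not the paper's.

```python
# Monte Carlo renewal-reward estimate of TCP goodput under delay spikes.
import numpy as np

def cycle_reward(cycle_len, spike_dur, rtt, max_rate, r0, dt=0.05):
    """Packets sent in one renewal cycle: silence of length spike_dur, then a
    slow-start ramp (rate doubling every RTT from r0) capped at max_rate."""
    send_time = cycle_len - spike_dur
    if send_time <= 0.0:
        return 0.0
    t = np.arange(0.0, send_time, dt)
    rate = np.minimum(max_rate, r0 * np.exp2(np.minimum(t / rtt, 60.0)))
    return float(np.sum(rate) * dt)

def goodput(mean_interval, spike_dur=7.5, rtt=0.72, max_rate=8.7,
            n_cycles=5000, seed=3):
    """Renewal-reward goodput estimate (packets/s) for exponential spike intervals."""
    rng = np.random.default_rng(seed)
    r0 = 1.0 / rtt                      # assumed initial rate: one packet per RTT
    cycles = rng.exponential(mean_interval, n_cycles)
    rewards = [cycle_reward(c, spike_dur, rtt, max_rate, r0) for c in cycles]
    return sum(rewards) / cycles.sum()  # total reward / total time

for mean_interval in (30.0, 60.0, 120.0):
    g = goodput(mean_interval)
    print(f"mean time between spikes {mean_interval:5.0f} s -> goodput "
          f"{g:.2f} pkts/s (~{g * 576 * 8 / 1000:.1f} kbps with 576-byte packets)")
```

Replacing the exponential sampler with another interval distribution of the same mean gives a direct way to explore the distribution sensitivity discussed later in Section 3.2.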
First the case of long RTT spikes is considered;in the simulations the spike durations were uniformly dis-tributed in the range s,i.e.,s in our models.The goodput is evaluated as a function of the mean time between RTT spikes,.In the simula-tions for each,the spikes are uniformly distributed in the range s.In the P-model the value for is obtained as in Remark2.As seen from the results in Figure3,for the small bandwidth delay product case(kbps,leftfigure),correspond-ing to our assumed scenario of a small bandwidth GPRS link,the agreement between the models(both P-model and Renewal model)and with ns2simulations where no congestion losses occur is excellent.Our model does not explicitly include the effect of congestion losses,but to evaluate the impact of conges-tion losses on the accuracy,ns2simulations with a loss module generating packet losses according to a given probability have been performed.The results are shown in thefigures with dashed lines.In the low bandwidth case,we can observe that the models naturally overesti-mate the goodput and that accuracy is still quite good up to moderate loss rates(1%loss),but for high loss rates the effect becomes more clear.For the higher bandwidth delay product case(kbps,rightfigure)we can still see good agreement,although our models seem to overestimate the actual goodput.We assume that the overestimation in the model is due to our assumed ex-ponential increase up to,while,in the simulations, TCP changes the window increase into a linear one af-ter reaching the slow start threshold.Also,in this con-text the impact of congestion losses naturally becomes more pronounced.Note that the P-model and the Re-newal model yield results which are very close to each other(almost indistinguishable in thefigures).This is to be expected as the spike interval and length distributions are uniform distributions.The impact of different distri-butions is evaluated later in this section.In the above simulations the spike process produces rather long spikes(with a mean of7.5s),whereas the typical RTT equals roughly300ms.Then it can be an-ticipated that the spikes are indeed dramatic enough such that TCP’s RTT estimator mechanism can not learn the properties of the spike process.Hence,our assumption that every spike results in a timeout and slow start is mostly valid.However,if the duration of the spikes is shortened,this may not necessarily be the case anymore. 
To check this, the same simulations as earlier (with low and high bandwidth) were performed, with the spike durations generated from uniform distributions over two shorter ranges (and correspondingly smaller means). The results are shown in Figure 4 and Figure 5.

Figure 3: Goodput of TCP in low (left) and higher (right) bandwidth delay product scenarios when delay spikes have long durations.
Figure 4: Goodput of TCP in low (left) and higher (right) bandwidth delay product scenarios when delay spikes have somewhat shorter durations.

By comparing Figure 4 and Figure 5 at low bandwidth (left graphs) we see that in both cases the model agrees well with the simulations. However, the simulated goodput is higher because TCP's retransmission timer adapts to the spike process, in particular when the spikes occur very frequently (see Figure 5). Hence, there are fewer timeouts and fewer slow starts (slow starts cause loss of goodput). Our model assumes a slow start after each spike, and the end result is that in circumstances where TCP's retransmission timer learns the properties of the spike process, our model underestimates the goodput. However, as seen from the numerical results, the influence of this is not substantial.

For higher bandwidth the more frequent delay spikes improve the accuracy compared to the simulated results, as the effects discussed earlier for Figure 3 (related to the impact of the slow start threshold) are now less prominent and delay-spike-related behavior dominates. However, as seen for the high bandwidth case in Figure 5, at some point the impact of our slow start assumption again becomes dominant and our model overestimates the goodput. The figures also show how congestion losses affect the accuracy, and the impact can be seen to be (more or less) as earlier.

3.2 Distribution Sensitivity

We have verified the accuracy of the renewal model and the simple P-model against ns2 simulation when the RTT spike distribution was uniform. Now we illustrate in Figure 6 the effect of the RTT spike distribution on the TCP goodput. The x-axis is the mean of the given distribution and the y-axis is the corresponding estimate of the TCP goodput in bps. In the Pareto distribution the shape parameter equals 1.5; the uniform distribution is supported on a finite interval around the given mean. The Markov modulating process is illustrated using exponential distributions with different rates in the two states, i.e., in state 0 RTT spikes are more frequent than in state 1. In the P-model the value of the per-packet spike probability p is obtained as in Remark 2. The RTT spike duration is SD, and the networking parameters are: maximum rate 40 kbps, packet size 576 bytes, propagation delay 0.6 s, which give rise to an RTT of roughly 715 ms and a maximum rate of roughly 8.7 packets/s.

For the TCP goodput the worst case of RTT spikes is the uniform distribution, which closely resembles the simple P-model (in which the delay spikes occur at constant intervals equal to the mean, as discussed in Section 2.3). Although the P-model is attractive in its simplicity, we notice that it may underestimate the goodput. This is because bursty spikes yield a better goodput than uniformly distributed spikes on a time interval of the same length. The benefit of bursts can be seen by comparing the Pareto and exponential distributions. Moreover, the exponential distribution and the Markov modulated exponential distribution have the same mean, but in the Markov modulated version spikes arrive frequently in state 0 and very seldom in state 1. Thus the modulated RTT process yields a higher goodput throughout, and extending the renewal model to include some correlation in the spike intervals is important.
4 Conclusions

On a wireless link, the observed RTTs of TCP can be highly variable, and the pattern of variability may contain sharp spikes, called delay spikes. These result in spurious timeouts that lower TCP performance. We have provided a simple modeling framework to study the impact of such RTT spikes on the goodput of a TCP source. In the modeling, we have considered a lossless low bandwidth link on which the rate of successfully sent TCP packets is described by a) a silence for the mean duration of the delay spike, b) an exponential increase (the slow start phase), and c) reaching the maximum rate due to the bandwidth limitation.

Spurious timeouts are triggered by random events occurring in the mobile's environment. Hence we modelled the delay spikes as an outside disturbance. First, the case of i.i.d. spike intervals was considered in the framework of renewal rewards, and we derived an expression for the TCP goodput. Correlation between RTT spike intervals was incorporated using an embedded discrete Markov chain modulation and Markov renewal reward results. The above models require the RTT spike interval distribution. However, a simple explicit formula for the goodput was also derived based on the per-packet probability of an RTT spike, thus requiring no explicit knowledge of the spike distribution.

Validation with ns2 showed that our models closely approximate the TCP goodput in the presence of RTT spikes. In the exact modeling scenario considered (low bandwidth link, no packet losses) the agreement with simulation is excellent, both for the renewal model and for the simple P-model. Moreover, the modeling assumptions can be violated mildly. With moderate packet losses the agreement is still good. For a higher bandwidth link our model slightly overestimates the lossless goodput (as we allow exponential increase up to the bandwidth limitation). By studying the sensitivity of the goodput to the RTT spike interval distribution, we observed that bursty spikes give a higher goodput. This is similar to the result in [4], which studied the congestion control component of TCP. The deterministic simple P-model is the worst case, thus giving a lower bound for the goodput.

Topics for future research include the incorporation of the distribution of the RTT spike lengths into the model. Also, the impact of congestion losses on the goodput should be included in the model.

Acknowledgement

We are grateful to Samuli Aalto (Ph.D.) for helpful discussions on semi-regenerative processes.

References

[1] S. Aalto, "Time-average properties of regenerative and semi-regenerative processes", unpublished manuscript, 1995.
[2] A. A. Abouzeid, S. Roy, and M. Azizoglu, "Stochastic modeling of TCP over lossy links", in Proceedings of INFOCOM 2000, Tel Aviv, Israel, March 2000.

[3] A. A. Abouzeid and S. Roy, "Stochastic modeling of TCP in networks with abrupt delay variations", ACM/Kluwer Wireless Networks, vol. 9, no. 5, September 2003.

[4] E. Altman, K. Avrachenkov, and C. Barakat, "A stochastic model of TCP/IP with stationary random losses", in Proceedings of ACM SIGCOMM 2000, Stockholm, Sweden, August 2000.

[5] S. Fu and M. Atiquzzaman, "Modeling TCP Reno with spurious timeouts in wireless mobile environment", in Proceedings of the 12th International Conference on Computer Communications and Networks, Dallas, Texas, USA, October 2003.

[6] A. Gurtov, "Effect of delays on TCP performance", in Proceedings of IFIP Personal Wireless Communications '01, Lappeenranta, Finland, August 2001.

[7] A. Gurtov and R. Ludwig, "Responding to spurious timeouts in TCP", in Proceedings of INFOCOM 2003, San Francisco, California, USA, April 2003.

[8] A. Gurtov and S. Floyd, "Modeling wireless links for transport protocols", to appear in ACM Computer Communications Review, November 2003, available at http://www.cs.helsinki.fi/u/gurtov/papers/mtp.html

[9] A. Kumar, "Comparative performance analysis of versions of TCP in a local network with a lossy link", IEEE/ACM Transactions on Networking, vol. 6, no. 4, August 1998.

[10] R. Ludwig and R. H. Katz, "The Eifel algorithm: making TCP robust against spurious retransmissions", ACM Computer Communications Review, vol. 30, no. 1, January 2000.

[11] M. Mathis, J. Semke, J. Mahdavi, and T. Ott, "The macroscopic behavior of the TCP congestion avoidance algorithm", ACM Computer Communication Review, vol. 27, no. 3, July 1997.

[12] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP throughput: a simple model and its empirical validation", in Proceedings of ACM SIGCOMM '98, Vancouver, Canada, September 1998.

[13] P. Sarolahti, M. Kojo, and K. Raatikainen, "F-RTO: an enhanced recovery algorithm for TCP retransmission timeouts", ACM Computer Communications Review, vol. 33, no. 2, April 2003.

[14] R. Wolff, Stochastic Modeling and the Theory of Queues, Prentice-Hall, 1989.

[15] M. Zorzi, A. Chockalingam, and R. R. Rao, "Throughput analysis of TCP on channels with memory", IEEE Journal on Selected Areas in Communications, vol. 18, no. 7, July 2000.