Dynamic Density Forecasts for Multivariate Asset Returns
Optimal Energy Dispatch of Power Systems with High Integration of Variable Renewable Energies
Ruben Chaer (IEEE Senior Member). Gerente de Técnica y Despacho Nacional de Cargas - ADME. Prof. Agr., Instituto de Ingeniería Eléctrica - FING - UdelaR. Uruguay - September 2020. SimSEE.

●This tutorial is about the optimal operation of power systems with high variability in their resources.
●We will see the tools developed for modeling such variabilities and for the assimilation of their forecasts.
Each system has its peculiarities. The optimal solution is surely different for each country.

Characterization of the variability in Uruguay. A measure of the difficulty of handling a variable energy resource is the averaging time needed to obtain the expected value with a 10% error with 90% confidence: roughly 16 years for water inflows, and about 2 months for wind and solar.

[Figures: risk profiles of the system of 2011 and the system of 2018 (demand of 2011; hydro, biomass, petroleum; probability vs. GWh; dry years vs. rainy years). Source: IIE studies carried out in 2010 and 2018, respectively.]

Uruguay 2018: wind and solar installed capacity compared with daily demand. [Figure: daily minimum, average and maximum demand, wind + solar installed capacity, and the expected mean 0.4·Wind + 0.2·Solar, by month of 2018, in p.u. of the annual average of the demand.] The installed capacity of solar plus wind power exceeds the daily peak of the demand on 70% of the days of the year.

Network codes. All generators are under SCADA control by the operators.
Control modes:
●Active Power Control
●Reactive Power Control
●Voltage Control
Additional tools:
●Automatic Generation Control (AGC)
●Dynamic Line Rating (DLR)
●Remedial Action Scheme (RAS)

[Figure: any 72 hours of January 2020 of the dispatch, just as an example, including exports. Source: https://.uy]

In the end, it wasn't that difficult. The ten-minute variations of the Net-Demand are only double those of the True-Demand. The Uruguayan system then only needs an additional 25 MW of rotating reserve.

Mission of the System Operator: provide energy with acceptable reliability and quality at the minimum cost.
●Centralized dispatch.
●Only variable costs.
●Contracts are of paper (in the sense that they should not interfere in the dispatch).

SimSEE: platform for simulation of optimal operation of the energy dispatch. 100% OOP. Actors, Playroom, dynamic parameters, monitors. Free & open source. https://

Temporal linking of decisions. The use of stored resources (water) in the present produces an increase in future operating costs. The postponement of the use of a stored resource produces an increase in the costs of the present. The Optimal Policy is the one that balances the cost impact between present and future.

The System, the Operator and the Operation Policy. X = state, r = non-controllable inputs, u = controllable inputs. Operation policy: u = P(X, r, t). Instantaneous operating cost: oc(X, r, u, t). Future cost:

    FC(X_{now}) = \left\langle \int_{now}^{+\infty} oc(X, r, u, t)\, dt \right\rangle

Time-step used for simulation. Big time step: implicit inertia, balance restrictions, overestimates the filtering capacity. Small time step: more state variables, and the need of availability models to represent the fail/repair inertia.

Bellman's curse of dimensionality. Evaluating the operation policy u = P(X, r, t) over a grid requires on the order of

    Dim(u) \times N_{X_1} \times N_{X_2} \times \ldots \times N_{X_{Dim(X)}} \times N_{r_1} \times N_{r_2} \times \ldots \times N_{r_{Dim(r)}} \times N_t

evaluations.

Time-Bands (Patamares) defined by the Monotonous Load Curve... Makes sense?

Only an example: 4 days of July 2018, Uruguay. [Figures: ten-minute generation by source (thermal, hydro, biomass, wind, solar), and Demand vs. Demand-VRE, in MW over the hours of the example. Source: ADME - SCADA ten-minute time series.]

Time-Bands (Patamares) defined by the Monotonous Load Curve... Makes sense? Better use Net-Demand instead of Demand. [Figure: monotonous load curve split into time bands 1, 2 and 3.]

Sources of randomness (stochastic processes):
●Demand and temperature
●Flows of water contributions
●Wind speed
●Solar radiation
●Price of interconnected markets
●Fuel prices
●Availability of fuels
●Availability of generating plants
●Availability of transport lines
Equipment availability (independent Booleans); El Niño, hydro, wind, solar, demand, temperature (correlated processes).

Representation of uncertainty. We are managing faster dynamics; therefore, the correlation between the different resources has greater importance. We need models of variability that correctly represent the correlation between resources and the correlation with the past. That is, we have to represent the inertia behind the stochastic variables.

Availability of generators, power transmission lines, etc. Each generator, transmission line, etc. adds a Boolean state variable (available/unavailable, with fail and repair transitions) to the system. If we do not represent the state of the availability when simulating with small time-steps, the consequences of the inertia of the fault-repair process are underestimated.

Wind, solar and demand correlations. [Figure: correlation structure of wind, solar and demand in the real world and in the Gaussian world.]

CEGH modeling. Gaussian world: a multi-variable linear system fed with independent Gaussian white noise,

    X_{k+1} = \sum_{h=0}^{n-1} A_h X_{k-h} + \sum_{h=0}^{m-1} B_h R_{k-h}

with a nonlinear transform (NLT) on each channel. The model:
●reproduces the amplitude histograms of the original processes;
●reproduces the spatial and temporal correlations in a Gaussian space;
●accepts state space reductions;
●accepts forecast information.

PRONOS 2016-2017 (https://.uy, https://.uy/svg/): weather forecasts, real-time status information and power plant models feed the load, wind and solar power forecasts (ADME_Data and ADME_WindSim). [Figures: operator without forecasts vs. operator with forecasts.]

Treatment of forecasts in CEGH modeling: Gaussianization. [Figure: P10/P50/P90 guides of a forecast in the real world and in the Gaussian space.]

Ease of integration of forecasts in CEGH modeling. The biases (S) change the 50% probability guide and the attenuation factors (F) regulate the noise injection, allowing to go from a deterministic forecast (F = 0, null noise) to the disappearance of the forecast (S = 0, F = 1, historical noise):

    X_{k+1} = \sum_{h=0}^{n-1} A_h X_{k-h} + S_k + F_k \sum_{h=0}^{m-1} B_h R_{k-h}

with biases S_k = [s_{1,k}, \ldots, s_{n,k}]' and attenuators F_k = diag(f_{1,k}, f_{2,k}, \ldots, f_{n,k}).

Treatment of forecasts in Gaussian space with state reduction in CEGH modeling: the policy is evaluated on the reduced state, u = PO_z(z, r, t), with z = M_R X_R and X = M_A(t) z + B_A(t) w.

[Figures: programming the energy dispatch without windpower forecasts (σ = 0), and with 72 h of windpower forecasts.]

Related talks:
●Demand stochastic model, Eng. Eliana Cornalino: https://youtu.be/SvidemGQdG4
●Hydrological modeling, Eng. Alejandra de Vera: https://youtu.be/DYvZLeotxEk
●Assimilation of forecast ensembles in CEGH models, considering the ENSO forecasts, Eng. Guillermo Flieller: https://youtu.be/glheJY9PPc4
●Modeling wind and solar power forecasts using Mixture Density Networks, Eng. Damián Vallejo: https://youtu.be/ZDUhUMfI-7o

VATES: continuous forecast of the next 168 hours of optimal operation. [Figures from ADME's web: expected generation by source; next 168 h system load forecast; next 168 h windpower forecast; next 168 h spot price forecast.]

Determination of exportable energy blocks, Eng. Felipe Palacio: https://youtu.be/F7h43i3sxU0

What we are working on now for the future:
●Combined cycle model, Eng. Vanina Camacho. Two state variables for each TG + boiler group: Timer_TGtoCC_ (purge 4 h, full load 2 h) and Boiler_temperature (startup type: warm, hot, cold). https://youtu.be/_EcEf4w8yn4
●Optimal dispatch with network representation in SimSEE (Flucar-SimSEE iteration), Eng. Ignacio Reyes: https://youtu.be/jHRlAaL5mq4
●SimSEE self-learning of a pseudo-optimal operation policy u = P(X, r, t) to combat Bellman's curse of dimensionality. ANII_FSE_1_2017_1_144926 (2018-2020), IIE-FING-UdelaR. Ruben Chaer, Ignacio Ramirez, Ximena Caporale, Pablo Soubes, Damián Vallejo, Felipe Palacio, Sergio Tagliafico. [Figure: feed-forward network with input, hidden layers (capas ocultas) and a MAX output stage.]
●Temporal parsimony, Eng. Ximena Caporale: https://youtu.be/4P4yriSpSBk
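The CEGH recursion above lends itself to a compact simulation. The R sketch below is a minimal illustration of the idea only, not the SimSEE implementation: the lag counts, the matrices A and B, and the lognormal target histogram for the NLT are all invented for the example.

    # Minimal sketch of the CEGH idea (hypothetical 2-series example): a linear
    # system in Gaussian space driven by independent white noise, plus a
    # nonlinear transform (NLT) that restores a target amplitude histogram.
    set.seed(1)
    nsteps <- 1000
    A <- matrix(c(0.8, 0.1,
                  0.1, 0.6), 2, 2, byrow = TRUE)   # assumed lag-1 coupling (n = 1)
    B <- diag(c(0.5, 0.7))                         # assumed noise loading (m = 1)
    X <- matrix(0, nsteps, 2)                      # states in Gaussian space
    for (k in 1:(nsteps - 1)) {
      R <- rnorm(2)                                # independent Gaussian white noise
      X[k + 1, ] <- A %*% X[k, ] + B %*% R         # X_{k+1} = A_0 X_k + B_0 R_k
    }
    # NLT: map the Gaussian series through the empirical quantile function of a
    # target histogram (a toy lognormal "wind" sample) so amplitudes match it.
    target <- rlnorm(5000, meanlog = 0, sdlog = 0.5)
    wind <- quantile(target, pnorm(X[, 1] / sd(X[, 1])))
    # Forecasts would enter by adding a bias S_k and scaling the noise by F_k,
    # as in the equations above.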
Turbulence Parameterization Schemes in the Atmospheric Boundary Layer

The atmospheric boundary layer (ABL) is a very important layer of the Earth's atmospheric system. It is in direct contact with the surface and is essential for the exchange of energy and mass. Turbulence is one of the key factors describing the motion and mixing of air in the ABL. Because of the nonlinear and multi-scale character of turbulence, accurately describing turbulent processes has long been a challenging problem. To simulate and predict turbulence in the ABL, scientists have proposed turbulence parameterization schemes.

1. The importance of turbulence in the ABL. In the atmosphere, turbulence is usually driven by large-scale motions, while small-scale turbulent motions mix and transport energy, mass and momentum. In the ABL, this mixing and transport has an important influence on atmospheric stability, on the distribution of temperature and humidity, and on the occurrence of weather phenomena.

2. The concept of turbulence parameterization. The purpose of turbulence parameterization is to simplify the complexity of turbulent processes by representing them as mathematical formulas or parameters that can be used in large-scale meteorological models. This makes a reasonable simulation and prediction of turbulence in the ABL possible and thereby improves the accuracy of weather forecasts.

3. The development of turbulence models. The development of turbulence models can be traced back to the 1950s; the earliest models were mainly based on experimental observations and empirical formulas. With the development of computer technology and the application of numerical simulation methods, turbulence models gradually evolved toward forms based on physical processes. Commonly used parameterization schemes today include the K model, the eddy-diffusivity model and multi-scale models.

4. Common turbulence parameterization schemes.

4.1 K model. The K model is the most widely used turbulence model. It is based on the turbulent kinetic energy equation and the turbulence energy equation; solving these two equations yields the parameters of isotropic turbulent diffusion. The K model assumes that the transport of large-scale turbulent kinetic energy dominates that of small-scale turbulent kinetic energy, and it is suitable for slowly varying turbulent processes.

4.2 Eddy-diffusivity model. The eddy-diffusivity model describes the transport properties of turbulent motion by introducing a turbulent diffusion coefficient. It assumes that the eddy diffusion coefficient is independent of the temporal and spatial scales, and it is suitable for turbulence of intermediate scales.

4.3 Multi-scale models. A multi-scale model is a parameterization scheme that combines the K model and the eddy-diffusivity model. It considers the transport properties of turbulence on different scales together, and it is suitable when turbulent motions of different scales coexist.

5. Applications of turbulence parameterization. Turbulence parameterization schemes are widely used in atmospheric models and weather forecasting.
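In practice, the eddy-diffusivity idea of Section 4.2 amounts to modeling a turbulent flux as down-gradient diffusion, flux = -K dc/dz. The following R sketch makes that arithmetic concrete; the potential-temperature profile and the K(z) profile are invented for the illustration and are not taken from any particular scheme.

    # Minimal sketch of first-order (K-theory) closure: the turbulent flux is
    # modeled as down-gradient diffusion, flux = -K * d(theta)/dz.
    # The profile and K(z) below are hypothetical, for illustration only.
    z     <- seq(10, 1000, by = 10)                 # heights in the ABL [m]
    theta <- 290 + 0.005 * z                        # assumed potential temperature [K]
    K     <- 500 * (z / 1000) * (1 - z / 1100)^2    # assumed eddy diffusivity [m^2/s]
    dtheta_dz <- c(diff(theta) / diff(z), NA)       # finite-difference gradient [K/m]
    flux  <- -K * dtheta_dz                         # parameterized heat flux [K m/s]
    head(cbind(z, K, flux))                         # downward flux in this stable profile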
Kernel Density Estimation and Multivariate Statistics in R

Kernel density estimation (KDE) in R. By default the density estimation function density() evaluates the density at 512 points, and some of those evaluation points may fall to the left of the original data. To look at the density beyond the original data, we therefore need to select, from those evaluation points, the ones that are larger than the original data:

    library(Ecdat)
    data(Earnings, package = "Ecdat")
    ind <- Earnings$age == "g1"
    x <- Earnings$y[ind] / 1000
    f <- density(x, n = 1000)
    froot <- density(sqrt(x), n = 1000)
    ind2 <- froot$x > sqrt(min(x))   # keep only the points beyond the original data
    plot(f$x, f$y, type = "l", ylim = c(0, .035), xlim = c(0, 100),
         ylab = "Density(y)", xlab = "y = income in $1000", lwd = 2)
    abline(h = 0)
    f2 <- .5 * froot$y / froot$x     # back-transform: f_Y(u^2) = f_U(u) / (2u)
    lines(froot$x[ind2]^2, f2[ind2], lty = 2, lwd = 2)
    legend(60, .03, c("KDE", "TKDE"), lty = c(1, 2), lwd = 2)

Obtaining residuals in R. If the model fit returns the residuals directly, this saves a step and we can use them as they are; otherwise we can use residuals() on the fitted model. The two give identical results:

    data(bmw, package = "evir")
    bmw <- as.vector(bmw)
    n <- length(bmw)
    fitAR1 <- arima(bmw, order = c(1, 0, 0))
    acf(fitAR1$residuals, lag.max = 20, main = "")
    acf(residuals(fitAR1), lag.max = 20, main = "")

A brief survey of R packages for multivariate statistics: hypothesis tests of all kinds, cluster analysis, data processing and statistical analysis, and bioinformatics (with SAS and MATLAB as alternatives). The CRAN task view "Multivariate Statistics" is at /web/views/Multivariate.html (reposted from /Rbbs/posts/list/223.page). Base R already implements many functions of traditional multivariate statistics, but many other CRAN packages provide deeper multivariate methods; the following is a brief review.
Energy Measurement Products

ADE Product Family Overview
The Analog Devices IC (ADE) family combines industry-leading data conversion technology with a fixed function digital signal processor (DSP) to perform the calculations essential to an electronic energy meter. The portfolio includes single-phase products and polyphase products for stepper motor and LCD display meter designs, with five critical measurements available: watt, V rms, I rms, VA, and VAR. With 175 million units deployed in the field, ADI has added to the portfolio the ADE71xx and ADE75xx product families that simplify energy meter design by providing all the features needed for an accurate, reliable, and fully functional energy meter with LCD display in a single IC.

ADE Product Family
• High accuracy exceeds IEC and ANSI standards
• Proprietary 16-bit ADCs and DSP provide high accuracy over large variations in current, environmental conditions, and time
• Reliability proven with over 175 million units deployed in the field
• On-chip reference with low temperature drift (20 ppm to 30 ppm typ)
• On-chip power supply monitoring
• On-chip creep protection (no-load threshold)
• Single 5 V supply
• Low power consumption
• Instantaneous active power output for calibration or interface to an MCU
• Miswiring or reverse power indication
• Tamper detection options

ADE Pulsed Output Products for Stepper Motor Display Meters
• Exceeds IEC 61036/62053-21, IEC 60687/62053-22, ANSI C12.16, and ANSI C12.20
• Active energy measurement with less than 0.1% error over a dynamic range of 500 to 1 at 25°C
• Power consumption as low as 15 mW (typ) for single-phase products and 30 mW (typ) for polyphase products
• Built-in current channel amplifier allows the use of low resistance, low cost shunts
• Active energy, low frequency outputs directly drive electromechanical counters
• Single 5 V supply

ADE Serial Interface Products for LCD Meters
• Exceeds IEC 61036/62053-21, IEC 60687/62053-23 (for multifunction products), ANSI C12.16, and ANSI C12.20
• Active energy measurement with less than 0.1% error over a dynamic range of 1000 to 1 at 25°C
• Active energy and sampled waveform data
• Multifunction products provide VAR, VA, V rms, and I rms
• User-programmable power quality monitoring features
• Digital calibration for power, phase, and offset
• Serial peripheral interface (SPI) with interrupt request pin (IRQ)
• Single 5 V supply

Integrated Products for LCD Meters
• Exceeds IEC 61036/62053-21, IEC 60687/62053-23 (for multifunction products), ANSI C12.16, and ANSI C12.20
• Single chip solution integrates ADE measurement core for watts, VAR, VA, V rms, and I rms
• 8052 MCU core with flash memory
• 104-segment LCD driver with contrast control for low/high temperature visibility
• Low power RTC (1.5 μA typ) with digital compensation for temperature performance
• Power fail and battery management with no external components
• Reference with low temperature drift (5 ppm/°C typ)
• Noninvasive in-circuit emulation

Value of ADE Products

1. Proven Technology
Analog Devices is the market leader in sales of energy metering ICs with over 175 million meters deployed worldwide with ADE products.
• Quality: Strict quality and test standards applied to ADE products throughout design and manufacturing stages ensure low meter production failure rate and uniform part-to-part characteristics.
• Reliability: Accelerated life expectancy tests on ADE products, representing more than 60 years of field usage, reduce probability of meter failure due to IC failure.
• Performance: Proprietary Σ-Δ ADCs provide excellent performance with an error of less than 0.1% over an extended current dynamic range.

2. Ease of Design
Analog Devices ADE solutions aim to simplify energy meter design, reduce system cost, and reduce time to market with:
• Integration of ADCs and fixed function DSP on a single chip leading to a single IC energy meter
• Integration of ADCs and fixed function DSP on a single chip reduces processing requirements, enabling the use of a lower cost MCU
• Embedded essential energy calculations to ensure harmonic content is included (up to 233rd harmonics for watt-hour measurement)
• Direct and flexible sensor interface without external gain amplifiers
• Unparalleled design support including detailed data sheets, reference designs, application notes, evaluation tools, and technical support
• Integrated MCU core, and all necessary peripherals, with field proven metering front end
• Greater system control with minimized current consumption in battery mode

3. Innovation and Choice
Analog Devices is committed to continuing its investment in the ADE product family and to enabling very competitive system costs while maintaining a high level of innovation.
• 16 patents granted or pending on innovative energy measurement technology
• Many energy measurement products currently available for single-phase and polyphase energy meters with more to come
4. Quality
• Samples from production lots constantly drawn for rigorous qualifications
• ADI's product analysis group continually addresses customer concerns and feedback on the quality of our products
• Constant monitoring of electrical ppm failure rate of finished products
• Electrical ppm failure rate is largely comprised of marginal parametric rejects that are fully functional but are most likely to experience failure in the field
• ADI products have a consistently low ppm failure rate that reflects the stability and high quality of the manufacturing process

[Figure: qualification flow, from fabrication and assembly qualification through the Analog Devices standard qualifications (bond strength; burn-in sequence; constant acceleration; hermeticity; high temperature storage; internal water vapor test; lead fatigue; lid torque; marking permanency; mechanical shock; moisture sensitivity characterization; moisture endurance sequence and autoclave; resistance to soldering heat; solderability; thermal impedance; vibration, variable frequency; X-ray inspection), supplemental qualifications (time dependent dielectric breakdown; electromigration; hot carrier injection; thermal shock sequence; temperature cycling sequence; electrical endurance, HTOL and LTOL; early life failure characterization; high temperature storage; moisture endurance test; die shear test; electrical static discharge (ESD); latch up), production quality control and application-specific/end-product qualification, with customer feedback.]

Reliability
Analog Devices conducted a high temperature operating life test (HTOL) to simulate aging of ADE products in the field.
Method: ICs subjected to 150°C for 3000 hours.
• With acceleration factor of 179×, the life expectancy correlates to 60 years at operating temperature of 60°C.
• Four main parameters monitored: reference voltage, gain error, current, and voltage channel offset.
Results: parameter distribution over time shows:
• Negligible parameter distribution shifts.
• Parameters maintain data sheet specifications.
• Zero failures.
Conclusions from HTOL test:
• If other components in the electronic meter have the same life expectancy, meter replacement is only needed every 60 years.
• Proven stability and accuracy of digital energy measurement.
Meter manufacturers must carefully select components to ensure that the overall reliability of the electronic energy meter is maximized.

[Figure: reliability lifetime predictions. Acceleration factor and lifetime (years) versus operating temperature (°C), roughly 30°C to 85°C.]

Performance
The unsurpassed accuracy of power calculation over a very wide dynamic range, harmonics, and stability over time are the primary reasons why ADE ICs are preferred by many meter manufacturers around the world. [Figure: typical performance for ADE ICs. Measurement error (%) versus current (amps) over a dynamic range of 1000:1 and a temperature range of -40°C to +85°C.] Even at a low power factor (PF = 0.5), the ICs maintain their high accuracy.

[Figure: ADE71xx/ADE75xx block diagram. Energy measurement DSP, internal reference (low ppm/°C), single-cycle 8052 MCU, UART, SPI/I2C, watchdog timer, RAM, power supply control and monitoring, LDO, POR, internal reset, ADCs, temperature sensor, charge pump, 104-segment LCD driver, PLL, internal clock, RTC, flash memory.]

Selection Guide

ADE71xx/ADE75xx: Energy Measurement Computing Engine
The ADE71xx/ADE75xx family builds on Analog Devices' 10 years of experience in energy measurement to provide the best analog-to-digital converters combined with the advanced digital signal processing required to build an accurate, robust, and fully featured energy meter with LCD display.
Energy Measurement Key Features:
• Exceeds IEC 61036/62053-21, IEC 60687/62053-22, IEC 61268/62053-23, ANSI C12.16, and ANSI C12.20
• 4-quadrant power and energy measurement for: active, reactive, and apparent
• Tampering protection
• 2 current inputs for line and neutral
• Tampering algorithms integrated
• Special energy accumulation modes
• Shunt, current transformer, and di/dt current sensor connectivity enabled
• 2 high precision pulse outputs for calibration
• Power line quality: SAG, period/frequency, peak, zero-crossing
• Large phase calibration (5° @ 50 Hz)
• Wide measurement frequency bandwidth (14 kHz) for harmonic measurement

ADE71xx/ADE75xx: LCD Driver
The ADE71xx/ADE75xx family has a unique LCD driver capable of maintaining maximum contrast on the LCD independently of the power supply level using charge-pump circuitry. This technology combined with the on-chip temperature measurement enables the lowest power operation and maximum readability of the energy meter LCD display.
LCD Driver Key Features:
• 104-segment LCD driver
• Adjustable LCD voltage (5 V max) independent of the power supply (2.7 V min)
• LCD freeze and hardware blink functions for low power operation in battery mode
• Low offset to minimize LCD fluid biasing

ADE71xx/ADE75xx: Real-Time Clock
The ADE71xx/ADE75xx family provides a low power RTC with nominal and temperature dependent crystal frequency compensation capabilities enabling low drift and high accuracy timekeeping. The RTC functionality is also maintained at low power supply (2.4 V) and over all power supply connections, extending the operating life of the energy meter in battery mode.
RTC Key Features:
• 1.5 μA current consumption
• Low voltage operation: 2.4 V
• 2 ppm/LSB digital frequency adjustment for calibration and temperature compensation
• Alarm and midnight interrupts

ADE71xx/ADE75xx: Battery Management
The ADE71xx/ADE75xx family has unique battery management features enabling low power consumption in battery mode and optimal power supply management when line voltage is lost.
Battery Management Key Features:
• No external circuitry for battery switching
• Power supply switching based on absolute level
• Early warning of power supply collapse with SAG and preregulated power supply monitoring
• Internal power supply always valid by hardware controlled switchover to battery
Key Features Maintained in Battery Mode:
• Real-time clock for timekeeping
• LCD display
• Temperature measurement
• Meter wake-up events such as RTC alarms, I/O, UART activities

ADE Development Tools
The ADE71xx and ADE75xx family of products share a common set of tools designed to minimize design time while improving the part understanding.
These tools are comprised of:
• Energy meter reference design
• 1-pin emulator with isolated USB interface
• Isolated USB to UART debugger interface
• Downloader software
• Evaluation software
• Integrated development environment from well-known vendor
• Firmware libraries for common and part specific functions

The energy meter reference design integrates the main functions of an LCD meter with IR port and RS-485 communication, battery backup, two current sensors, antitamper interface, and EEPROM interface while using the features of the ADE71xx and ADE75xx series such as battery management, antitamper detection, temperature compensated real-time clock, and LCD driver contrast. The reference design is accompanied by code libraries and an example of system integration code allowing easy evaluation and further development of the solution. Isolated USB communication boards for debugging and emulation provide a safe solution for code development when the meter is connected to the line. The ADE71xx and ADE75xx series can be used with integrated development environments (IDE) from open market vendors to simulate, compile, debug, and download assembly or C code. A free of charge IDE with unlimited assembly code capability and 4 kB limited C code capability is included in the evaluation kit. In addition, the part can be evaluated with a UART interface and a PC by using the versatile evaluation tools and downloader.

Single-Phase Energy Metering ICs with Integrated Oscillator
The AD71056, ADE7768, and ADE7769 are single-phase ICs that provide watt-hour information using pulse outputs that directly drive a stepper motor counter. The AD71056, ADE7768, and ADE7769 are pin-reduced versions of the ADE7755, with the enhancement of an on-chip, precision oscillator circuit that serves as the clock source for the IC. The direct interface to low resistance, low cost shunt resistors also helps to lower the cost of a meter built with AD71056, ADE7768, or ADE7769. These products are pin compatible. The AD71056 and ADE7769 accumulate bidirectional power information, and the ADE7768 accumulates power only in the positive direction providing flexibility for various billing schemes. The ADE7769 indicates when the power is below the no-load threshold by holding the calibration frequency pin high. This is useful to indicate a tampering or miswired condition.

Single-Phase Energy Metering ICs with Antitamper Features
The ADE7761B detects two common tampering conditions: "fault" condition (when loads are grounded to earth instead of connected to neutral wire or when the meter is bypassed with external wires) and "missing neutral" condition (when the voltage input and return current are missing). The ADE7761B incorporates a novel tampering detection scheme which indicates the fault or missing neutral conditions and allows the meter to continue accurate billing by continuous monitoring of the phase and neutral (return) currents. A fault is indicated when these currents differ by more than 6.25%, and billing continues using the larger of the two currents. The missing neutral condition is detected when no voltage input is present, and billing is continued based on the active current signal. The ADE7761B also includes a power-supply monitoring circuit which ensures that the voltage and current channels are matched, eliminating creep effects in the meter.

Polyphase Energy Metering ICs with Pulse Output
The ADE7752A, ADE7752B, and ADE7762 are polyphase ICs that provide watt-hour information using pulse outputs that can directly drive a stepper motor counter.
Compatible with a wide range of 3-phase grid configurations, including 3-wire and 4-wire delta and wye distributions, each of these products can be used for 3-phase commercial and industrial revenue meters or submeters, 3-phase motors or generators, industrial control, and utility automation. The ADE7762 and ADE7752B are optimized for 3-phase, 3-wire applications with no-load threshold and REVP indication based on the sum of the phases. To ensure that energy is billed properly under miswiring or tampering conditions, any of the ICs can be set to accumulate based on the sum of the absolute value in each phase. The active power accumulation is signed by default. The ADE7762 has four additional logic output pins. These four pins drive six LEDs for prioritized indication of phase dropout and phase sequence error as well as reverse polarity per phase. ADE7752A and ADE7752B are pin compatible with the legacy ADE7752 and have up to a 50% power consumption reduction from ADE7752. The four additional pins of ADE7762 are located at the top of the package so that the same PCB may be used with an ADE7752A-, ADE7752B-, or ADE7762-based meter.

Single-Phase Energy Metering ICs with Serial Interface
ADI has a range of product offerings for single-phase energy measurement solutions requiring serial interface. The ADE7756 measures active energy and allows digital calibration of phase, offset, and gain through a serial port interface. The ADE7759 has a built-in digital integrator for direct interface with a di/dt sensor such as a Rogowski coil and includes the capability to interface with low resistance shunts and traditional current transformers. The ADE7753 provides active, apparent, and reactive energy measurements, and incorporates a built-in digital integrator to allow direct interface with a Rogowski coil sensor in addition to a low resistance shunt or CT. The ADE7763 provides the same functionality as the ADE7753 but without reactive energy measurement. All four of these ADE single-phase energy metering ICs with SPI are pin compatible for ease of design migration.

Polyphase Energy Metering ICs with Serial Interface
ADI has a selection of product offerings for 3-phase energy measurement solutions requiring serial interface. The ADE7758 features second-order sigma-delta ADCs, and is designed for midrange 3-phase energy meters. For each phase, the chip measures active, reactive, and apparent energy, as well as rms voltage and rms current. These measurements are accessed via an SPI that allows a fully automated digital calibration. The ADE7758 interfaces with a variety of sensors, including current transformers and di/dt current sensors, such as Rogowski coils. Additionally, the ADE7758 provides a programmable frequency pulse output for both active and apparent or reactive power.

Analog Devices, Inc. Worldwide Headquarters: One Technology Way, P.O. Box 9106, Norwood, MA 02062-9106, U.S.A. Tel: 781.329.4700 (800.262.5643, U.S.A. only), Fax: 781.461.3113
Analog Devices, Inc. Europe Headquarters: Wilhelm-Wagenfeld-Str. 6, 80807 Munich, Germany. Tel: 49.89.76903.0, Fax: 49.89.76903.157
Analog Devices, Inc. Japan Headquarters: Analog Devices, KK, New Pier Takeshiba South Tower Building, 1-16-1 Kaigan, Minato-ku, Tokyo, 105-6891, Japan. Tel: 813.5402.8200, Fax: 813.5402.1064
Analog Devices, Inc. Southeast Asia Headquarters: Analog Devices, 22/F One Corporate Avenue, 222 Hu Bin Road, Shanghai, 200021, China. Tel: 86.21.5150.3000, Fax: 86.21.5150.3222
©2007 Analog Devices, Inc. All rights reserved. Trademarks and registered trademarks are the property of their respective owners. Printed in the U.S.A. BR04915-5-8/07(A) /energymeter
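For concreteness, the five quantities the brochure lists (watt, V rms, I rms, VA, and VAR) reduce to simple arithmetic on sampled waveforms. The R sketch below uses synthetic 50 Hz signals and generic textbook formulas; it is not ADI's on-chip algorithm, and reactive power is reported here as an unsigned magnitude.

    # Sketch of the five metering quantities, computed from sampled waveforms.
    # Generic signal arithmetic on synthetic data, not ADI's DSP implementation.
    fs <- 4000                                      # assumed sampling rate [Hz]
    t  <- seq(0, 0.2, by = 1/fs)                    # 0.2 s of 50 Hz mains
    v  <- 230 * sqrt(2) * sin(2*pi*50*t)            # voltage [V]
    i  <- 10  * sqrt(2) * sin(2*pi*50*t - pi/6)     # current lagging 30 deg [A]
    P    <- mean(v * i)                             # active power (watt)
    Vrms <- sqrt(mean(v^2))
    Irms <- sqrt(mean(i^2))
    VA   <- Vrms * Irms                             # apparent power
    VAR  <- sqrt(VA^2 - P^2)                        # reactive power, magnitude only
    c(P = P, Vrms = Vrms, Irms = Irms, VA = VA, VAR = VAR)
    # P / VA gives the power factor, cos(pi/6), about 0.866 for this example.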
ADAMS Simulation and Experimental Study of Wear Failure of Cylindrical Roller Bearings
Yao Jianxiong, Tan Jianping, Yang Bin
Central South University, Changsha, 410083

Abstract: The wear failure of cylindrical roller bearings can easily cause deterioration of system performance. Taking the bearing NJ204 as the research object, a model of the shaft, bearings and base as a whole was built, and bearings with different degrees of wear failure were simulated. The simulations reveal the failure process of the bearing and yield a characteristic value of the failure, which was then verified experimentally. Both the simulation and the experimental results show that the spectrum concentration of the base vibration signal can reflect the degree of bearing wear failure, and that its value increases with the bearing clearance.
Keywords: rolling bearing; wear failure; spectrum concentration; clearance

Received: 2013-01-05. Funding: Ministry of Education support project (625010339).

0. Introduction
Bearing fault diagnosis has long been a research focus in the bearing field, and many different effective diagnosis methods have appeared. Cong et al. [1] proposed a spectral kurtosis analysis method based on autoregressive prediction filtering; Jiang et al. [2] proposed combining empirical mode decomposition with fuzzy clustering; Yi et al. [3] introduced the cepstrum method. These studies mainly diagnose outer-race, inner-race and rolling-element faults, while wear failure of the bearing has received much less attention. Zhang [4] measured experimentally the oil film pressure distribution of bearings with radial clearances corresponding to different degrees of wear; Xu [5] derived the load distribution of roller bearings and obtained the service life under different clearances; Kabus et al. [6] simulated the contact impacts of cylindrical roller bearings with a high-precision quasi-static six-degree-of-freedom friction model and found that the larger the wear clearance of the bearing, the stronger the nonlinearity of the system. Most of this fault diagnosis work treats the bearing in isolation, without taking the shaft, bearing and base as a whole, and little of it addresses the transmission system; the clearances of the worn bearings studied all lie within the ranges allowed by the relevant standards, clearance series far beyond the standards have not been studied, and no characteristic value that represents bearing wear failure well has been established. In this paper the shaft-bearing-base test rig is taken as a whole as the research object, bearings with different degrees of wear failure are modeled, the failure process of the bearing is revealed, a characteristic value that effectively represents the wear failure of rolling bearings is obtained, and experimental verification is carried out.

1. System modeling
Figure 1 shows the ADAMS dynamic simulation model of the shaft-bearing-base rig studied in this paper. The rig is fixed to the ground and the base is fixed to the rig; the bearing outer ring and inner ring are fixed to the base and the shaft, respectively, and the rollers are in impact contact with the inner ring, the outer ring and the cage. To come closer to the real situation, the base is treated as flexible. The bearing is an NJ204 with outer diameter 47 mm, inner diameter 20 mm, roller diameter 6.5 mm and 11 rollers; the normal bearing clearance is 10 μm, and the clearance series of bearings with given amounts of wear is 80, 150, 200, 250 and 300 μm. The simulated time is 0.25 s, in 2500 steps.

[Figure 1: ADAMS simulation model of the shaft, bearings and base.]

1.1 Load-deformation compatibility equations of the rolling bearing
Since the bearing outer ring is constrained by the base and the inner ring by the journal, to simplify the computation the deformation is assumed to arise only from the contact deformation between the rollers and the inner and outer raceways, while the rings as a whole keep their original size and shape. Then, taking the bearing clearance h into account, the contact deformation between a roller at position angle ψ and the rings is

    \delta_\psi = (\delta_{max} + h/2)\cos\psi - h/2    (1)
    \delta_\psi = \delta_{i\psi} + \delta_{o\psi}    (2)

where δ_max is the maximum radial deformation, ψ is the angle between the roller position and the vertical radial force, and δ_iψ and δ_oψ are the contact deformations of the roller with the inner and outer rings. Because the crowning of the roller avoids stress concentration in the contact zone, the roller-ring contact cannot simply be treated as the classical Hertzian line contact; the roller is therefore divided axially into n slices [7-8], and the relation between the contact deformation δ and the load Q becomes

    \delta = 0.39\,(8/E')^{0.9}\, Q^{0.9} / l^{0.8},    (3)
    E' = 2\left[\frac{1-\upsilon_1^2}{E_1} + \frac{1-\upsilon_2^2}{E_2}\right]^{-1}

where E_1 and E_2 are the equivalent elastic moduli of the roller and the rings, Q is the normal contact load, l is the contact length, and υ_1 and υ_2 are the Poisson ratios of the roller and the rings, taken as 0.3. Analyzing the forces on the roller at position angle ψ, including the centrifugal force, gives the equilibrium equations

    \frac{1}{n}\sum_{b=1}^{n}(Q_{i\psi b} - Q_{o\psi b}) + \frac{1}{2} m_j \omega_d^2 D_m = 0    (4)

together with the corresponding moment equilibrium of the slice loads Q_{iψb}, Q_{oψb} about the roller center of mass, with slice positions x_k = -l/2 + (k - 0.5)\,l/n, where m_j is the mass of the j-th roller, x_k is the distance between the center of the k-th slice and the roller center of mass, ω_d is the spin speed of the roller, D_m is the roller diameter, and n is the number of slices. The contact loads Q_iψ and Q_oψ and the contact deformations δ_iψ and δ_oψ at each position angle ψ can be solved by the Newton-Raphson method, from which the contact stiffnesses between the roller and the rings follow:

    K_{i\psi} = dQ_{i\psi}/d\delta_{i\psi}, \quad K_{o\psi} = dQ_{o\psi}/d\delta_{o\psi}    (5)

1.2 Main parameters of the impact model and solution method
In ADAMS the impact force is defined as

    F = 0, \quad q > q_0
    F = k\,(q_0 - q)^e - c\,\dot{q}\cdot\mathrm{STEP}(q,\, q_0 - d,\, 1,\, q_0,\, 0), \quad q \le q_0    (6)

where q is the actual distance between the two objects, \dot{q} is the time derivative of q, q_0 is the trigger distance (a real constant) that determines whether the impact force acts, k is the stiffness coefficient, e is the elastic force exponent, c is the damping coefficient, and d is the penetration depth. From equation (5), when the bearing carries the maximum radial load the roller-outer ring contact stiffness is 1.8×10^7 N/mm and the roller-inner ring contact stiffness is 1.5×10^7 N/mm; the nonlinear exponent is 1.5, the maximum contact damping is 1.42×10^2 N·s/m, and the maximum penetration depth is 1 μm. The simulation uses the ABAM integrator, a non-stiff stable algorithm suited to high-frequency systems.

1.3 Accuracy of the simulation model
In the simulation the rollers are in impact contact with the rings and the cage. As the shaft rotates, the inner ring strikes the rollers and the rollers strike the cage, so under the impact forces the cage acquires an axial angular velocity, which is an important index for evaluating the smoothness of system operation. The theoretical cage speed is

    \omega_c = \frac{\omega_i}{2}\left(1 - \frac{D_W}{d_m}\right)    (7)

where ω_i is the shaft speed, D_W is the roller diameter, and d_m is the pitch diameter of the rolling bearing. As Figure 2 shows, in the steady state the theoretical cage angular velocity is 14.1 rad/s and the simulated average is 14.3 rad/s, an error of 1.3% between theory and simulation, so the simulation method can accurately capture the dynamics of the bearing.

[Figure 2: theoretical versus simulated cage angular velocity.]

2. Wear failure simulation and feature extraction
Six key positions are defined on the base of the model of Figure 1, and for each clearance the acceleration signals at these positions are extracted. Figure 3 shows the acceleration vibration signal at position 1 for clearance h = 10 μm. The time-domain signals encountered in machine fault diagnosis are real signals, whose Fourier transforms contain negative frequencies that complicate the signal processing. Applying the Hilbert transform to the simulated signal x(t) shifts the negative frequency components by +90° and yields the analytic signal x_a(t), whose spectrum is twice the positive-frequency spectrum of the original real signal:

    \hat{x}_1(t) = x(t) \otimes \frac{1}{\pi t} = \frac{1}{\pi}\int_{-\infty}^{\infty} x(\tau)\,\frac{1}{t-\tau}\, d\tau    (8)

    x_a(t) = x(t) + j\,\hat{x}_1(t) = a(t)\, e^{-j\omega(t)}    (9)

where a(t) is the envelope of the original signal. Because the vibration signal of a rolling bearing with a fault is modulated [9-10], the Hilbert transform performs envelope demodulation and separates the carrier from the modulating wave. A Fourier transform of the envelope a(t) gives

    y(\omega) = 2\int_{-\infty}^{\infty} a(t)\, e^{-j\omega t}\, dt    (10)

The frequency center f_m of the signal and the spectrum concentration f_σ of the signal spectrum are defined as

    f_m = \sum_{f=0}^{f_s/2} f\, y(\omega) \Big/ \sum y(\omega)    (11)

    f_\sigma = \sqrt{\sum_{f=0}^{f_s} (f - f_m)^2\, y(\omega) \big/ N}    (12)

where N is the length of y(ω) and f_s is the sampling frequency of the signal. The spectrum concentration f_σ reflects how concentrated the signal spectrum is: the smaller its value, the more concentrated the characteristic frequencies of the signal, and conversely the larger its value, the more dispersed they are. Plotting f_σ against the clearance gives Figure 4. The figure shows that the spectrum concentration has a nonlinear relation to the radial clearance of the bearing; f_σ clearly reflects the radial clearance of a bearing in wear failure and is an effective characteristic value of the wear fault (a computational sketch follows at the end of this excerpt).

[Figure 3: acceleration at base position 1 (clearance h = 10 μm). Figure 4: spectrum concentration of the signal versus bearing clearance.]

3. Experimental verification
3.1 Test rig
To verify the fault features of rolling bearing wear failure, a wear failure mechanism test rig was built on the machinery fault simulator produced by SpectraQuest at Hunan University of Science and Technology, as shown in Figure 5. The signal acquisition and monitoring system comprises a Dewetron (Austria) DEWE 16-channel high-precision data acquisition instrument, PCB (U.S.) 608A11 accelerometers, and a data processing system.

[Figure 5: rolling bearing wear failure mechanism test rig.]

3.2 Experimental conditions and procedure
Because it was not possible to collect a large number of bearings of different clearance series for study, the faults had to be simulated on the bearings. The test bearings are of the same type as in the simulation: the separable cylindrical roller bearing NJ204 with a ribbed inner ring. The inner raceways of 24 sample bearings were precision-ground to obtain the clearance series of Table 1. During the tests the motor speed was 10, 15, 20, 25 and 30 r/min, the disk mass was 2.03 kg, and the sampling frequency was f_s = 10 kHz.

Table 1: Radial clearance and number of sample bearings
Series    Clearance (μm)    Samples
1         10-40             4
2         40-100            4
3         100-150           4
4         150-200           4
5         200-250           4
6         250-300           4

The accelerometers used in the experiments are PCB 608A11 sensors (U.S.) with a sensitivity of 100 mV/g, mounted on the motor-end and load-end bases in the axial (x), vertical (y) and horizontal (z) directions. The experiments were run in two groups. In group 1, the motor end and the load end were fitted with bearings of the same clearance series of Table 1, 12 runs in total; in group 2, the motor-end bearing clearance was 10 μm and for the load end two bearings of each series were tested, again 12 runs in total. The motor speed was set to 10, 15, 20, 25, 30 and 35 rad/s. In total 144×6 sets of sample data were collected.

3.3 Analysis of experimental results
The collected data were denoised with wavelet packets. Figure 6 shows the original and denoised y-direction signals for a motor-end bearing clearance h = 40 μm at a shaft speed of 35 rad/s. The amplitude of the denoised vibration signal is smaller than that of the original signal, its density is lower, and the impact process is more evident.

[Figure 6: original and denoised signals (clearance h = 40 μm).]

3.3.1 Comparison of different bearing mounting configurations
Considering that the flexible coupling between the motor and the shaft affects the vibration signal of the motor-end bearing, the motor end of the group 1 experiments ... [the text breaks off here in the source]
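A minimal computational sketch of the feature defined in equations (8)-(12), in R: the Hilbert envelope is obtained via the FFT, and the frequency center and spectrum concentration are then computed from the envelope spectrum. The test signal is a synthetic amplitude-modulated tone, not the rig data, and the envelope is demeaned before its FFT so that the modulation peak rather than the DC term dominates; this is a sketch of the method, not the authors' code.

    # Hilbert envelope via FFT, then f_m and f_sigma of the envelope spectrum,
    # as in equations (8)-(12). Synthetic AM test signal, for illustration only.
    fs <- 10000
    t  <- seq(0, 1, by = 1/fs)[-1]
    x  <- (1 + 0.5*sin(2*pi*30*t)) * sin(2*pi*2000*t)    # 30 Hz envelope, 2 kHz carrier
    n  <- length(x)
    X  <- fft(x)
    h  <- rep(0, n); h[1] <- 1                           # analytic-signal filter
    h[2:(n/2)] <- 2; h[n/2 + 1] <- 1
    xa <- fft(X * h, inverse = TRUE) / n                 # analytic signal x_a(t)
    a  <- Mod(xa)                                        # envelope a(t)
    Y  <- Mod(fft(a - mean(a)))[1:(n/2)]                 # envelope spectrum, DC removed
    f  <- (0:(n/2 - 1)) * fs / n                         # frequency axis [Hz]
    f_m     <- sum(f * Y) / sum(Y)                       # frequency center, eq. (11)
    f_sigma <- sqrt(sum((f - f_m)^2 * Y) / length(Y))    # concentration, eq. (12)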
[Figure 9: comparison of results with power-assist robot assistance. (a) Knee joint angle; (b) knee joint angle phase regression; (c) human knee joint torque; (d) human-robot interaction torque.]

It can be seen that with robot assistance the human knee joint torque is clearly reduced by about half compared with the case without robot assistance, which shows that using this dynamical system can reduce the torque and lower the intensity of human motion, proving the effectiveness of the proposed method. Figure 9d shows the human-robot interaction torque, which serves as the input to the vector-field successive-iteration system.

3.2.2 Analysis of torque components
In practical applications the robot joint torque detected by the sensors includes four components: the gravitational torque, the inertial torque, the Coriolis/centrifugal torque and the human-robot interaction torque, while the vector-field successive-iteration system needs only the human-robot interaction torque as its input, so the other components must be separated from the total torque. At present the gravitational term can be removed by methods such as gravity compensation [12], whereas the inertial torque and the Coriolis/centrifugal torque are harder to handle. An FFT spectrum analysis of the components of the joint torque signal, shown in Figure 10, reveals that the Coriolis/centrifugal torque is quite small and has little influence on the synchronization of the motion, and that the inertial torque consists mainly of frequency components above 5 Hz, which likewise have little influence on the frequency band in which assisted motion synchronizes (generally 0.2-3.0 Hz). This provides the theoretical basis for performing real-time control in practice using, as the system input, simply the sensor signal with the gravitational term subtracted.

[Figure 10: FFT analysis of the joint torque input signal. (a) Signal; (b) interaction force; (c) inertial torque; (d) Coriolis and centrifugal torques.]

4. Conclusions
(1) An intelligent control method for human-robot physical interaction based on a vector-field successive-iteration algorithm was proposed and applied to the study of a lower-limb exoskeleton power-assist robot. The output of the vector-field successive-iteration system is used as the desired trajectory of each robot joint, and the interaction joint torque between the robot leg and the human leg is fed back into the vector-field successive-iteration system. Simulation results show that the designed system synchronizes the motion of the lower-limb assist robot with the human in both frequency and amplitude, and that the degree of synchronization can be adjusted through the forgetting parameter λ and the synchronization threshold μ of the dynamical system.
(2) Spectrum analysis of the robot knee joint torque input signal shows that even if the input of the vector-field successive-iteration system contains the inertial torque and the Coriolis/centrifugal torque, the synchronized joint displacement output of the system is not greatly affected. This result provides the theoretical basis for performing real-time control in practice using, as the system input, simply the sensor signal with the gravitational term subtracted.
Dynamic Hedging with Futures: A Copula-based GARCH Model

Chih-Chiang Hsu, Yaw-Huei Wang and Chih-Ping Tseng*

ABSTRACT
It has been demonstrated in a number of the prior studies that the traditional regression-based static approach is inappropriate for hedging with futures, with the result that a variety of alternative dynamic hedging strategies have emerged. In this paper we apply a new copula-based GARCH model to the estimation of the optimal hedging ratio, and compare its effectiveness with that of other hedging models, including the conventional static, the constant conditional correlation (CCC) GARCH and the dynamic conditional correlation (DCC) GARCH models. As regards the reduction of variance in the portfolio returns, our empirical results show that in both the 'in-sample' and 'out-of-sample' tests, with full flexibility in the distribution specifications, the copula-based GARCH hedging model performed better than all of the other models.

Keywords: GARCH; Copula; Hedging; Risk management
JEL Classification: C52, G11, G13.

* Chih-Chiang Hsu (the corresponding author) is at the Department of Economics at the National Central University, Taiwan; Yaw-Huei Wang and Chih-Ping Tseng are at the Department of Finance at the National Central University, Taiwan. Address for correspondence: Chih-Chiang Hsu, Department of Economics, Management School, National Central University, No.300 Jhongda Road, Jhongli City, Taoyuan County 32001, Taiwan. Tel: +886 3422 7151 ext. 66308, Fax: +886 3422 2876, Email: cchsu@.tw.

1. INTRODUCTION
With financial and commodity futures having become popular instruments for investors and producers, in-depth analysis of the optimal hedging strategy has also become an important research topic in the field of risk management. Risk has conventionally been reduced by constructing a futures hedging portfolio with a constant hedging ratio generated from a regression model.1 However, the joint distribution of the spot and futures returns may change over time; thus the hedging ratio should be adjusted accordingly, in order to account for the arrival of new information. A dynamic hedging approach may therefore be more appropriate than the traditional regression-based static hedging approach.

Recent studies have demonstrated that a time-varying hedging ratio can result in greater risk reduction than the constant hedging ratio, and indeed, various conditional volatility models have been specified as a means of constructing a dynamic hedging portfolio. Cecchetti et al. (1988) applied the ARCH model to the estimation of a time-varying hedging ratio with treasury bonds.
Baillie and Myers (1991) and Myers (1991) used bivariate GARCH models to examine commodity futures and found that the time-varying hedging ratio outperformed the constant hedging ratio. Kroner and Sultan (1991, 1993) considered the bivariate error correction model with GARCH errors for currency futures and showed that, in terms of both in-sample and out-of-sample performances, dynamic hedging resulted in lower portfolio variance. Tong (1996) and Brooks and Chong (2001) argued that the advantages of dynamic hedging were more significant in a situation of cross-hedging with currency futures, while Park and Switzer (1995) and Choudhry (2003) also found similar results for stock index futures.

1 See Ederington (1979), Figlewski (1984), Lee et al. (1987) and Ghosh (1993), among many others.

In order to simplify the estimation of optimal hedging ratios, most of the preceding models were based on the constant conditional correlation (CCC) GARCH model of Bollerslev (1990).2 However, while the CCC GARCH model has clear computational advantages over the multivariate GARCH model of Engle and Kroner (1995), the correlation structure between the spot and futures markets is nevertheless quite restricted. Engle and Sheppard (2001) and Engle (2002) subsequently proposed the dynamic conditional correlation (DCC) GARCH model as a means of considering the flexible correlation structure and simplifying the estimation procedure with two steps. This innovation provided an ideal alternative model for the construction of hedging portfolios.

2 See, for example, Kroner and Sultan (1991, 1993), Park and Switzer (1995) and Lien et al. (2002).

Dynamic hedging strategies formed with these multivariate GARCH models have, to a certain extent, met with empirical success in risk reduction. However, all of these models assume that the spot and futures returns follow a multivariate normal distribution with linear dependence. This is an assumption which is at odds with numerous empirical studies, in which it has been shown that the returns of most financial assets were skewed, leptokurtic and asymmetrically dependent.3 Various explanations for the nature of these empirical facts have been provided, such as leverage effects and asymmetric responses to uncertainty. Hence, these characteristics should be considered in the specifications of any effective hedging model.

3 See, for example, Longin and Solnik (2001), Ang and Chen (2002), and Patton (2006a).

In this paper, in an attempt to improve the effectiveness of dynamic hedging by specifying, more realistically, the joint distribution of the spot and futures returns, we introduce an alternative model, which we refer to as the copula-based GARCH model, for the estimation of the optimal hedging ratio. Without the assumption of multivariate normality, the joint distribution can be decomposed into its marginal distributions and a copula, which can then be considered both separately and simultaneously. The marginal distributions can be any non-elliptical distributions, while the copula function describes the dependence structure between the spot and futures returns.4

4 Explanations of copula theory can be found in Joe (1997) and Nelsen (1998); the theory and application of the conditional copula has already been described in detail by Patton (2006a,b) and Bartram et al. (2006).

Specifically, we determine the marginal distributions with the GJR-skewed-t model and the dependence parameter of the Gaussian copula function as a conditional process in order to capture possible dynamic and non-linear dependence between spot and futures returns. Following the multi-stage maximum likelihood method of Patton (2006b) and Bartram et al. (2006), we estimate the model parameters and generate the dynamic hedging ratio with covariance which is computed by numerical integration on the copula-based joint density.

As compared to other hedging methods, including the conventional, CCC GARCH and DCC GARCH models, the copula-based GARCH model provides the most effective performance. For both S&P 500 and FTSE 100 futures hedging, our empirical results show that, in terms of variance reduction in both the in-sample and out-of-sample tests, the copula-based model outperformed all other models by an appreciable margin.

This paper contributes to the literature by proposing and demonstrating a new model for effective futures hedging. The remainder of this paper is organized as follows. Section 2 describes the hedging models, including the conventional, CCC GARCH and DCC GARCH models. Section 3 presents the copula-based GARCH model for futures hedging. Section 4 provides details of the data used in this study and the empirical results of the different hedging models. The conclusions drawn from this study are presented in Section 5.

2. THE HEDGING MODELS
The optimal hedging ratio is defined as the holdings of futures which minimize the risk of the hedging portfolio. Let s_t and f_t be the respective changes in the spot and futures prices at time t. If the joint distribution of the spot and futures returns remains the same over time, the conventional risk-minimizing hedge ratio δ* will be

    \delta^* = \frac{\mathrm{cov}(s_t, f_t)}{\mathrm{var}(f_t)}.    (1)

Estimation of this static hedging ratio is easily undertaken from the least-squares regression of s_t on f_t. However, with the arrival of new information, the joint distribution of these assets may be time-varying, in which case, the static hedging strategy is not suitable for extension to multi-period futures hedging. Conditional on the information set at time t-1, we can obtain the optimal time-varying hedging ratio by minimizing the risk of the hedged return s_t - \delta_{t-1} f_t;5 that is,

    \delta^*_{t-1} = \frac{\mathrm{cov}_{t-1}(s_t, f_t)}{\mathrm{var}_{t-1}(f_t)}.    (2)

Obviously, the dynamic hedging ratio depends on the way in which the conditional variances and covariances are specified.

5 See Kroner and Sultan (1993).

Kroner and Sultan (1993) proposed the following bivariate error correction model of s_t and f_t with a constant correlation GARCH(1,1) structure for the estimation of \delta^*_t:

    s_t = \alpha_{0s} + \alpha_{1s}(S_{t-1} - \lambda F_{t-1}) + \varepsilon_{st},
    f_t = \alpha_{0f} + \alpha_{1f}(S_{t-1} - \lambda F_{t-1}) + \varepsilon_{ft},    (3)

    \begin{bmatrix} \varepsilon_{st} \\ \varepsilon_{ft} \end{bmatrix} \Big|\, \Psi_{t-1} \sim N(0, H_t),    (4)

    H_t = \begin{bmatrix} h_{s,t}^2 & h_{sf,t} \\ h_{sf,t} & h_{f,t}^2 \end{bmatrix} = D_t R D_t, \quad D_t = \begin{bmatrix} h_{s,t} & 0 \\ 0 & h_{f,t} \end{bmatrix}, \quad R = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix},    (5)

    h_{s,t}^2 = c_s + a_s \varepsilon_{s,t-1}^2 + b_s h_{s,t-1}^2,
    h_{f,t}^2 = c_f + a_f \varepsilon_{f,t-1}^2 + b_f h_{f,t-1}^2,    (6)

where S_{t-1} and F_{t-1} are the spot and futures prices, respectively, S_{t-1} - \lambda F_{t-1} is the error correction term, \Psi_{t-1} is the information set at time t-1, and the disturbance term \varepsilon_t = (\varepsilon_{st}, \varepsilon_{ft})' follows a bivariate normal distribution with zero mean and a conditional covariance matrix H_t with a constant correlation \rho.
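To fix ideas, the following R sketch computes the static ratio of equation (1) by least squares and a CCC-style time-varying ratio \rho\, h_{s,t}/h_{f,t} implied by equations (5), (6) and the ratio definition. The simulated series and all GARCH parameter values are illustrative placeholders, not estimates from this paper.

    # Static OLS hedge ratio (eq. (1)) and a CCC-style time-varying ratio on
    # toy data. Parameter values are illustrative, not estimated.
    set.seed(42)
    T <- 1000
    f <- rnorm(T, sd = 0.012)                  # futures returns (toy)
    s <- 0.95 * f + rnorm(T, sd = 0.004)       # spot returns (toy)
    delta_static <- coef(lm(s ~ f))["f"]       # eq. (1): cov(s,f)/var(f)
    garch11 <- function(e, c0, a, b) {         # univariate GARCH(1,1) filter, eq. (6)
      h2 <- numeric(length(e)); h2[1] <- var(e)
      for (t in 2:length(e)) h2[t] <- c0 + a * e[t-1]^2 + b * h2[t-1]
      sqrt(h2)
    }
    h_s <- garch11(s - mean(s), 1e-7, 0.05, 0.90)   # assumed GARCH parameters
    h_f <- garch11(f - mean(f), 1e-7, 0.05, 0.90)
    rho <- cor(s, f)                                # constant correlation
    delta_t <- rho * h_s / h_f                      # h_sf,t / h_f,t^2 under CCC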
The GARCH term allows the hedging ratio to be time-varying, while the error correction term characterizes the long-run relationship between the spot and futures prices.

However, since the assumption of constant correlation may be too restrictive to fit in with reality, we can adopt the DCC GARCH model proposed by Engle and Sheppard (2001) and Engle (2002) to release this restriction and improve the flexibility of the hedging models. In contrast with the CCC GARCH model, the DCC GARCH model allows the correlation R to be time-varying:

    H_t = D_t R_t D_t = D_t J_t Q_t J_t D_t,    (7)

where D_t is the diagonal matrix of conditional standard deviations from univariate GARCH models, Q_t = (q_{ij,t})_{2\times 2} is a positive definite matrix, J_t = \mathrm{diag}\{q_{s,t}^{-1/2}, q_{f,t}^{-1/2}\}, and Q_t satisfies

    Q_t = (1 - \theta_1 - \theta_2)\bar{Q} + \theta_1 \zeta_{t-1}\zeta_{t-1}' + \theta_2 Q_{t-1},    (8)

where \zeta_t is the standardized disturbance vector, such that \zeta_t = D_t^{-1}\varepsilon_t, \bar{Q} is the unconditional correlation matrix of \zeta_t, and \theta_1 and \theta_2 are non-negative parameters satisfying \theta_1 + \theta_2 < 1.

Engle (2002) proposed the use of the two-stage maximum likelihood method for estimation of the parameters for the DCC GARCH model. Let \Xi denote the parameters of the univariate GARCH models in D_t and \Theta denote the other parameters in R_t. Under the assumption of normality, we can decompose the likelihood function into a volatility component, L_V(\Xi), and a correlation component, L_C(\Xi, \Theta); i.e.,

    L(\Xi, \Theta) = -\frac{1}{2}\sum_{t=1}^{T}\left(2\log(2\pi) + \log|H_t| + \hat{\varepsilon}_t'\, H_t^{-1}\, \hat{\varepsilon}_t\right) = L_V(\Xi) + L_C(\Xi, \Theta),    (9)

where \hat{\varepsilon}_t is the residual vector of (3). In the first step, the parameters \hat{\Xi} in the univariate GARCH models are estimated for each residual series. Taking the estimates \hat{\Xi} as given and using the transformed residuals \hat{\zeta}_t = \hat{D}_t^{-1}\hat{\varepsilon}_t, we can estimate the parameters of the dynamic correlation in the second step. Given the estimates \hat{H}_t obtained in the CCC GARCH and the DCC GARCH models, the optimal dynamic hedging ratios can be estimated by

    \hat{\delta}^*_t = \hat{h}_{sf,t} / \hat{h}_{f,t}^2.    (10)

3. THE COPULA-BASED GARCH MODEL

3.1 Model Specification
All of the models mentioned in the previous section are estimated under the assumption of multivariate normality. By contrast, the use of a copula function allows us to consider the marginal distributions and the dependence structure both separately and simultaneously. Therefore, the joint distribution of the asset returns can be specified with full flexibility, and will thus be more realistic.

Following Glosten et al. (1993) and Hansen (1994), we specify the GJR-skewed-t models for shocks in the spot and futures returns. Under the same error correction model (3), the conditional variance for asset i, i = s, f, is given by

    h_{i,t}^2 = c_i + b_i h_{i,t-1}^2 + a_{i,1}\varepsilon_{i,t-1}^2 + a_{i,2}\varepsilon_{i,t-1}^2 k_{i,t-1},
    \varepsilon_{i,t} = h_{i,t}\, z_{i,t}, \quad z_{i,t}\,|\,\Psi_{t-1} \sim \text{skewed-}t(z_{i,t}\,|\,\eta_i, \phi_i),    (11)

with k_{i,t-1} = 1 when \varepsilon_{i,t-1} is negative, otherwise k_{i,t-1} = 0. The density function of the skewed-t distribution is

    \text{skewed-}t(z\,|\,\eta,\phi) = \begin{cases} bc\left[1 + \dfrac{1}{\eta-2}\left(\dfrac{bz+a}{1-\phi}\right)^2\right]^{-(\eta+1)/2}, & z < -a/b, \\[2ex] bc\left[1 + \dfrac{1}{\eta-2}\left(\dfrac{bz+a}{1+\phi}\right)^2\right]^{-(\eta+1)/2}, & z \ge -a/b. \end{cases}    (12)

The values of a, b and c are defined as

    a \equiv 4\phi c\,\frac{\eta-2}{\eta-1}, \quad b^2 \equiv 1 + 3\phi^2 - a^2, \quad c \equiv \frac{\Gamma\!\left(\frac{\eta+1}{2}\right)}{\sqrt{\pi(\eta-2)}\;\Gamma\!\left(\frac{\eta}{2}\right)},

where \eta is the kurtosis parameter and \phi is the asymmetry parameter. These are restricted to 4 < \eta < 30 and -1 < \phi < 1.6 Assume that the conditional cumulative distribution functions of z_s and z_f are G_{s,t}(z_{s,t}|\Psi_{t-1}) and G_{f,t}(z_{f,t}|\Psi_{t-1}), respectively.

6 This distribution becomes the symmetrical Student t-distribution when the asymmetry parameter \phi is equal to 0 and turns to the standard normal distribution when the kurtosis parameter \eta approaches \infty.
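Equation (12) with the constants above is straightforward to code. Below is a minimal R sketch of the skewed-t density, with a numerical check that it integrates to one; the parameter values \eta = 6, \phi = -0.2 are arbitrary examples within the stated restrictions.

    # Hansen (1994) skewed-t density of eq. (12), with the constants a, b, c.
    dskewt <- function(z, eta, phi) {
      c0 <- gamma((eta + 1) / 2) / (sqrt(pi * (eta - 2)) * gamma(eta / 2))
      a  <- 4 * phi * c0 * (eta - 2) / (eta - 1)
      b  <- sqrt(1 + 3 * phi^2 - a^2)
      s  <- ifelse(z < -a / b, 1 - phi, 1 + phi)     # branch at z = -a/b
      b * c0 * (1 + ((b * z + a) / s)^2 / (eta - 2))^(-(eta + 1) / 2)
    }
    integrate(dskewt, -Inf, Inf, eta = 6, phi = -0.2)$value   # ~ 1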
The conditional copula function, denoted as C_t(u_t, v_t|\Psi_{t-1}), is defined by the two time-varying cumulative distribution functions of the random variables u_t = G_{s,t}(z_{s,t}|\Psi_{t-1}) and v_t = G_{f,t}(z_{f,t}|\Psi_{t-1}). Let \Phi_t be the bivariate conditional cumulative distribution function of z_{s,t} and z_{f,t}. Using the Sklar theorem, we have

    \Phi_t(z_{s,t}, z_{f,t}|\Psi_{t-1}) = C_t(u_t, v_t|\Psi_{t-1}) = C_t\big(G_{s,t}(z_{s,t}|\Psi_{t-1}),\, G_{f,t}(z_{f,t}|\Psi_{t-1})\,\big|\,\Psi_{t-1}\big).    (13)

The bivariate conditional density function of z_{s,t} and z_{f,t} can be constructed as

    \varphi_t(z_{s,t}, z_{f,t}|\Psi_{t-1}) = c_t\big(G_{s,t}(z_{s,t}|\Psi_{t-1}),\, G_{f,t}(z_{f,t}|\Psi_{t-1})\,\big|\,\Psi_{t-1}\big) \times g_{s,t}(z_{s,t}|\Psi_{t-1}) \times g_{f,t}(z_{f,t}|\Psi_{t-1}),    (14)

where c_t(u_t, v_t|\Psi_{t-1}) = \partial^2 C_t(u_t, v_t|\Psi_{t-1})/\partial u_t \partial v_t, g_{s,t}(z_{s,t}|\Psi_{t-1}) is the conditional density of z_{s,t} and g_{f,t}(z_{f,t}|\Psi_{t-1}) is the conditional density of z_{f,t}.

3.2 Parameter Estimation
At time t, the log-likelihood function can be derived by taking the logarithm of (14):

    \log\varphi_t = \log c_t + \log g_{s,t} + \log g_{f,t}.    (15)

Let the parameters in g_{s,t} and g_{f,t} be denoted as \theta_s and \theta_f, respectively, and the other parameters in c_t be denoted as \theta_c. These parameters can be estimated by maximizing the following log-likelihood function:

    L_{s,f}(\theta) = L_s(\theta_s) + L_f(\theta_f) + L_c(\theta_c),    (16)

with \theta = (\theta_s, \theta_f, \theta_c) and L_k representing the sum of the log-likelihood function values across observations of the variable k.

Since the dimensions of the estimated equation may be quite large, it is quite difficult, in practice, to achieve simultaneous maximization of L_{s,f}(\theta) for all of the parameters. In order to effectively solve this problem, we follow the two-stage estimation procedure adopted by Patton (2006b) and Bartram et al. (2006).

In the first stage, the parameters of the marginal distributions are estimated from the univariate time series by

    \hat{\theta}_s \equiv \arg\max \sum_{t=1}^{T}\log g_{s,t}(z_{s,t}|\Psi_{t-1};\theta_s), \quad \hat{\theta}_f \equiv \arg\max \sum_{t=1}^{T}\log g_{f,t}(z_{f,t}|\Psi_{t-1};\theta_f).    (17)

In the second stage, given the marginal estimates obtained above, the dependence parameter is estimated by

    \hat{\theta}_c \equiv \arg\max \sum_{t=1}^{T}\log c_t(u_t, v_t|\Psi_{t-1};\hat{\theta}_s, \hat{\theta}_f, \theta_c).    (18)

3.3 The Copula Function
The Gaussian copula function is employed to combine the marginal distributions into the joint distribution. Let \psi be the cumulative distribution function of the standard normal; the Gaussian copula density function can be stated as

    c_t(u_t, v_t) = \frac{1}{\sqrt{1-\rho_t^2}}\exp\left\{-\frac{x_t^2 + y_t^2 - 2\rho_t x_t y_t}{2(1-\rho_t^2)} + \frac{x_t^2 + y_t^2}{2}\right\},    (19)

with x_t = \psi^{-1}(u_t) and y_t = \psi^{-1}(v_t).

Following Patton (2006a) and Bartram et al. (2006), we assume that the dependence parameter \rho relies on the previous dependence, \rho_{t-i}, and historical information, (u_{t-1} - 0.5)(v_{t-1} - 0.5). If the latter term is positive, thereby indicating that both u_{t-1} and v_{t-1} are either bigger or smaller than the expectation (0.5), we can infer that the correlation is higher than when the term is negative. The dependence parameter process is specified as:
After estimating the parameters of the copula-based GARCH model, the conditional variances h²_{s,t} and h²_{f,t} can be obtained from Equation (11), and the conditional covariance can be generated by numerical integration with respect to Equation (14); i.e.,

h_{sf,t} = h_{s,t} h_{f,t} ∫_{−∞}^{∞} ∫_{−∞}^{∞} z_{s,t} z_{f,t} φ_t(z_{s,t}, z_{f,t} | Ψ_{t−1}) dz_{s,t} dz_{f,t}.   (21)

Hence, the dynamic hedging ratio for the copula-based GARCH model is calculated from Equation (10).

4. EMPIRICAL RESULTS

4.1 Data and Diagnostic Analysis

In this empirical study, we examine the performance of alternative hedging models for S&P 500 and FTSE 100 index futures, with the sample period of the stock and futures data, obtained from Datastream, running from 2 January 1995 to 31 October 2005. The asset returns are the changes in the logarithm of the daily closing prices. The results of the diagnostic analysis are reported in Table 1.

<Table 1 is inserted about here>

All of the returns have similar means, but the variation in the stock returns is lower than that in the futures returns. The stylized facts of asset returns, such as negative skewness, leptokurtosis and significant Bera-Jarque statistics, are present in our data, which implies that the unconditional distributions of the stock and futures returns are asymmetric, fat-tailed and non-Gaussian. The Ljung-Box tests show no serial correlation in the S&P 500 returns; however, serial correlation is present in the FTSE 100 returns. Finally, both the Q²(24) and LM statistics for ARCH effects indicate strong autocorrelation in the squared returns for the S&P 500 and FTSE 100 data.

We report the results of the unit root and cointegration tests in Table 2. The augmented Dickey-Fuller (ADF) tests show that the stock and futures prices have a unit root, but that first-differencing leads to stationarity. The Johansen trace statistics show that the stock and futures prices are cointegrated and that the error correction term should be included in the model specification.

<Table 2 is inserted about here>

Since the cointegrating parameters λ̂, estimated using the Johansen approach, are close to 1 for both the S&P 500 and FTSE 100 data, it is reasonable to impose the restriction λ = 1 on the error correction models; that is, the error correction term becomes S_{t−1} − F_{t−1}.

4.2 Estimation of the Parameters

The estimation results of the different hedging models are presented in Tables 3, 4 and 5, with all of the parameters estimated by the maximum likelihood method. For the CCC GARCH model (Table 3), the constant correlation ρ between the stock and futures returns is positive and close to 1, which suggests that the futures provide an effective hedging instrument for the stocks. All a_i + b_i estimates are also close to 1, which implies that shocks in the stock and futures markets are highly persistent in volatility. Hence the long-run average variance var(f_t) is not a good proxy for use in calculating the short-horizon hedging ratio.

<Table 3 is inserted about here>

For the DCC GARCH model (Table 4), the θ_1 + θ_2 estimates are close to (but less than) 1, which implies that the correlations between the stock and futures markets are highly persistent. Such high persistence means that shocks can push the correlation away from its long-run average for a considerable time, although the correlation will eventually be mean-reverting.
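The persistence just described is generated by the recursion in Equations (7)-(8). As a reference point, here is a minimal sketch of that recursion for the bivariate case; the initialization Q_1 = Q̄ is a common convention that we assume, since the paper does not state a starting value.

```python
import numpy as np

def dcc_correlations(zeta, theta1, theta2):
    """Time-varying correlations from Equations (7)-(8); `zeta` is a (T, 2)
    array of standardized residuals zeta_t = D_t^{-1} eps_t."""
    T = zeta.shape[0]
    Q_bar = np.corrcoef(zeta, rowvar=False)   # unconditional correlation of zeta
    Q = Q_bar.copy()                          # assumed initialization Q_1 = Q_bar
    rho = np.empty(T)
    rho[0] = Q_bar[0, 1]
    for t in range(1, T):
        z = zeta[t - 1][:, None]
        Q = (1 - theta1 - theta2) * Q_bar + theta1 * (z @ z.T) + theta2 * Q
        J = np.diag(1.0 / np.sqrt(np.diag(Q)))   # J_t in Equation (7)
        rho[t] = (J @ Q @ J)[0, 1]               # off-diagonal element of R_t
    return rho
```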
Replacing the CCC GARCH model with the DCC GARCH model thus captures the persistence in the correlation between the stock and futures markets.

<Table 4 is inserted about here>

For the copula-based GARCH model (Table 5), the significant a_{i,2} estimates indicate that negative shocks have greater impacts than positive shocks on the conditional variances, with the asymmetric effect on volatility being stronger for the S&P 500 returns than for the FTSE 100 returns.

<Table 5 is inserted about here>

Since the autoregressive parameter β_1 is greater than 0.9, the dynamic copula parameter ρ_t is highly persistent. This implies that shocks to the dependence structure between the stock and futures returns can persist for a considerable time, which, in turn, affects the estimated hedging ratio. The parameter γ is significantly positive, which suggests that the latest information on returns is an appropriate input for modeling the dynamic dependence structure. Finally, the log-likelihood value for the copula model is higher than those for the CCC GARCH and DCC GARCH models; in terms of model fit, the copula-based method outperforms all of the other models.

4.3 In-sample Comparison of Hedging Performance

In this section, we evaluate the in-sample hedging performance of the different models. A hedging portfolio comprising an index and its futures is constructed, and we then calculate the variance of the returns on these portfolios over the sample:

var(s_t − δ*_t f_t),   (22)

where δ*_t represents the estimated hedging ratio. The results of the comparison of the in-sample hedging performance of the different models are summarized in Table 6.

<Table 6 is inserted about here>

The risk of hedging portfolios based on contemporaneous hedging ratios is considered in Panel A of Table 6, which shows that the copula-based GARCH model outperforms the conventional and dynamic hedging models for both stock indices, with the improvement of the proposed method over the other models varying from 4.25 per cent to 15.95 per cent (the percentage change in variance reduction). As expected, the dynamic hedging models perform much better than the conventional hedging model for the FTSE 100 data; however, for the S&P 500 data the conventional hedging model is inferior only to the copula-based model, exhibiting a smaller variance than both the CCC GARCH and DCC GARCH models. These results suggest that when estimating the optimal hedging ratio, not only is it important to allow for time-varying variances, but it is also of considerable importance to employ suitable distributional specifications for the time series, as evidenced by the superior performance of the copula-based GARCH model.

Following the evaluation of the contemporaneous variation in hedging portfolios in Panel A of Table 6, in Panels B and C we extend the holding period of the portfolios to one day and two days, respectively, as a check for robustness. Again, compared with the other three hedging models, the copula-based GARCH model is the most effective in reducing the variance of the hedging portfolio. As expected, the amount of variance reduction becomes less significant as the portfolio holding period is prolonged. Overall, our comparison of in-sample performance suggests that the proposed model achieves greater hedging effectiveness than either the conventional approach or the other two GARCH models.
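The evaluation criterion (22) is straightforward to compute. The sketch below, with variable names of our choosing, returns the hedged-portfolio variance and the variance reduction relative to the unhedged spot position, which is the form in which hedging effectiveness is typically compared across models.

```python
import numpy as np

def hedging_effectiveness(s, f, delta):
    """Variance of the hedged portfolio s_t - delta_t * f_t (Equation (22))
    and the proportional variance reduction versus holding the spot alone."""
    s, f, delta = (np.asarray(a, dtype=float) for a in (s, f, delta))
    hedged = s - delta * f
    var_hedged = hedged.var(ddof=1)
    var_unhedged = s.var(ddof=1)
    return var_hedged, 1 - var_hedged / var_unhedged  # larger reduction = better
```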
In addition, our empirical results show that the copula-based hedging ratios are more volatile, which implies that the copula-based GARCH model is highly responsive and capable of capturing the time-varying dependence more accurately as new information reaches the market.

4.4 Out-of-sample Comparison of Hedging Performance

A model that achieves good in-sample performance will not necessarily achieve good out-of-sample performance, because such 'over-fitting' can be quite penalizing. If we set out to use a different hedging model, our concern is certainly more with how well we can do in the future. It is therefore necessary to determine whether the proposed model still works well in terms of its out-of-sample hedging performance.

To compare such performance, we adopt a method which involves rolling a fixed number of observations forward to generate the series of out-of-sample hedging ratios. More specifically, we take 2,000 observations from the sample (from 2 January 1995 to 29 November 2002) and use these observations to estimate all of the hedging models. We then forecast the hedging ratio for the next day (30 November 2002) by computing the one-period-ahead covariance forecast divided by the one-period-ahead variance forecast. The calculation is repeated for the following day (1 December 2002), using the nearest available 2,000 observations before 1 December 2002, i.e. from 3 January 1995 to 30 November 2002. By continually updating the model estimation through to the end of the dataset, we obtain 733 one-period-ahead forecasted hedging ratios for the S&P 500 and 739 for the FTSE 100.

The out-of-sample evaluation results of the different hedging models based on the forecasted hedging ratios are reported in Table 7. In Panel A of Table 7, as in our in-sample findings, the hedging ratio estimated with the copula-based GARCH model outperforms the other hedging methods for the FTSE 100 index, with all of the dynamic hedging strategies having smaller variances than the conventional hedging model. However, the copula-based GARCH model performs slightly worse than the CCC GARCH and DCC GARCH models for the S&P 500 index, although the difference is less than 1 per cent.

<Table 7 is inserted about here>

In Panel B of Table 7, the variance of the portfolio returns is evaluated after the portfolio is held for one more day. The results indicate that the copula-based GARCH model outperforms all of the other models, with the improvement for the S&P 500 data being much more significant than that for the FTSE 100 data.

In general, the copula-based GARCH model provides the best out-of-sample hedging performance, with the DCC GARCH model in second place, followed by the CCC GARCH model, and the conventional model in last place. Therefore, by specifying the joint distribution of stock and futures returns with full flexibility, the copula-based GARCH model can be used to reduce the risk of hedging portfolios effectively.
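The rolling scheme described above can be sketched as follows. The functions `fit_model` and `forecast_h` are hypothetical placeholders for the estimation routine and the one-step-ahead (co)variance forecasts of any of the four models; they are not functions defined in the paper.

```python
import numpy as np

def rolling_hedge_ratios(s, f, fit_model, forecast_h, window=2000):
    """One-step-ahead hedge ratios from a rolling window (Section 4.4):
    refit on the latest `window` observations, then forecast
    delta*_{t+1} = h_{sf,t+1} / h^2_{f,t+1} as in Equation (10)."""
    T = len(s)
    ratios = []
    for t in range(window, T):
        params = fit_model(s[t - window:t], f[t - window:t])  # hypothetical fit
        h_sf, h_ff = forecast_h(params)                       # hypothetical forecasts
        ratios.append(h_sf / h_ff)
    return np.array(ratios)
```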
5. CONCLUSIONS

In this paper, we have adopted a copula-based GARCH model to estimate risk-minimizing hedging ratios and compared the hedging effectiveness of this model with that of three alternatives: the conventional, constant conditional correlation GARCH and dynamic conditional correlation GARCH hedging models. Through a copula function, the proposed model can specify the joint distribution of the spot and futures returns with full flexibility, so that the specified distribution is more realistic. Since the marginal and joint distributions can be specified separately and simultaneously, we can estimate the conditional variance and covariance, and hence compute the optimal hedging ratio, without the restrictive assumption of multivariate normality.

With full flexibility in the distributional specifications, the hedging effectiveness of the proposed model is substantially improved relative to the alternative models. Both the in-sample and out-of-sample evidence for the S&P 500 and FTSE 100 indices indicates that the copula-based GARCH hedging model outperforms the other methods. Therefore, with a more precise specification of the joint distribution of assets, the risk exposure of stocks can be managed effectively. Our results have crucial implications for risk management.
Dynamic Density Forecasts for Multivariate Asset Returns

Arnold Polanski
Evarist Stoja

September 2009

Discussion Paper No. 09/616

Department of Economics
University of Bristol
8 Woodland Road
Bristol BS8 1TN

Dynamic Density Forecasts for Multivariate Asset Returns

Arnold Polanski, Queen's University Belfast
Evarist Stoja, University of Bristol

September 2009

Abstract

We propose a simple and flexible framework for forecasting the joint density of asset returns. The multinormal distribution is augmented with a polynomial in (time-varying) non-central co-moments of assets. We estimate the coefficients of the polynomial via the Method of Moments for a carefully selected set of co-moments. In an extensive empirical study, we compare the proposed model with a range of other models widely used in the literature. Employing a recently proposed technique to evaluate multivariate forecasts, we conclude that the augmented joint density provides highly accurate forecasts of the negative tail of the joint distribution.

Keywords: Time-varying higher co-moments; Joint density forecasting; Method of Moments; Multivariate Value-at-Risk.

JEL classification: C22, C51, C52, G11

Address for correspondence: Arnold Polanski, Queen's University Management School, 25 University Square, Queen's University Belfast, Belfast, BT7 1NN, UK. Tel: +44(0)2890975024, Email: a.polanski@. Evarist Stoja, School of Economics, Finance and Management, University of Bristol, 8 Woodland Road, Bristol, BS8 1TN, UK. Tel: +44(0)1173310603, Email: e.stoja@.

1. Introduction

The finance literature is replete with models and applications of point forecasts. For example, the celebrated ARCH model of Engle (1982) projects the next period's level of volatility as a function of current and lagged squared returns. However, the amount of information contained and exploited in point forecasts is considerably lower than in the case of interval or density forecasting (see, for example, Christoffersen, 1998). A density forecast is an estimate of the probability distribution of the possible realizations of a variable, thereby providing a full description of the uncertainty associated with the forecast. This contrasts sharply with a point forecast which, by definition, provides no such information. This simple observation in itself provides a strong case for density forecasting. Moreover, the recent prominence of the risk management industry, which relies heavily on density forecasting, has strengthened this approach. Indeed, financial companies such as Reuters, Bloomberg and J.P. Morgan regularly provide density forecasts for their clients. The aim of this practice is to provide the user with a procedure which generates density forecasts of tailored portfolio returns over a specified horizon. One example is the Standard Portfolio Analysis of Risk (SPAN) framework, which has become an industry standard for calculating margin requirements for customers and clearing house members. SPAN is essentially a mixture of stress tests performed on each underlying asset in the portfolio (see, for example, Artzner et al., 1999). Moreover, density forecasts are also of particular interest to regulators, the obvious example being Value-at-Risk (VaR), which, since its adoption by the Basle Committee on Banking Supervision (1996), has become the most widely used risk measurement tool in the banking sector.

As already pointed out, the ARCH model is mainly concerned with point forecasting.
Under certain circumstances, it is possible to construct density forecasts from the volatility estimate obtained from an ARCH model and, hence, it could be argued that the former stems from the latter. Indeed, under normal or, more generally, elliptical distributions, one can obtain a forecast for the entire distribution from a volatility forecast. The simplifying assumption of elliptically distributed returns (in some cases augmented with a time-varying variance), which ensures tractability, is paramount in finance, with wide-ranging applications in risk management, asset and option pricing and portfolio decisions. There is, however, increasing evidence that asset returns are not elliptically distributed. For example, Singleton and Wingender (1986) find evidence of skewness in the distribution of stock returns. Simkowitz and Beedles (1978) show that the degree of skewness preference of investors affects the extent of their diversification: the higher the degree of skewness preference, the fewer assets investors will hold, given that diversification reduces the skewness of a portfolio. Cotner (1991) documents the impact of asymmetries on option prices. More recent evidence on the non-normality of return distributions is provided, inter alia, by Bae et al. (2003), Bali and Weinbaum (2007), and Polanski and Stoja (2009a).

When the normality assumption is violated, a specification and forecast of the entire conditional joint distribution is necessary. More generally, decision making under uncertainty with an asymmetric loss function and non-Gaussian variables involves density forecasts. Applications of density forecasting in finance and economics are surveyed and discussed in Tay and Wallis (2000).

The literature on modeling the entire density of a random variable is more limited. Gallant et al. (1991) employ a semi-parametric framework to forecast the density which relies on a series expansion of the normal. Hansen (1994) proposes the autoregressive conditional density model, which employs a skewed Student-t conditional distribution. Other approaches to modelling the conditional density of returns include the exponential generalized beta of McDonald and Xu (1995) and the skewed generalized-t distribution of Theodossiou (1998). Another strand of the literature on density forecasting originates in the seminal contribution of Breeden and Litzenberger (1978) and relies on extracting the implied density from option prices via the Black-Scholes pricing model (see, for example, Fackler and King, 1990; Jackwerth and Rubinstein, 1996; Bahra, 1997).

The cited literature is exclusively focused on modeling the univariate density of returns. However, financial decision making usually involves more than one risky asset. The standard practice here, as in the early univariate density literature, is to assume that the density is multinormal with a possibly time-varying covariance matrix (see Diebold et al., 1999). In this case, the correlation coefficient adequately captures the dependence between assets. However, joint normality is not supported by empirical evidence (see, for example, Guidolin and Timmermann, 2006). Moreover, correlation is only a measure of linear dependence and suffers from a number of limitations (see Embrechts et al., 2002). For example, Patton (2004) argues that the dependence between assets is stronger during market downturns than during market upturns, while Polanski and Stoja (2009a) show that the probability of extreme events, as measured by the tail thickness, varies over time.
These findings imply that results obtained via the standard elliptical distributions will generally be invalid. To address this shortcoming, Patton (2007) advocates the copula approach, in which the multivariate density is modelled as the product of the marginal densities of the variables and the copula function which captures the dependence between the variables. The copula technique is flexible, as the marginal densities can all be different from each other (and from the copula). However, in the finance literature it is mainly employed in the bivariate case. Recent attempts to generalise it to the multivariate case have turned out to be technically and computationally demanding, which detracts somewhat from their usefulness (see, for example, Aas et al., 2009).

In this paper, we propose a novel technique for modelling the multivariate density of asset returns. We approximate the joint density as the product of a multivariate normal and a polynomial which adjusts it for the estimated time-varying (co-)moments of the variables of interest. We estimate the coefficients of the polynomial via the Method of Moments (MM). This approach provides a flexible tool for modelling the empirical joint distribution of financial data, which, in addition to volatility, exhibits time-varying higher (co-)moments. While our method maintains the simplicity of the multivariate normal distribution, it can easily be adapted to more complex distribution functions and generalized moments. Furthermore, we extend the extant literature by employing the MM-estimated augmented joint density (AJD) to forecast multivariate VaR (MVaR). Similar to its univariate counterpart, MVaR determines the probability of extreme losses for two or more assets.

Our analysis employs daily returns of highly traded stock indices and exchange rates. The results suggest that the time-varying conditional (co-)moments are very important in characterising the joint return distributions and yield significant improvements in the forecasting of MVaR. While a specifically developed test shows that our density forecasts cannot adequately approximate the distribution over the entire domain, they are successful in approximating the negative tail of the joint distribution of returns, which is the focus of risk management.

The outline of the remainder of this paper is as follows. In Section 2 we discuss the theoretical framework for the MM multivariate density forecasting. Section 3 describes the statistical evaluation method. Section 4 presents the data and the empirical results, while Section 5 concludes.

2. Theoretical Framework

We approximate the multivariate distribution of a vector X = (X_1, ..., X_n) of returns by a polynomially-adjusted multinormal probability density function (pdf) f̂(x). If φ(x; µ, S) is the n-dimensional normal pdf with mean µ and variance-covariance matrix S, then the AJD f̂(x) is the product of φ(x; µ, S) and a squared polynomial in x = (x_1, ..., x_n),

f̂(x) = φ(x; µ, S) ( Σ_s λ_s x_1^{s_1} ⋯ x_n^{s_n} )²,   (1)

where s = (s_1, ..., s_n) ∈ N^n is a vector of exponents and λ_s is the coefficient of the term x_1^{s_1} ⋯ x_n^{s_n}. Note that the polynomial part, and hence the pdf f̂(x), assumes only non-negative values. We estimate the coefficients λ_s via the MM procedure.
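For concreteness, here is a minimal sketch of evaluating the AJD (1) at a point. The coefficients are passed as a mapping from exponent tuples s to λ_s; the names, data structure and example values are ours and purely illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def ajd_pdf(x, mu, S, lam):
    """Augmented joint density (1): the multinormal pdf times a squared
    polynomial; `lam` maps exponent tuples s = (s_1,...,s_n) to lambda_s."""
    x = np.asarray(x, dtype=float)
    poly = sum(l * np.prod(x ** np.array(s)) for s, l in lam.items())
    return multivariate_normal.pdf(x, mean=mu, cov=S) * poly**2

# e.g. a bivariate AJD with a constant term and fourth-order adjustments
lam = {(0, 0): 1.0, (4, 0): 0.02, (0, 4): 0.02, (2, 2): -0.01}
value = ajd_pdf([0.5, -1.0], mu=[0.0, 0.0], S=np.eye(2), lam=lam)
```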
From the historical returns x = (x_1, ..., x_n), where x_i is the return series of asset i (i = 1, ..., n), we compute the co-moment estimates Ê[x_1^{v_1} ⋯ x_n^{v_n}] for a set V of exponent vectors v = (v_1, ..., v_n) ∈ N^n, and then solve the multivariate quadratic system

E_f̂[X_1^{v_1} ⋯ X_n^{v_n}] = Ê[x_1^{v_1} ⋯ x_n^{v_n}]   for all v ∈ V,   (2)

for the coefficients λ_s, where E_f̂[·] is the expectation operator with respect to f̂. Since solving systems of quadratic equations is in general an NP-hard problem (with several applications in cryptography), we can approximate the solution to (2) for a large number of assets by minimizing the weighted sum of squared deviations

Σ_v ω_v ( E_f̂[X_1^{v_1} ⋯ X_n^{v_n}] − Ê[x_1^{v_1} ⋯ x_n^{v_n}] )²   (2a)

with respect to the coefficients λ_s, where ω_v is the weight of the co-moment corresponding to v. Note that, for a sufficiently high weight ω_v for v = (0, ..., 0), f̂(x) integrates (approximately) to one, since the corresponding term

ω_v ( E_f̂[X_1^0 ⋯ X_n^0] − Ê[x_1^0 ⋯ x_n^0] )² = ω_v ( ∫_{R^n} f̂(r) dr − 1 )²

forces ∫ f̂(r) dr → 1 at the minimum as ω_v ↑ ∞.

The estimation of f̂(x) requires, therefore, a set of non-central co-moments, each defined by a vector of exponents v. In the empirical applications of Section 4, each co-moment vector v in (2) is also used as a co-moment vector s in the polynomial part of (1), and vice versa. The co-moments employed in the MM estimation (2) therefore mirror the polynomial terms in the forecast (1), although this is not a necessary condition for the applicability of our model.

The AJD (1) provides a flexible modelling framework. Interestingly, by including higher co-moments, the multinormal distribution can be tailored to specific features of financial data such as fat tails, joint asymmetry and, in a slightly modified version, asymmetric dependence among assets. For example, the AJD (1) could be modified to capture the higher dependence among assets during market downturns than during market upturns; to this end, we would include a term which is activated whenever all returns are negative, as illustrated by the bivariate density forecast f̂(x) = φ(x; ·, ·)( ... + λ x_1² x_2² I(x_1 < 0 & x_2 < 0) + ... )².

The model also allows for the incorporation of time-varying co-moments into dynamic forecasts. For example, the exponentially weighted moving average (EWMA) specifies the next period's k-th moment u_{k,t+1} of the portfolio return as a weighted average of the current k-th moment u_{k,t} and the current actual return x_t raised to the power k,

u_{k,t+1} = γ_k u_{k,t} + (1 − γ_k) x_t^k = (1 − γ_k) Σ_{i=0}^{∞} γ_k^i x_{t−i}^k.   (3)

An EWMA moment can therefore be interpreted as a weighted average of past returns, raised to the appropriate power, encapsulating information from all past shocks with exponentially declining weights attached. This can be extended to n assets and non-central co-moments u_{v,t} defined by a vector of exponents v = (v_1, ..., v_n) ∈ N^n,

u_{v,t+1} = γ_v u_{v,t} + (1 − γ_v) x_{1,t}^{v_1} ⋯ x_{n,t}^{v_n} = (1 − γ_v) Σ_{i=0}^{∞} γ_v^i x_{1,t−i}^{v_1} ⋯ x_{n,t−i}^{v_n}.   (4)

The forecast u_{v,t+1} can then be employed as the co-moment estimate Ê[x_1^{v_1} ⋯ x_n^{v_n}] in the MM optimization (2). The polynomial coefficients that minimize (2a), together with the estimates of the mean and variance of the multinormal part in (1), fully define the density forecast f̂. Note that the forecasts are dynamic in the sense that they evolve over time. In the next section, we discuss the evaluation of multidimensional density forecasts.
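Closing the theoretical framework, the sketch below illustrates the MM step (2a). Expectations under f̂ are rewritten as E_φ[X^v P(X)²], with P the polynomial in (1), and approximated by Monte Carlo draws from the multinormal; `targets` would hold the EWMA co-moment forecasts from (4). Both the sampling approach and all names are our illustrative choices, and we assume the exponent set contains the zero vector (as it does in the empirical specification (7) below).

```python
import numpy as np
from scipy.optimize import minimize

def fit_ajd_coeffs(exponents, targets, weights, mu, S, n_draws=100_000, seed=0):
    """Minimize the weighted MM objective (2a) over the coefficients lambda_s.
    `exponents` is a list of exponent tuples v; `targets` holds the
    corresponding co-moment estimates E^[x_1^{v_1}...x_n^{v_n}]."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(mu, S, size=n_draws)        # samples from phi
    monomials = np.stack([np.prod(draws ** np.array(v), axis=1)
                          for v in exponents], axis=1)          # X^v per draw

    def objective(lam):
        poly_sq = (monomials @ lam) ** 2                        # P(X)^2 per draw
        mom_fhat = (monomials * poly_sq[:, None]).mean(axis=0)  # E_fhat[X^v]
        return np.sum(np.asarray(weights) * (mom_fhat - np.asarray(targets)) ** 2)

    lam0 = np.zeros(len(exponents))
    lam0[exponents.index((0,) * len(mu))] = 1.0   # start from the plain multinormal
    return minimize(objective, lam0, method="Nelder-Mead").x
```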
3. Statistical Evaluation Method

A correct density forecast should be both unconditionally and conditionally accurate. Unconditional accuracy of a continuous univariate (time-varying) density forecast f̂_t implies that the frequency of observations x_t for which the probability integral transform (PIT) z_t := F̂_t(x_t) = Pr(X_t < x_t) is less than α ∈ [0, 1] approaches α as the sample size T increases (Rosenblatt, 1952). In other words, the PIT sequence {z_t}_{t=1}^T, computed from the observations {x_t}_{t=1}^T, must be uniformly distributed on the unit interval when the forecasts are correct. Polanski and Stoja (2009b) showed that this criterion does not generalize directly to multivariate density forecasts: the PIT sequence z_t = F̂_t((x_{1,t}, ..., x_{n,t})) from multidimensional density forecasts f̂_t is not necessarily uniform even when the forecasts are correct. However, a simple modification of the PIT computation restores the uniformity of the scores. First, each observation x_t = (x_{1,t}, ..., x_{n,t}) is transformed into the vector x_t^M := Max(x_{1,t}, ..., x_{n,t}) · (1, ..., 1), and then the score z_t^M = F̂_t(x_t^M) is computed. Note that for unidimensional forecasts, z_t^M and the standard PIT z_t are identical. Polanski and Stoja (2009b) proved that the scores z_t^M are i.i.d. according to the uniform distribution U[0, 1] if the AJDs f̂_t are correct.

Testing the conditional accuracy of the forecasts entails showing that the current score z_t^M does not convey any information about the score in the next period or, equivalently, that the scores z_t^M are distributed independently over time. The unconditional and conditional accuracy can therefore be tested with the usual goodness-of-fit tests (see Noceti et al., 2003, for a comparison of the existing tests) and serial independence tests (e.g., the portmanteau test) applied to the z_t^M-scores.

From the latter scores, we can also compute the exceedance rates for the MVaR. For a unidimensional cdf F̂_t, the VaR at the 1−α coverage level is defined as (the negative of) the quantile q_t^α for which F̂_t(q_t^α) = α. By analogy, for an n-dimensional cdf F̂_t, we require that the MVaR (q_t^α, ..., q_t^α) satisfies the condition F̂_t((q_t^α, ..., q_t^α)) = α. Then, from the definition z_t^M = F̂_t(x_t^M), it can be deduced that z_t^M is less than α whenever all components of the observation x_t = (x_{1,t}, ..., x_{n,t}) fall below the critical value q_t^α (i.e., all losses exceed the MVaR):

z_t^M < α ⇔ x_t < (q_t^α, ..., q_t^α).

As in the univariate case, the assessment of MVaR forecasts is inherently difficult. Since the actual MVaR is not observable, the accuracy of the forecasted MVaR cannot be evaluated directly. One can, however, compute the proportion of observations which exceed the MVaR forecast and compare this number with the required significance level; we refer to this procedure as unconditional accuracy. Conditional accuracy, on the other hand, requires that the number of observations that exceed the MVaR forecast be unpredictable when conditioned on the available information. To assess both types of accuracy, we employ the unconditional accuracy test of Kupiec (1995) and the conditional accuracy test of Christoffersen (1998). Although both tests are designed for univariate densities, they still apply to joint distributions, because the score computation has effectively converted a multivariate problem into a univariate one. Indeed, as the exceedance of the MVaR at level α is equivalent to z_t^M < α, the test statistics for both tests can be computed directly from the z_t^M-scores.
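A sketch of the score computation follows: each observation is mapped to the diagonal point x_t^M and the forecast cdf is evaluated there. Evaluating F̂_t by self-normalized Monte Carlo under the AJD, using draws from the multinormal part weighted by P(X)², is our choice of numerical method; the paper does not prescribe one.

```python
import numpy as np

def zM_score(x_obs, draws, poly_sq):
    """z^M_t = F_hat_t(x^M_t), where x^M_t = max(x_t) * (1,...,1).
    `draws` are samples from the multinormal part of the AJD and `poly_sq`
    holds the corresponding polynomial weights P(draw)^2."""
    xM = np.max(x_obs)                      # diagonal point max(x_t)*(1,...,1)
    below = np.all(draws < xM, axis=1)      # indicator {X < x^M componentwise}
    w = poly_sq / poly_sq.sum()             # normalize so that F_hat(+inf) = 1
    return float(np.sum(w * below))

# the MVaR at level alpha is then exceeded exactly when zM_score(...) < alpha
```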
4. Empirical Study

Data

To evaluate the performance of our model, we tested out-of-sample the forecasts of the joint distribution of the daily returns of the S&P500, Dow Jones and Nasdaq equity indices, as well as the forecasts of the joint distribution of the exchange rates of GBP, CHF and JPY measured against the USD. We also investigated the performance of our model in an inter-temporal dependence context by modelling the joint distribution of the returns of an asset over five consecutive business days.

Table 1 (Panel A) presents summary statistics for the continuously compounded daily return series of the equity indices computed from the raw prices. The mean returns are almost identical for all series, and close to zero. The ARCH(4) portmanteau test for up to fourth-order serial correlation in squared returns shows that all returns display significant volatility clustering and are highly leptokurtic (which is consistent with the existence of time-varying volatility). The returns for the Dow Jones are slightly negatively skewed, while the returns for the remaining two series are positively skewed. The three return series are relatively strongly correlated, with the highest correlation between the S&P500 and the Dow Jones, and the lowest between the Dow Jones and the Nasdaq. In line with previous evidence for daily returns, the null hypothesis of normality is strongly rejected by the Bera-Jarque statistic. Panel B of Table 1 reports the same summary statistics for the exchange rates, where, with the exception of the correlation coefficients, which are significantly lower, similar observations apply. In particular, the null hypothesis of normality is again strongly rejected for the exchange rates.

[Table 1]

The joint distributions of the log returns of n assets,

x_t = (x_{1,t}, ..., x_{n,t}) = 100 ( ln(p_{1,t}/p_{1,t−1}), ..., ln(p_{n,t}/p_{n,t−1}) ),

where p_{i,t} is the closing price at date t, were forecasted according to three models.

In the basic model N, the forecast f̂_{t+1} at date t+1 was simply the multinormal pdf with zero mean and the variance-covariance matrix estimated from the historical returns in the moving window (x_{t−T}, ..., x_t).

Model N2 used the EWMA specification (4) to forecast the second (co-)moments,

u_{v,t+1} = γ_{v,t} u_{v,t} + (1 − γ_{v,t}) x_{1,t}^{v_1} ⋯ x_{n,t}^{v_n},   (5)

for all vectors v = (v_1, ..., v_n) such that v_1 + ... + v_n = 2, with the parameters γ_{v,t} in (5) set to minimize the sum of squared historical forecast errors in the moving window (x_{t−T}, ..., x_t),

Σ_{s=t−T}^{t} ( u_{v,s} − x_{1,s}^{v_1} ⋯ x_{n,s}^{v_n} )².   (6)

(Polanski and Stoja (2009a) show that this procedure is equivalent to the ML estimation of γ_{v,t}.) The forecasted moments were used to construct the variance-covariance matrix Σ_{t+1} for the AJD f̂_{t+1} = N(0, Σ_{t+1}). N2 corresponds, therefore, to the EWMA-multinormal model (see Diebold et al., 1999).

Finally, in the model N24, the forecasted multinormal pdf from N2 was multiplied by the polynomial term

( Σ_v λ_v x_1^{v_1} ⋯ x_n^{v_n} )²   for all v = (v_1, ..., v_n) such that v_i ∈ {0, 2, 4} and v_1 + ... + v_n ∈ {0, 4},   (7)

which included the fourth (co-)moments. (We also experimented with a polynomial including the odd co-moments, but in our datasets they were found to have little impact on the negative tail of the joint density, which was our main focus.) The coefficients λ_v in the polynomial (7) were estimated by MM with the moment forecasts (5) for the vectors of exponents v from (7). As in N2, the parameters γ_{v,t} in (5) were set to minimize the sum of squared historical errors (6).
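To make the construction of N24 concrete, the sketch below enumerates the exponent vectors admitted by (7) and fits the EWMA parameter γ_v of (5) by minimizing the squared one-step forecast errors (6) over a grid. Grid search and the initialization of the EWMA recursion at the first observation are our simplifications, not choices stated in the paper.

```python
import numpy as np
from itertools import product

def n24_exponents(n):
    """Exponent vectors of (7): v_i in {0, 2, 4} and v_1+...+v_n in {0, 4}."""
    return [v for v in product((0, 2, 4), repeat=n) if sum(v) in (0, 4)]

def fit_ewma_gamma(x_pow, grid=np.linspace(0.01, 0.99, 99)):
    """Fit gamma_v on a window, where x_pow[t] = x_{1,t}^{v_1}...x_{n,t}^{v_n}.
    Returns the gamma minimizing the squared forecast errors (6)."""
    best_g, best_err = None, np.inf
    for g in grid:
        u, err = x_pow[0], 0.0               # assumed initialization u = x_pow[0]
        for xt in x_pow[1:]:
            err += (u - xt) ** 2             # one-step forecast error, Equation (6)
            u = g * u + (1 - g) * xt         # EWMA update, Equation (5)
        if err < best_err:
            best_g, best_err = g, err
    return best_g
```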
Results

The out-of-sample evaluation of the density forecasts was based on the scores z_t^M = F̂_t(x_t^M), where x_t^M = Max(x_{1,t}, ..., x_{n,t}) · (1, ..., 1). For all three models and for all data sets under study, Pearson's χ² test strictly rejected the null of a uniform distribution of the z_t^M-scores, with the resulting p-values virtually equal to zero in each case. None of the models, therefore, generates an acceptable forecast of the multivariate distribution of returns over the whole domain. However, risk managers and regulators are generally interested in the likelihood of large losses. If this is the case, then a model which accurately describes the extreme events while failing in the interior of the distribution will not be rejected. In other words, a model superior in forecasting the central part of the distribution will be eschewed in favour of another model which accurately forecasts the negative tail. This objective motivates the censored likelihood test of Berkowitz (2001), in which the observations not falling into the negative tail of the distribution (with the cut-off point decided by the user's requirements) are truncated.

Henceforth, we focus only on the negative tail of the distribution, which is the relevant part of the distribution for risk management purposes. We observed that the MVaR forecasts from N24 were accurate for relatively low values of the nominal level α in all data sets under study. Note that these MVaR are of particular importance for the assessment of the joint risk of the financial assets of interest.

In Table 2, we report the actual MVaR exceedance rates for different nominal levels α, together with Kupiec's (1995) unconditional (t_u) and Christoffersen's (1998) conditional (LR_c) test statistics for the joint density of the S&P500, Dow Jones and Nasdaq indices. The simple constant-higher-moments N and N2 models perform poorly, conditionally and unconditionally, with exceedance rates often more than twice the nominal level. In sharp contrast to N and N2, the inclusion of the fourth co-moments in the specification of N24 dramatically improves the quality of the forecasts, both conditionally and unconditionally. Similar conclusions can be drawn from Table 3, where we report the same statistics for the joint density of the GBP, CHF and JPY exchange rates. These findings confirm the importance of (even) higher (co-)moments in the density forecasting of financial data, as observed previously for unidimensional variables (Polanski and Stoja, 2009a).

[Table 2 and 3]
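The exceedance-rate checks reported in Tables 2-4 can be sketched as follows. The paper labels Kupiec's unconditional statistic t_u; we show the widely used likelihood-ratio form of his unconditional coverage test as one concrete implementation, which may differ from the exact statistic the authors computed.

```python
import numpy as np

def mvar_backtest(zM, alpha):
    """Hit rate and a Kupiec-style unconditional coverage LR statistic:
    a hit occurs when the score z^M_t falls below the nominal level alpha."""
    hits = np.asarray(zM) < alpha
    T, x = hits.size, int(hits.sum())
    rate = x / T
    p = min(max(rate, 1e-10), 1 - 1e-10)      # guard against log(0)
    loglik = lambda q: x * np.log(q) + (T - x) * np.log(1 - q)
    LR_uc = -2 * (loglik(alpha) - loglik(p))  # ~ chi2(1) under correct coverage
    return rate, LR_uc
```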
In addition to the joint density of returns on different assets, financial institutions may be interested in the joint density of extreme losses on the same asset over consecutive business days. Table 4 presents the actual MVaR exceedance rates for different nominal levels α, together with Kupiec's (1995) unconditional (t_u) and Christoffersen's (1998) conditional (LR_c) test statistics for the joint density of the returns on five consecutive business days for the Dow Jones. The out-of-sample period is 09/12/1976 to 07/09/2005 (7,500 daily observations were used to construct a sequence of 1,500 quintuple observations). Although for high α the improvements in the MVaR forecasts are not significant, for lower levels of α the dominance of the AJD over its simpler counterparts is overwhelming, both conditionally and unconditionally. This illustrates that incorporating higher co-moments into the joint density of returns yields significant improvements in MVaR forecasting, not only cross-sectionally but also in an intertemporal context.

[Table 4]

5. Summary and Conclusion

We propose a simple and flexible framework for forecasting the joint density of asset returns. The multinormal distribution is augmented with a polynomial in time-varying higher co-moments, where the coefficients of the polynomial are MM-estimated for a carefully selected set of co-moments. In an empirical study, we compare the proposed model with a range of other models widely used in the literature. Although a recently proposed goodness-of-fit test (Polanski and Stoja, 2009b) shows that none of the models examined provides an accurate description of the entire joint distribution of returns, the AJD performs well in the negative tail of the distribution. By focusing on the negative tail, which relates to the probability of joint extreme losses, we keep with standard risk management practice. In spite of its conceptual and computational simplicity, our framework appears to deliver highly accurate forecasts of the joint risk.

A consequence of this conceptual simplicity and flexibility is the framework's intuitive appeal. The co-moments used in the MM estimation relate directly to the shape of the joint distribution. Furthermore, the structure of the density forecasts is the same for an arbitrary number of assets. Regarding the computational cost, we note that the estimation of multi-dimensional functions can be kept tractable by specifying only a few co-moments in the objective function (2), whereas each forecast is evaluated by computing a single multidimensional integral. Possible extensions of the basic model could include the asymmetric dependence of returns on joint positive and negative shocks and/or the augmentation of functions other than the standard normal.

References

Aas, K., Czado, C., Frigessi, A., and Bakken, H., (2009), “Pair-copula constructions of multiple dependence”, Insurance: Mathematics and Economics, 44, 182-198.

Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D., (1999), “Coherent Measures of Risk”, Mathematical Finance, 9, 203-228.

Bae, K.-H., Karolyi, G.A., and Stulz, R.M., (2003), “A New Approach to Measuring Financial Contagion”, Review of Financial Studies, 16, 717-764.

Bali, T.G., and Weinbaum, D., (2007), “A Conditional Extreme Value Volatility Estimator Based on High Frequency Returns”, Journal of Economic Dynamics and Control, 31, 361-397.

Bahra, B., (1997), “Implied risk-neutral probability density functions from option prices: theory and application”, Working Paper No. 66, Bank of England, London.

Basle Committee on Banking Supervision, (1996), “Overview of the Amendment to the Capital Accord to Incorporate Market Risks”, January.

Berkowitz, J., (2001), “Testing Density Forecasts with Applications to Risk Management”, Journal of Business and Economic Statistics, 19, 465-474.

Breeden, D.T., and Litzenberger, R.H., (1978), “Prices of state-contingent claims implicit in options prices”, Journal of Business, 51, 621-651.
Christoffersen, P.F., (1998), “Evaluating Interval Forecasts”, International Economic Review, 39, 841-862.

Cotner, J.S., (1991), “Index option pricing: do investors pay for skewness?”, Journal of Futures Markets, 11, 1-8.

Diebold, F.X., Hahn, J., and Tay, A.S., (1999), “Multivariate Density Forecast Evaluation and Calibration in Financial Risk Management: High Frequency Returns on Foreign Exchange”, Review of Economics and Statistics, 81, 661-673.

Embrechts, P., McNeil, A., and Straumann, D., (2002), “Correlation and dependence properties in risk management: properties and pitfalls”, in M. Dempster (ed.), Risk Management: Value at Risk and Beyond, Cambridge University Press.

Engle, R.F., (1982), “Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of U.K. Inflation”, Econometrica, 50, 987-1008.