Assessing the deposition and canopy penetration of nozzles with different spray qualities in an oat (Avena sativa L.) canopy

J. Connor Ferguson a,*, Rodolfo G. Chechetto a,b, Andrew J. Hewitt a,c, Bhagirath S. Chauhan d, Steve W. Adkins a, Greg R. Kruger c, Chris C. O'Donnell a

a The University of Queensland, Gatton 4343, Queensland, Australia
b São Paulo State University - FCA, Department of Rural Engineering, Botucatu, São Paulo, Brazil
c University of Nebraska-Lincoln, North Platte 69101, NE, USA
d Queensland Alliance for Agriculture and Food Innovation (QAAFI), The University of Queensland, Toowoomba 4350, Queensland, Australia

Crop Protection 81 (2016) 14-19. Article history: received 4 September 2015; received in revised form 24 November 2015; accepted 26 November 2015; available online 17 December 2015.

Keywords: Spray quality; Application volume rate; Coverage; Droplet number density; Nozzle; Spray drift; Pesticide efficacy

Abstract

Consistent spray coverage that is evenly distributed throughout the canopy is necessary to control pest populations that can negatively affect yield. As applicators switch to Coarser spray quality nozzles to reduce the risk and liability of pesticide spray drift, concerns about efficacy loss are growing. Previous research has indicated that small droplets are the most effective at penetrating through crop canopies, but newer nozzle technologies have improved the effectiveness of larger droplet, or Coarser, sprays. Research was conducted to assess the canopy penetration of nozzles that produce Coarse, Very-Coarse and Extremely-Coarse spray qualities compared to nozzles that produce Fine and Medium spray qualities. Kromekote collectors were positioned in four configurations in an oat (Avena sativa L.) var. 'Yarran' (AusWest Seeds, Forbes, NSW, Australia) crop to quantify the coverage and droplet number densities (droplets cm⁻²) across three application carrier volume rates: 50, 75 and 100 L ha⁻¹. Applications were made in the field in 30 cm tall, tillering oats, with collectors arranged in a randomised complete block design with three replications. The entire study was repeated on the following day. Results showed that droplet number densities were inversely related to the droplet size produced by the nozzles, yet coverage was increased more by application volume rate than by droplet size. Thus, both spray drift reduction and improved canopy penetration can be achieved with proper nozzle selection and operating parameters for the control of agronomic pests.
© 2015 Elsevier Ltd. All rights reserved.

1. Introduction

Crop canopy penetration is important for pesticide efficacy, especially for the control of invertebrate and fungal pests. Poorly distributed sprays in a crop canopy reduce the effectiveness of a spray application (Uk and Courshee, 1982; Wolf et al., 2000), which can force growers and applicators to reapply their sprays to achieve adequate pest control. Post-emergence herbicide efficacy is directly related to the ability of the spray to penetrate through crop canopies (Knoche, 1994). Concerns about spray drift have led to the adoption of Venturi and other nozzle types which seek to increase droplet size to reduce this risk (Ferguson et al., 2015). Spray droplet size is the greatest factor affecting spray drift in broadacre crops (Hewitt, 1997), and droplets with diameters smaller than 100 μm have the greatest tendency to drift (Grover et al., 1978; Byass and Lake, 1977). When the droplet diameter is below 200 μm, deposition is influenced primarily by atmospheric and wake effects from the sprayer (Spillman, 1984), which often cause droplets to be captured in the upper portion of a plant canopy (Uk and Courshee, 1982).
Physics dictates that a projected object with more mass under the influence of gravity will have greater momentum, which should cause it to move deeper into dense canopies (Spillman, 1984).

Droplet size classification is based on a standard developed by the British Crop Protection Council (Southcombe et al., 1997) that has been updated and approved by the American Society of Agricultural and Biological Engineers (ASABE, formerly ASAE), producing the current version of its S572.1 standard in 2009 (ASAE, 2009). The droplet size classes according to the ASAE standard (in increasing droplet size order) are: Extremely-Fine, Very-Fine, Fine, Medium, Coarse, Very-Coarse, Extremely-Coarse, and Ultra-Coarse. The exact delineation of the size classes is based on a set of certified reference nozzles and each laboratory's droplet measurement system (ASAE, 2009).

Previous studies have examined droplet coverage and canopy penetration across a range of nozzle types (Knoche, 1994; Zhu et al., 2004; Derksen et al., 2008; Hanna et al., 2009; Wolf and Daggupati, 2009) and found varying results. The results generally fall into two categories: smaller droplets penetrate canopies better (Knoche, 1994; Wolf and Daggupati, 2009), or smaller droplets do not penetrate canopies better than large droplets (Zhu et al., 2004; Derksen et al., 2008; Hanna et al., 2009). Many of the studies appraised in the Knoche (1994) review occurred well before Venturi nozzle technology was developed. Wolf and Daggupati (2009) observed improved canopy penetration below a dense soybean (Glycine max (L.) Merr.) crop from Fine and Medium spray quality nozzles; nozzles in that study were classified as Fine, Medium or Coarse. Hanna et al. (2009) compared Fine, Medium and Coarse spray quality nozzles at three heights in a soybean canopy but, unlike Wolf and Daggupati (2009), observed no relationship between droplet size and canopy penetration or coverage. Venturi nozzles led to higher deposits in a peanut (Arachis hypogaea L.) canopy than non-Venturi nozzles across three collector heights, and especially at the bottom collector (Zhu et al., 2004). This result was also observed by Derksen et al. (2008), where Fine spray quality nozzles reduced deposits compared to Coarser spray quality nozzles in a soybean crop.

Application carrier volume rate has an effect on the performance of certain pesticides. In some cases, a lower volume rate can increase both coverage and efficacy (Fritz et al., 2005, 2007).
Smaller droplets tend to have a greater affinity for plant surfaces, especially on grasses because of their primarily vertical orientation (Lake, 1977; Spillman, 1984), but this does not seem to translate into greater herbicide efficacy. In 56% of the studies examined by Knoche (1994), a decrease in the application volume rate led to an improvement in, or no effect on, the efficacy of herbicides. This is consistent with other studies conducted after 1994 (McMullan, 1995; Etheridge et al., 2001; Ramsdale and Messersmith, 2001). Changing the spray droplet size will impact the deposition, but the effect on coverage is not as well correlated (Hanna et al., 2009; Wolf and Daggupati, 2009).

Relatively few studies have explored the effect of nozzle type and droplet size on spray deposition and cereal canopy penetration at a fixed liquid flow rate and spray pressure. The objectives of this study were to: 1) assess the coverage and droplet number densities (droplets cm⁻²) from five different spray quality nozzles with four collector arrangements in an oat canopy; 2) quantify crop canopy penetration with different droplet size classes to determine whether a greater percentage of small droplets in a spray improves crop canopy penetration; and 3) understand the relationship between spray volume rate and crop canopy penetration.

2. Materials and methods

2.1. Nozzles and application parameters

A study to examine canopy penetration and coverage across nozzle type and application volume rate was conducted at the University of Queensland, Gatton, Queensland, Australia. Nozzle types that span five spray qualities at a standard pressure and flow rate were selected to compare deposition and canopy penetration in an oat canopy. The study compared the nozzle types listed in Table 1 across five different ASABE S572.1 spray quality categories as tested in Table 2. The XR11003VS operated at 300 kPa was included for comparison, as it is the reference nozzle used in international drift reduction technology (DRT) studies (van de Zande et al., 2002; ISO, 2006, 2008, 2010). Each nozzle except the XR11003VS (which was operated at 300 kPa) was operated at 350 kPa. This pressure was selected based on a previous study (Ferguson et al., 2015), in which it fell within the standard operating parameters for each nozzle type. Treatments were applied using a 6 m pull-behind quad bike sprayer. Application volume rates in the study were 50, 75 and 100 L ha⁻¹, applied at the driving speeds listed in Table 1. Nozzle spacing was 50 cm and boom height was 85 cm above the ground, 50 cm above the top collectors.
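The driving speeds in Table 1 follow from the standard broadacre application-rate relation: volume rate (L ha⁻¹) = nozzle flow (L min⁻¹) × 60 000 / (speed (km h⁻¹) × nozzle spacing (cm)). The short Python sketch below rearranges this relation for speed; the nominal flow of roughly 1.2 L min⁻¹ for an 03-orifice nozzle at 300 kPa is an assumption for illustration, not a value reported in the paper.

```python
def required_speed_kmh(flow_l_min: float, volume_l_ha: float, spacing_cm: float = 50.0) -> float:
    """Driving speed needed to deliver a target volume rate.

    Standard relation: volume (L/ha) = flow (L/min) * 60000 / (speed (km/h) * spacing (cm)),
    rearranged here for speed.
    """
    return flow_l_min * 60000.0 / (volume_l_ha * spacing_cm)

# Illustration only: ~1.2 L/min assumed for an 03 orifice at 300 kPa (not taken from the paper).
for volume in (50.0, 75.0, 100.0):
    print(f"{volume:5.0f} L/ha -> {required_speed_kmh(1.2, volume):4.1f} km/h")
# Prints roughly 28.8, 19.2 and 14.4 km/h, matching the reference-nozzle speeds in Table 1.
```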
2.2. Collector description and placement

Collectors were 3 × 5 cm Kromekote cards, sprayed with water plus a 1 g L⁻¹ addition of Brilliant Blue dye (Tintex Dyes, Kelvin Grove, QLD, Australia), and placed at one of four positions: ground, lower canopy, middle canopy and top canopy. The use of Kromekote cards for deposition analysis has been described previously (Johnstone, 1960; Higgins, 1967; Hewitt and Meganasa, 1993). Each collector was positioned on a 5 × 8 cm flat metal plate attached to a post driven into the ground at various depths (Fig. 1). The ground cards were placed 10 cm above the ground and left in open space. The lower canopy cards were placed on metal plates at 17 cm above the ground, with four 30 cm tall oat plants in 10 cm pots placed in a square around the collector. The middle canopy collectors were positioned at 20 cm above the ground, 10 cm above the soil surface of oat plants arranged as described for the lower canopy. The top collectors were placed at 35 cm above the ground, just above the tips of the oat plants, positioned as described above (Fig. 1). The collectors were arranged in a randomised complete block design with three replications. There were 576 cards in total, 72 for each nozzle type and six for each nozzle type by spray volume rate by collector position combination. The study was applied on June 9th and the entire study was repeated on June 10th, 2015.

2.3. Oat planting and growing conditions

Oats were planted individually (one seed per pot) at a 6 cm depth. Seeds were planted into plastic pots (10 × 10 cm diameter) filled with 0.5 L of a standard University of Queensland Gatton nursery potting media (1 m³ of composted pine bark (2-10 mm); 2 kg Osmocote Plus 8-9 month (NPK: 15.0 + 3.9 + 9.1 g plus 1.5 g Mg and trace elements); 1 kg Osmocote Exact+ 3-4 month (NPK: 16.0 + 5 + 9.2 g plus 1.8 g Mg and trace elements); 2 kg Nutricote 7 month (NPK: 16.0 + 4.4 + 8.3 g plus trace elements); 1.2 kg Saturaid granular wetting agent; 1.2 kg Dolomite; 1.3 kg Osmoform 4 month release (IBDU) (NPK: 18.0 + 2.2 + 11.0 g plus trace elements)). Plants were grown outside and irrigated twice daily. Pots were placed into 0.5 × 0.4 m trays (20 pots per tray) and the trays were rearranged every 4 days in a completely randomised design to ensure growing conditions were evenly spread across pots. At the time of the study, the oat plants were at the tillering stage, measuring 25-32 cm in height. Plants were placed around the collector positions as described above; plants were removed after the first day of spraying and new plants were placed out on the second day.

2.4. Card analysis

Each card was separately photographed on a light table using a high-resolution digital single-lens-reflex (DSLR) camera positioned 10 cm above the light table. Photographs of the sprayed cards were analysed using ImageJ software (Rasband, 2008). Each card image was cropped to remove background area, converted to 8-bit format, and then individually threshold-adjusted to ensure that only sprayed droplets were included in the analysis. Each image was analysed for droplet number density and percent coverage. Coverage was determined as the percent of the card covered by the blue dye of deposited droplets.
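The card workflow above (crop, convert to 8-bit, threshold, measure stained area and count stains) was run in ImageJ; the fragment below is a minimal Python sketch of the same two measurements using Pillow, NumPy and SciPy, not a reproduction of the authors' ImageJ procedure. The file name and threshold value are hypothetical, and the 3 × 5 cm card area is taken from the collector description.

```python
import numpy as np
from PIL import Image
from scipy import ndimage

CARD_AREA_CM2 = 3.0 * 5.0          # Kromekote card dimensions used in the study

def analyse_card(path: str, threshold: int = 128):
    """Return (percent coverage, droplets per cm^2) for one cropped card image.

    Mirrors the ImageJ steps: 8-bit greyscale, threshold so that only dye
    stains are selected, then measure stained area and count stains.
    """
    grey = np.asarray(Image.open(path).convert("L"))   # 8-bit greyscale
    stains = grey < threshold                          # dark blue deposits on a white card
    coverage_pct = 100.0 * stains.mean()               # stained pixels / all pixels
    _, n_droplets = ndimage.label(stains)              # connected stains = droplet deposits
    density = n_droplets / CARD_AREA_CM2               # droplets cm^-2
    return coverage_pct, density

# Example call (hypothetical file name):
# cov, dens = analyse_card("card_top_AIXR_75Lha.png")
```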
2.5. Droplet size analysis

The sprays were analysed for droplet size distribution spectra in the Centre for Pesticide Application and Safety (CPAS) Wind Tunnel Research Facility at the University of Queensland on June 19, 2015. Wind speed was set at 8.0 m s⁻¹, the speed required to avoid a spatial sampling bias (SDTF, 1997). Each treatment was analysed on a laser diffraction instrument (Sympatec Helos, Sympatec Inc., Clausthal, Germany) to measure the droplet size from each nozzle type. The laser diffraction instrument was placed 30 cm downwind from the nozzle to allow for full liquid sheet breakup. Nozzles were actuated in a downward direction to allow the spray plume to pass through the measurement area for 9 s per measurement. The volumetric droplet size spectra parameters selected for data interpretation were the Dv0.1, the Dv0.5, and the percentage of the spray volume contained in droplets with a diameter < 150 μm (often used to classify the 'driftable fines' for a treatment). These parameters were selected because they are widely used to assess spray drift potential (Dv0.1 and % < 150 μm) (SDTF, 1997) and efficacy potential (Dv0.5). The Dv0.1 is the diameter at or below which 10% of the spray volume is contained. The Dv0.5 (volume median diameter) is the diameter at which half of the spray volume is contained in larger droplets and half in smaller droplets; it helps classify sprays for efficacy potential and establish the size classification of each treatment. To help classify the nozzle treatments, the ASABE reference nozzles were also measured at the same time, a method consistent with ASABE S572.1 (ASAE, 2009).

Fig. 1. Layout of one replication of the oat plants with the lower collector, middle collector, ground collector and top card placement from left to right. The collectors were arranged in a randomised complete block design with 3 replications.

Table 1
Nozzles listed with their respective operating parameters and the driving speeds for each of the three application volume rates used in the study.

Nozzle type | Angle and flow rate | Manufacturer | Operating pressure (kPa) | Driving speed (km h⁻¹) at 50 L ha⁻¹ | at 75 L ha⁻¹ | at 100 L ha⁻¹
XR (reference) | 11003 | Spraying Systems Inc. | 300 | 28.8 | 19.2 | 14.4
XR | 11002 | Spraying Systems Inc. | 350 | 20.7 | 13.8 | 10.4
TT | 11002 | Spraying Systems Inc. | 350 | 20.7 | 13.8 | 10.4
TDADF | 11002 | Agrotop GmbH | 350 | 20.7 | 13.8 | 10.4
AIXR | 11002 | Spraying Systems Inc. | 350 | 20.7 | 13.8 | 10.4
MD | 11002 | Hardi International | 350 | 20.7 | 13.8 | 10.4
AI | 11002 | Spraying Systems Inc. | 350 | 20.7 | 13.8 | 10.4
TTI | 11002 | Spraying Systems Inc. | 350 | 20.7 | 13.8 | 10.4

Table 2
Droplet size distribution and ASABE S572.1 spray quality classification based on Dv0.1 and Dv0.5 for the nozzles used in this study and the ASABE reference nozzles measured on the same day.

Nozzle | Pressure (kPa) | Dv0.1 (μm) | Dv0.5 (μm) | % < 150 μm | ASABE classification
11001 | 450 | 57 | 112 | 73.5 | Very-Fine/Fine
XR 11002 | 350 | 76 | 157 | 43.3 | Fine
XR 11003 | 300 | 90 | 213 | 25.5 | Fine
11003 | 300 | 95 | 220 | 26.2 | Fine/Medium
TurboTeejet 11002 | 350 | 120 | 279 | 15.9 | Medium
TDADF 11002 | 350 | 176 | 365 | 6.6 | Medium
Mini Drift 11002* | 350 | 167 | 375 | 7.6 | Medium
11006 | 200 | 180 | 373 | 6.0 | Medium/Coarse
Mini Drift 11002* | 350 | 167 | 375 | 7.6 | Coarse
AIXR 11002 | 350 | 188 | 381 | 6.2 | Coarse
8008 | 250 | 222 | 456 | 3.5 | Coarse/Very-Coarse
AI 11002 | 350 | 261 | 499 | 2.0 | Very-Coarse
6510 | 200 | 288 | 585 | 1.6 | Very-Coarse/Extremely-Coarse
TTI 11002 | 350 | 360 | 722 | 0.6 | Extremely-Coarse
6515 | 150 | 382 | 729 | 0.5 | Extremely-Coarse/Ultra-Coarse

*The Mini Drift 11002 had a Dv0.1 classified as Medium but a Dv0.5 classified as Coarse. The ASAE S572 standard approach would classify it as Medium, but efficacy studies generally use the Dv0.5 for droplet size classification. It has therefore been included twice and classified differently based on the Dv0.1 or the Dv0.5.
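For reference, the Dv0.1, the Dv0.5 and the percentage of spray volume in droplets below 150 μm can all be read off the cumulative volume distribution reported per size bin by a laser-diffraction instrument. The sketch below shows that interpolation on hypothetical bin data; the bin edges and volume fractions are invented for illustration and are not values from Table 2.

```python
import numpy as np

# Hypothetical laser-diffraction output: upper bin diameters (um) and % of spray volume per bin.
bin_upper_um = np.array([50, 100, 150, 200, 300, 400, 600, 900], dtype=float)
vol_pct      = np.array([ 2,   6,   9,  14,  28,  21,  15,   5], dtype=float)

cum = np.cumsum(vol_pct) / vol_pct.sum() * 100.0   # cumulative % of volume up to each bin edge

def dv(p: float) -> float:
    """Diameter below which p% of the spray volume is contained (linear interpolation)."""
    return float(np.interp(p, cum, bin_upper_um))

dv01, dv05 = dv(10.0), dv(50.0)
pct_below_150 = float(np.interp(150.0, bin_upper_um, cum))   # 'driftable fines' indicator

print(f"Dv0.1 = {dv01:.0f} um, Dv0.5 = {dv05:.0f} um, % < 150 um = {pct_below_150:.1f}")
```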
2.6. Statistical analyses

Collector coverage and droplet number density were analysed in separate generalised linear mixed models (PROC GLIMMIX) in SAS (Statistical Analysis Software, version 9.4, Cary, North Carolina, USA), with means separations made at the α = 0.05 level. Both models were analysed separately for each of the four collector placements: coverage or droplet number density = nozzle type × volume rate × application day, with block as the random term. Fixed effects were nozzle type, application volume rate and application day. The denominator degrees of freedom (df) were protected from bias through the inclusion of the Kenward-Roger adjustment for the generalised linear mixed model (Kenward and Roger, 1997). The Sidak adjustment was included in comparisons of variables to improve the power of, and confidence in, the reported differences (Sidak, 1967). Application day was not significant, thus data were pooled, giving six replications of each nozzle × application volume rate × collector placement treatment. A separate generalised linear mixed model was constructed for the droplet size data, in which the Dv0.5 with water was analysed by nozzle type. The Kenward-Roger and Sidak adjustments were both included as described above.
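The analysis above was run in SAS PROC GLIMMIX with Kenward-Roger and Sidak adjustments. As a rough illustration of the model structure only, the sketch below fits a comparable linear mixed model in Python with statsmodels, which provides neither the Kenward-Roger degrees-of-freedom correction nor Sidak-adjusted means separation; it is a structural stand-in, not a reproduction of the authors' analysis. The file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per card, with columns for nozzle, volume rate,
# application day, block, collector position and the measured response (e.g. coverage).
cards = pd.read_csv("card_measurements.csv")

# Fixed effects: nozzle type, application volume rate and application day (with interactions);
# random effect: block. Fitted separately for each collector position, as in Section 2.6.
for position, subset in cards.groupby("collector_position"):
    model = smf.mixedlm("coverage ~ C(nozzle) * C(volume_rate) * C(day)",
                        data=subset, groups=subset["block"])
    result = model.fit()
    print(position)
    print(result.summary())
```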
3. Results

3.1. Application conditions over both study days

Environmental conditions did not vary across the two application days, except that the temperature was 3 °C cooler and the relative humidity was 11% higher on the second day (Table 3). Neither coverage nor droplet number density differed significantly across days, thus data points were pooled.

Table 3
Weather data for the two days of the study, June 9 and 10, 2015. Weather data were summarised only for the hours of the day during which spraying occurred.

Date | Avg. temp (°C) | Avg. dew point (°C) | Avg. RH (%) | Wind speed and direction (km hour⁻¹ / NNE) | Gust (km hour⁻¹)
09/06/2015 | 20.7 | 11.6 | 56.9 | 8.0 | 11.5
10/06/2015 | 17.7 | 11.4 | 67.8 | 8.1 | 12.3

3.2. Droplet size analysis

Nozzles were classified based on results from the ASABE reference nozzles measured during the same wind tunnel period. When operated at 350 kPa, the TDADF 11002, AI 11002 and TTI 11002 were all classified in a droplet size category smaller than the classification listed in their respective nozzle catalogues (Table 2). The greatest difference in droplet size was across nozzles of different spray qualities. The AIXR 11002 and MD 11002 were not different at the Dv0.5 (Table 4). The Fine and Medium spray quality nozzles were not similar, and each of the other spray qualities differed from one another (Table 4).

3.3. Droplet number density

Nozzle type was significant (p < 0.001) for droplet number density, as was the nozzle type by spray volume rate interaction (p = 0.001) (Table 5). The Fine spray quality XR 11002 nozzle had the highest droplet number density across collector positions (Table 5), followed by the larger-Dv0.5 Fine spray quality nozzle, the XR 11003. The trend continued with the Medium spray quality nozzles (TT 11002, TDADF 11002) having the next highest droplet number density, down to the TTI 11002, an Extremely-Coarse spray quality nozzle, which produced the lowest droplet number density across collector types (Table 4). There was no correlation between droplet number density and coverage. The canopy appeared to have little effect on droplet number density across nozzle types. In the case of the Coarse, Very-Coarse and Extremely-Coarse spray quality nozzles (AIXR 11002, MD 11002, AI 11002 and TTI 11002), the canopy increased the droplet number density. This was not the case with the XR 11002, XR 11003 and TT 11002 (Table 4). The significant nozzle by volume rate interaction was dependent on nozzle type. For the Fine spray quality nozzles (XR 11002 and XR 11003), increasing the application volume from 50 to 100 L ha⁻¹ reduced the droplet number density (from 115 to 90 and from 99 to 82 droplets cm⁻², respectively). The Medium spray quality nozzle (TT 11002), the Medium/Coarse spray quality nozzle (MD 11002) and the Very-Coarse spray quality nozzle (AI 11002) did not show a change in droplet number density. A trend of increased droplet number density with increased volume rate was observed for the TDADF 11002, AIXR 11002 and TTI 11002.

3.4. Coverage

The relationship between coverage and application volume rate was significant (p < 0.001), as was the relationship between coverage and nozzle type (p < 0.001), for each collector position (Table 6). The 100 L ha⁻¹ volume rate resulted in the highest coverage across nozzle and collector type (39%) and the 50 L ha⁻¹ volume rate resulted in the lowest coverage (21%). The 75 L ha⁻¹ volume rate resulted in an overall coverage of 30%, which fits a linear trend in which each 25 L ha⁻¹ increase in application volume rate increases coverage by 9% (Table 7). Coverage at the top collector placement was similar between the Fine (XR 11002, XR 11003) and Medium (TT 11002) spray quality nozzles (Table 2). Coverage was also similar across the rest of the nozzles classified as Medium or Coarser, including the Extremely-Coarse TTI 11002. The XR 11002 and the XR 11003 always had similar coverage at each collector position (Table 2). This was also the case for the AI 11002 and the TTI 11002. At the middle canopy placement, the XR 11002 had a coverage similar to that of the Medium (TDADF 11002) and Coarse spray quality nozzles (AIXR 11002 and MD 11002). At the lower canopy collector position, the XR 11003 was similar to the AIXR 11002 and TDADF 11002, but not to the TT 11002, which resulted in a lower coverage (Table 4). At the ground collector position, the Fine XR 11002 was similar to the Coarse AIXR 11002.

3.5. Canopy penetration

Canopy penetration was similar across all nozzle types. There was no observed difference in coverage across nozzle type when volume rate was held constant. Differences were observed in coverage between the top card and the ground card (p = 0.002), but no differences were observed in droplet number density (Table 4). The middle and lower canopy cards had the greatest droplet number density, followed by the top and ground cards, respectively. The ground collector, which had no plant material near the card, resulted in the lowest coverage for five of the eight nozzles.

4. Discussion

The droplet number densities were strongly correlated with the droplet size categories of the nozzles used in the study: the smaller the Dv0.5, the greater the droplet number density (Tables 2 and 4).
The interaction of volume rate with droplet number density was nozzle dependent, but this was not the case for the interaction of volume rate with coverage, which was consistent across nozzle types (Table 7). This result agrees with those of both Wolf and Daggupati (2009) and Hanna et al. (2009) (Table 5). Coverage was not as strongly correlated with droplet size, although the nozzles that resulted in the highest and lowest coverage across collector placements were the Finest and Coarsest spray quality nozzles, respectively (Table 4). This result is consistent with Wolf and Daggupati (2009) and Hanna et al. (2009), who also observed no clear trend between spray quality and coverage even though there was a clear link between droplet size and droplet number density. Coverage also varied with nozzle type. Across two collector positions, the AIXR 11002 had a higher coverage than the TDADF 11002 (Table 4). This was even more apparent where the AIXR 11002 had a higher coverage at each collector position than the TT 11002.

The coverage and the droplet number density did not appear to be affected by the presence of the canopy, and in some cases the canopy increased both. This suggests that the nozzle types were able to penetrate the canopy to an equal degree, or that the canopy was not dense enough to have an effect. Studies by Wolf and Daggupati (2009), Hanna et al. (2009), Zhu et al. (2004) and Derksen et al. (2008) examined canopy penetration in dense soybean or peanut canopies and did observe some nozzle effect on canopy penetration. Wolf and Daggupati (2009) observed improved canopy penetration with Finer spray quality nozzles, whereas the others observed equal or better canopy penetration with Coarser spray quality nozzles (Zhu et al., 2004; Derksen et al., 2008; Hanna et al., 2009). The oat canopy is structurally different from both a peanut and a soybean canopy, and thus it may be difficult to compare results across these crops. The Wolf and Daggupati (2009) study used collectors at only one location (the bottom of the canopy), so the total canopy effect, which would have been observable with multiple collector positions within the canopy, is unknown for that study. Results from the present study suggest that Coarser spray qualities are aided by the presence of the canopy, as the Coarse, Very-Coarse and Extremely-Coarse spray quality nozzles saw an increase in coverage and droplet number density.

Table 4
Droplet deposition results for each nozzle type listed by collector position, and the statistical breakdown for each model.

Nozzle | Pressure (kPa) | Dv0.5 (μm) | Ground density (cm⁻²) | Ground coverage (%) | Lower density (cm⁻²) | Lower coverage (%) | Middle density (cm⁻²) | Middle coverage (%) | Top density (cm⁻²) | Top coverage (%)
XR 11003 (reference) | 300 | 213 E | 92 A | 35.1 ab | 90 AB | 34.4 ab | 96 A | 36.4 ab | 89 A | 37.7 ab
AI 11002 | 350 | 499 B | 37 CD | 20.2 e | 46 DC | 24.5 d | 37 CD | 22.9 cd | 34 BC | 25.2 c
AIXR 11002 | 350 | 381 C | 57 B | 30.9 abc | 56 C | 31.7 bc | 54 BC | 30.7 ab | 53 B | 34.2 abc
Mini Drift 11002 | 350 | 375 C | 46 BC | 29.3 bc | 52 C | 28.4 cd | 57 BC | 30.5 ab | 56 B | 30.9 abc
TDADF 11002 | 350 | 365 C | 60 B | 28.5 c | 57 C | 31.9 bc | 62 B | 34.2 ab | 57 B | 32.1 abc
TurboTeejet 11002 | 350 | 279 D | 82 A | 26.8 cd | 79 B | 26.9 cd | 100 A | 29.0 bc | 88 A | 30.1 abc
TTI 11002 | 350 | 722 A | 26 D | 21.0 de | 30 D | 23.5 d | 29 D | 21.5 d | 27 C | 26.0 bc
XR 11002 | 350 | 157 F | 95 A | 36.6 a | 104 A | 37.4 a | 100 A | 37.5 a | 105 A | 38.3 a

Each column was analysed separately using a generalised linear mixed model as described in Section 2.6. Different letters indicate statistical significance at α = 0.05 with Sidak's adjustment; uppercase letters denote differences in density and lowercase letters denote differences in coverage.
Table 5
Droplet number density Type III test of fixed effects across nozzle type, spray volume rate and collector position, with the Kenward-Roger adjustment.

Effect | Num df | Den df | F value | Pr > F
nozzle | 7 | 475 | 119.69 | <0.001
collector position | 3 | 475 | 1.54 | 0.204
nozzle × collector position | 21 | 475 | 0.91 | 0.574
volume rate | 2 | 475 | 0.07 | 0.931
nozzle × volume rate | 14 | 475 | 3.06 | 0.001
collector position × volume rate | 6 | 475 | 1.48 | 0.183
nozzle × collector position × volume rate | 42 | 475 | 0.52 | 0.995

Table 6
Coverage Type III test of fixed effects across nozzle type, spray volume rate and collector placement, with the Kenward-Roger adjustment.

Effect | Num df | Den df | F value | Pr > F
nozzle | 7 | 476 | 33.83 | <0.001
collector position | 3 | 476 | 4.42 | 0.005
nozzle × collector position | 21 | 476 | 0.52 | 0.961
volume rate | 2 | 476 | 262.51 | <0.001
nozzle × volume rate | 14 | 476 | 0.53 | 0.915
collector position × volume rate | 6 | 476 | 0.34 | 0.918
nozzle × collector position × volume rate | 42 | 475 | 0.34 | 1.000

Table 7
Coverage by volume rate comparisons, pooled across nozzle type and collector placement, with Sidak's adjustment.

Volume rate (L ha⁻¹) | Estimate (%) | Adj. p value vs. 100 L ha⁻¹ | Adj. p value vs. 75 L ha⁻¹
100 | 39 | – | –
75 | 30 | <0.001 | –
50 | 21 | <0.001 | <0.001

The spray volume rate effects on coverage across nozzle type and collector placement were clear (Table 6), but the effect was evenly distributed throughout the canopy. The higher volume rate led to greater droplet deposition across the entire study. The results obtained from this study (Table 4) show the importance of proper nozzle selection, which can allow the applicator to select a Coarser spray quality nozzle such as the AIXR 11002 and improve coverage compared to a smaller-droplet-producing nozzle such as the TT 11002. This was also observed with the Coarser spray quality TTI, which had a coverage similar to that of the AI across collector placements. Such a choice can effectively reduce the risk of spray drift, as the AIXR had a 2.5-fold decrease in droplets smaller than 150 μm compared to the TT, and the TTI had a 4-fold decrease in droplets smaller than 150 μm compared to the AI. With growing concern about spray drift, results from this study can help allay the concerns about reduced coverage, and subsequently reduced efficacy, that accompany discussions of adopting Coarser spray quality nozzles. There was also no loss in coverage due to the canopy for either the Very-Coarse AI 11002 or the Extremely-Coarse TTI 11002, which means that both could be adopted without a perceived loss in efficacy.

Extrapolating the results of this study to herbicide efficacy is difficult, but results from previous studies (Knoche, 1994; McMullan, 1995; Etheridge et al., 2001; Ramsdale and Messersmith, 2001) suggest that reduced coverage from a reduced application volume rate did not affect efficacy with some pesticides. It is clear that coverage is reduced when spray volume rate is decreased (Table 7). These results would suggest, then, that a minimum coverage is necessary to achieve acceptable levels of weed control. This level of coverage can be achieved with Coarser spray quality nozzles as long as the spray volume rate is sufficient.
The trend with coverage in this study suggests that between 50 L ha⁻¹ and 100 L ha⁻¹ there is an 18% gain in coverage for the 50 L ha⁻¹ increase in spray volume rate (Table 7). This trend is unlikely to continue indefinitely, but it was consistent across nozzle types between these volume rates.

In summary, droplet size is an important factor influencing spray deposition and canopy penetration in most applications. It was observed that, with the optimal nozzle choice, a Coarser spray quality nozzle can be selected to minimise spray drift potential and maximise coverage. Application volume rate affected coverage across collector placements, but there was less correlation with the volume rate effects on droplet number density, which were nozzle dependent. Canopy effects aided the deposition of Coarser spray quality nozzles and did not reduce the coverage or droplet number density for any nozzle type. Future efficacy studies will be conducted to apply the results from this study, to understand how volume rate and coverage affect the control of weeds, and to determine whether a minimum level of biologically effective coverage can be identified to further aid nozzle and technology selection that reduces spray drift while achieving acceptable levels of control.

Acknowledgements

The authors acknowledge the Grains Research and Development Corporation of Australia (GRDC) for their support of this work through the project titled "Options for improved insecticide and fungicide use and canopy penetration in cereals and canola", UWA00165. The authors also thank Peter Tame of AusWest Seeds for his support of this work by providing seed for this study.

References

ASAE, 2009. Spray Nozzle Classification by Droplet Spectra. Standard 572.1. American Society of Agricultural and Biological Engineers, St. Joseph, MI.
Byass, J.B., Lake, J.R., 1977. Spray drift from a tractor-powered field sprayer. Pest. Sci. 8, 117-126.
Derksen, R.C., Zhu, H., Ozkan, H.E., Hammond, R.B., Dorrance, A.E., Spongberg, A.L., 2008. Determining the influence of spray quality, nozzle type, spray volume, and air-assisted application strategies on deposition of pesticides in soybean canopy. Trans. ASAE 51, 1529-1537.
Etheridge, R.E., Hart, W.E., Hayes, R.M., Mueller, I.C., 2001. Effect of venturi type nozzles and application volume on post-emergence herbicide efficacy. Weed Technol. 15, 75-80.
Ferguson, J.C., O'Donnell, C.C., Chauhan, B.S., Adkins, S.W., Kruger, G.R., Wang, R., Urach Ferreira, P.H., Hewitt, A.J., 2015. Determining the uniformity and consistency of droplet size across spray drift reducing nozzles in a wind tunnel. Crop Prot. 76, 1-6.
Fritz, B.K., Kirk, I.W., Hoffmann, W.C., Martin, D.E., Hofman, V.I., Hollingsworth, C., McMullen, M., Halley, S., 2005. Aerial application methods for increasing spray deposition on wheat heads. Appl. Eng. Agric. 22, 357-364.
Fritz, B.K., Hoffmann, W.C., Martin, D.E., Thomson, S.J., 2007. Aerial application methods for increasing spray deposition on wheat heads. Appl. Eng. Agric. 23, 709-715.
Grover, R., Kerr, L.A., Maybank, J., Yoshida, K., 1978. Field measurements of droplet drift from ground sprayers. Can. J. Plant Sci. 58, 611-622.
Hanna, H.M., Robertson, A.E., Carlton, W.M., Wolf, R.E., 2009. Nozzle and carrier application effects on control of soybean leaf spot diseases. Appl. Eng. Agric. 25, 5-13.
Hewitt, A.J., 1997. The importance of droplet size in agricultural spraying. Atomization Sprays 7, 235-244.
Hewitt, A.J., Meganasa, T., 1993. Droplet distribution densities of a pyrethroid insecticide within grass and maize canopies for the control of Spodoptera exempta larvae. Crop Prot. 12, 59-62.
Higgins, A.H., 1967. Spread factors for technical malathion. J. Econ. Entomol. 62, 912-916.
ISO 22369-2, 2010. Crop Protection Equipment - Drift Classification of Spraying Equipment - Part 2: Classification of Field Crop Sprayers by Field Measurements. International Organization for Standardization, Geneva.
ISO 22856, 2008. Equipment for Crop Protection - Methods for the Laboratory Measurement of Spray Drift - Wind Tunnels. International Organization for Standardization, Geneva.
ISO 22369-1, 2006. Crop Protection Equipment - Drift Classification of Spraying Equipment - Part 1: Classes. International Organization for Standardization, Geneva.
Johnstone, D.R., 1960. Assessment Techniques 2. Photographic Paper. CPRU Porton Report No. 177, p. 13. Mimeographed.
Kenward, M.G., Roger, J.H., 1997. Small sample inference for fixed effects from restricted maximum likelihood. Biometrics 53, 983-997.
Knoche, M., 1994. Effect of droplet size and carrier volume on performance of foliage-applied herbicides. Crop Prot. 13, 163-178.
Lake, J.R., 1977. The effect of drop size and velocity on the performance of agricultural sprays. Pest. Sci. 8, 515-520.
McMullan, P.M., 1995. Effect of spray volume, spray pressure and adjuvant volume on efficacy of sethoxydim and fenoxaprop-p-ethyl. Crop Prot. 14, 549-554.
Ramsdale, B.K., Messersmith, C.G., 2001. Drift-reducing nozzle effects on herbicide performance. Weed Technol. 15, 453-460.
Rasband, W.S., 2008. ImageJ. U.S. National Institutes of Health, Bethesda, Maryland, USA, pp. 1997-2008.
SDTF, 1997. Spray Drift Task Force Study A95-010, Miscellaneous Nozzle Study. EPA MRID No. 44310401.
Sidak, Z., 1967. Rectangular confidence regions for the means of multivariate normal distributions. J. Am. Stat. Assoc. 62, 626-633.
Southcombe, E.S.E., Miller, P.C.H., Ganzelmeier, H., Van de Zande, J.C., Miralles, A., Hewitt, A.J., 1997. The international (BCPC) spray classification system including a drift potential factor. In: Proceedings of the Brighton Crop Protection Conference - Weeds. Brighton, UK. 5A-1, pp. 371-380.
Spillman, J.J., 1984. Spray impaction, retention and adhesion: an introduction to basic characteristics. Pest. Sci. 15, 97-106.
Uk, S., Courshee, R.J., 1982. Distribution and likely effectiveness of spray deposits within a cotton canopy from fine ultralow-volume spray applied by aircraft. Pest. Sci. 13, 529-536.
van de Zande, J.C., Porskamp, H.A.J., Holterman, H.J., 2002. Influence of reference nozzle choice on spray drift classification. Aspects Appl. Biol. 66, Int. Adv. Pest. Appl., 49-56.
Wolf, R.E., Daggupati, N.P., 2009. Nozzle type effect on soybean canopy penetration. Appl. Eng. Agric. 25, 23-30.
Wolf, T.M., Harrison, S.K., Hall, F.R., Cooper, J., 2000. Optimizing postemergence herbicide deposition and efficacy through application variables in no-till systems. Weed Sci. 48, 761-768.
Zhu, H., Dorner, J.W., Rowland, D.L., Derksen, R.C., Ozkan, H.E., 2004. Spray penetration into peanut canopies with hydraulic nozzle tips. Biosyst. Eng. 87, 275-283.
Knowledge brokering on emissions modelling in Strategic Environmental Assessment of Estonian energy policy with special reference to the LEAP model

Piret Kuldna a,*, Kaja Peterson a, Reeli Kuhi-Thalfeldt a,b

a Stockholm Environment Institute Tallinn Centre, Lai 34, Tallinn 10133, Estonia
b Tallinn University of Technology, Ehitajate tee 5, Tallinn 19086, Estonia

Environmental Impact Assessment Review 54 (2015) 55-60. Article history: received 22 December 2014; received in revised form 4 June 2015; accepted 4 June 2015; available online 11 June 2015.

Keywords: Strategic Environmental Assessment; Knowledge brokering; LEAP; Energy policy

Abstract

Strategic Environmental Assessment (SEA) serves as a platform for bringing together researchers, policy developers and other stakeholders to evaluate and communicate significant environmental and socio-economic effects of policies, plans and programmes. Quantitative computer models can facilitate knowledge exchange between the various parties that strive to use scientific findings to guide policy-making decisions. The process of facilitating knowledge generation and exchange, i.e. knowledge brokerage, has been increasingly explored, but there is little evidence in the literature on how knowledge brokerage activities are used in full cycles of SEAs that employ quantitative models. We report on the SEA process of the national energy plan, with reflections on where and how the Long-range Energy Alternatives Planning (LEAP) model was used for knowledge brokerage on emissions modelling between researchers and policy developers. Our main suggestion is that applying a quantitative model not only in ex ante but also in ex post scenario modelling and associated impact assessment can facilitate a systematic and inspiring knowledge exchange process on a policy problem and build the capacity of the participating actors.
© 2015 Elsevier Inc. All rights reserved.

1. Introduction

SEA is a type of impact assessment that integrates environmental concerns into strategic decision-making with the aim of promoting sustainable development. In a decision-making process, impact assessments can serve various and simultaneous functions: information generation, debate and deliberation, attitude shifting and complexity structuring (Hugé et al., 2011). Farrell et al. (2001) also emphasise the communicative role of assessment processes as bridges between science and policy. In the energy sector, where impact assessment deals with complex technical and structural issues, uncertainties and multiple impacts, quantitative models are widely used as planning and analysis tools for addressing these challenges. While modelling itself neither drives the policy nor decides the political feasibility of targets, quantitative models can offer structured insight into areas of uncertainty (Strachan et al., 2009; Mundaca et al., 2010), help understand the interactions and promote discussion (Jebaraj and Iniyan, 2006) and support various approaches to future studies (Finnveden et al., 2003). Van Daalen et al. (2002) have summarised four contributions of quantitative models to the environmental policy-making life cycle: 1) eye-opener, 2) visualiser of alternative future scenarios, 3) vehicle for inspiring political consensus, and 4) assistant in identifying concrete policy decisions and potential policy outcomes. However, the use of models in policy development is not always self-evident. Brugnach et al. (2007) suggest that both the modelling and the policy-making communities should contribute to the integration of modelling information into policy formulation. For modellers, this involves promoting models as tools of communication, learning and exploration; for policy-makers, it suggests a need to view modelling as a tool for informing the public and the scientific community in an uncertain world.
SEAs can serve as platforms for enhancing such knowledge exchange, where information is not simply transferred from researchers to decision-makers but developed and co-produced interactively with the actors involved (Sheate and Partidário, 2010; Fazey et al., 2012). Assessments are also more likely to influence decision-making if the decision-makers are sharing and acquiring not just information but knowledge (information that has been processed through learning) (Sheate and Partidário, 2010). Similarly, model-generated output alone may not be enough for decision-making. However, knowledge exchange is often conducted on an ad hoc basis, based on 'what seems to work' (Reed et al., 2014). In impact assessments, Partidário and Sheate (2013) suggest not simply hoping that the process of facilitating knowledge generation and exchange, i.e. knowledge brokerage, will happen, but explicitly designing it in order to support constructive and collaborative planning processes and, ultimately, more sustainable outcomes. At the same time, impact assessors need to adopt a flexible, adaptive and learning approach themselves, as decision-making processes are often long and unpredictable (Kørnøv and Thissen, 2000). This is supported by Runhaar and Driessen (2007), who argue that SEA studies will obviously have a greater impact when they are flexible and tailored to the actual evolvement of policy processes, rather than adhering to strict, standardised and detailed procedures. Likewise, knowledge brokering activities in SEA should depend on the needs and expectations of the actors involved.

Knowledge brokering has been categorised in several ways. For example, Ward et al. (2009) identify three models of knowledge brokering: knowledge management, linkage and exchange, and capacity building, which represent common roles performed by knowledge brokers. In environmental modelling, Krueger et al. (2012) highlight the convening, communicating, mediating and translating roles of knowledge brokers. Turnhout et al. (2013) differentiate three knowledge brokering repertoires: supplying, bridging and facilitating, illustrating the relation between knowledge production and use. Michaels (2009) specifies six knowledge brokering strategies for environmental policy problems and settings and, based on her work, Jones et al. (2013) set out six functions of knowledge intermediaries.
These strategies/functions are listed in order of increasing intensity of relationship building and commitment of resources:

• informing (disseminating research results to policy developers, with limited exchange between the producers and users of the knowledge);
• consulting/linking (advising on problems delineated by the parties seeking counsel, linking policy developers with the expertise needed for a particular policy area);
• matchmaking (bringing together individuals who can contribute to policy-making and who would not otherwise meet, e.g. from another discipline or with different types of knowledge);
• engaging (involving other parties in discussions that are framed by one party, to provide knowledge on an as-needed basis);
• collaborating (parties jointly frame the process and negotiate the substance of issues; may need significant resources of time and money);
• building adaptive capacity (parties jointly direct interactions and discussions of problems with multiple dimensions in which learning is considered, to improve the ability of parties to handle multiple and emerging issues).

Depending on the decision-making issue, all or some of these strategies may be appropriate at different points of the policy cycle. For instance, Jacobson et al. (2005) have shown that consulting can be an effective strategy for interactive knowledge transfer to enhance the use of research-based knowledge in decision-making. Sheate and Partidário (2010) demonstrate the role of engagement and capacity building in strategic planning and assessment.

In order to distinguish between different knowledge brokerage strategies related to environmental policy problems, we take the typology of Michaels (2009) as a starting point and examine the knowledge exchange process on emissions modelling between researchers and policy developers in the national energy plan SEA. In our study the model is LEAP (the Long-range Energy Alternatives Planning model), a software tool for energy-environment policy analysis and impact prediction developed at the Stockholm Environment Institute (Heaps, 2008). LEAP can be used for both demand- and supply-side energy modelling, including building and comparing scenarios to meet the energy demand in all sectors of an economy, and it accounts for energy and non-energy sector greenhouse gas (GHG) emission sources and sinks. The literature shows that most LEAP-model applications have been at the national level, primarily driven by policy requirements: for developing low-carbon economies, future sustainable energy and climate policies, and renewable energy development scenarios, or to reduce CO2 emissions (e.g., Tao et al., 2011; Yophy et al., 2011; Roinioti et al., 2012; Özer et al., 2013; Park et al., 2013). At the city level, LEAP has been applied to urban energy saving and emissions reduction (e.g., Lin et al., 2010), and globally to developing forward-looking assessments of energy challenges (Nilsson et al., 2012). In Estonia, the LEAP model has been used for the national-level ex ante SEA of Energy Plan 2020 and in ex post scenario modelling and associated impact assessment for the preparation of Energy Plan 2030+.

The paper is structured as follows. Section 2 describes the knowledge needs in the context of Estonian energy policy and knowledge generation with LEAP in the national energy plan. Section 3 describes the study methodology. In Sections 4 and 5, the study results are presented and discussed, respectively. The final section draws conclusions based on the study findings.
2. Test case context

The knowledge needs for formulating a national development plan for the energy sector (hereinafter: national energy plan) primarily include information on the available resources and policy measures needed to meet European Union (EU) and national climate and energy policy targets and to ensure energy security. Information on the environmental, economic and social impacts of energy policy measures, as well as impact assessment procedural knowledge, is also essential. In Estonia, the energy sector is dominated by one primary energy source: oil shale. As a domestic fossil fuel, oil shale provides approximately 70% of the country's total primary energy supply. Although oil shale increases the energy security of Estonia (the country's overall level of energy import dependence is approximately 15%), the energy sector is Estonia's most environmentally impactful sector. The majority of the country's GHG emissions come from oil shale extraction and combustion, which are the main drivers behind the national economy's high carbon intensity values (OECD/IEA, 2013). The national energy plan is the highest-level strategic document on the Estonian energy sector; it elaborates development visions and objectives and describes measures for achieving them. As such, the document formulates national energy policy, and its targets and measures have to be reflected down to the lower-level development plans that implement this policy. The authority responsible for national energy plan preparation is the Ministry of Economic Affairs and Communications (MoEAC, hereinafter: policy developers). The first Estonian energy sector national development plan was approved by the Estonian Parliament in 1998 and has now been reviewed three times, combined with SEA according to the EU SEA Directive 2001/42/EC. The second and third authors of the current paper, together with other energy experts (hereinafter: researchers) from the SEI Tallinn Centre (SEIT, a non-governmental research institute), carried out the SEA for Energy Plan 2020 upon being selected by the MoEAC through public procurement. The guidelines for SEA on the website of the Ministry of the Environment do not outline specific requirements on SEA tool choice or application (Ministry of the Environment, 2009). However, there has been a certain history of the use of quantitative models in SEAs for national energy plans since Energy Plan 2020 (Table 1).

2.1. The use of the LEAP model in the SEA of the national energy plan

In Energy Plan 2020, the application of LEAP, a new model for Estonian national energy planning, was proposed by the SEA researchers, who had LEAP modelling expertise. In particular, we proposed to model the most important emissions of the energy sector and to find out which electricity and heat production scenarios have the lowest levels of carbon dioxide (CO2) and sulphur dioxide (SO2) emissions. Since LEAP is a scenario-based modelling system, it was possible to work closely with the policy developers responsible for the preparation of the national energy plan throughout the development of the alternative scenarios. Issues regarding how detailed the model should be, what sectors should be modelled, and which parameters and input data to use were negotiated with them.
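LEAP itself is a graphical scenario-accounting tool, but the core of the emissions comparison described here reduces to multiplying each scenario's fuel-specific generation by fuel emission factors and summing. The sketch below illustrates that kind of accounting in plain Python; the scenario names, generation figures and emission factors are invented for illustration and are not values from the Estonian plan or from LEAP.

```python
# Illustrative only: hypothetical generation mixes (GWh) and emission factors (t per GWh).
scenarios = {
    "oil_shale_baseline": {"oil_shale": 9000, "wind": 500,  "biomass": 400},
    "renewables_push":    {"oil_shale": 5000, "wind": 3000, "biomass": 1500},
}
emission_factors = {            # tonnes of pollutant per GWh generated (hypothetical)
    "CO2": {"oil_shale": 1050.0, "wind": 0.0, "biomass": 0.0},
    "SO2": {"oil_shale": 8.0,    "wind": 0.0, "biomass": 0.2},
}

# Total emissions per scenario = sum over fuels of generation * emission factor.
for name, mix in scenarios.items():
    totals = {pollutant: sum(mix[fuel] * factors[fuel] for fuel in mix)
              for pollutant, factors in emission_factors.items()}
    print(name, {pollutant: f"{value / 1000:.1f} kt" for pollutant, value in totals.items()})
```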
The alternative scenarios in Energy Plan 2020 were policy scenarios, which are appropriate to use if the future can be affected through strategic action (Finnveden et al., 2003). The policy changes of the scenarios were characterised by different shares of fuels for electricity and heat generation and the respective production capacities, based on the national targets in the energy sector. The alternative scenarios for electricity and heat production were first drafted by the researchers during the SEA scoping stage and then specified in the impact assessment stage based on information gained from the policy developers. The scenarios were analysed according to the implications of different futures, on the basis of environmental, social and economic criteria proposed by both the researchers and the policy developers, using the backcasting approach (Robinson, 2003). Out of the total 26 criteria, the LEAP model was used to develop projections for the CO2 and SO2 emissions of the scenarios (two criteria), the EcoSense model was used for calculating the full costs of electricity production technologies (one criterion), and qualitative multi-criteria analyses were used for assessing the impact of the scenarios against the remaining 24 criteria (SEI Tallinn, 2009). During the emissions modelling stages, the main role of the policy developers was to provide information on the planned energy supply and demand data and to verify the modelling assumptions and the set of scenarios.

For data visualisation and communication in LEAP, one can present and share model assumptions and results through Excel, Word and PowerPoint. In Energy Plan 2020, the LEAP model was used to visualise scenarios and to compare emissions values. The results were presented by the researchers in working meetings with the policy developers, in the legally mandated public consultations of the SEA, and at a series of public energy forums organised by the MoEAC that introduced the national energy plan scenarios and implementation measures to a wider public. In the communication process, the main role of the policy developers was to assist in the interpretation of the quantitative modelling results of the emission scenarios.

2.2. Energy Plan 2030+

In the autumn of 2012, the MoEAC and the Estonian Development Fund (EDF) (hereinafter together or separately referred to as policy developers) launched the updating of Energy Plan 2020. The new plan, Energy Plan 2030+, addresses a longer time horizon and wider-scope issues than its predecessors. Its overall aim is to ensure energy supply in accordance with European Union climate and energy policy objectives while improving the state of the environment and the long-term competitiveness of Estonia. The plan establishes strategic objectives for power engineering, heat production, transportation, housing and domestic fuel production until 2030, with an outlook to 2050. Based on analyses and modelling results, Estonia's preferred energy supply scenario is selected in the plan (MoEAC, 2015). Thus, at the initiation of the process, the policy planning addressed a problem that can be characterised as a moderately structured policy problem (Hisschemöller and Hoppe, 1995; Turnhout et al., 2007; Runhaar and Driessen, 2007): there was agreement on the necessity of a new energy plan and a broad policy goal was set, but the means to reach the goal most effectively and efficiently were uncertain. Along with updating the plan, an ex post evaluation of the implementation of the previous national energy plan was carried out. While the need for mid-term and/or ex post evaluation of a sectoral development plan is stipulated in government regulations (Government of the Republic, 2007), no explicit legal requirements specify that sectoral development plan impacts arising from implementation must be assessed during this process.
The ex ante SEA of Energy Plan 2030+ was conducted by the EDF (a state-run public institution), which coordinated the preparation of the national energy plan together with the MoEAC. As the LEAP model was applied in the impact assessment of the previous national energy plan, the policy developers also considered it a candidate tool for the SEA of the new plan.

In summary, the knowledge exchange process between SEA experts, modellers and policy developers can evolve at any modelling stage within SEA, such as: 1) while deciding on modelling objectives and scope at the SEA scoping stage, 2) during model design and quantitative modelling (as scenarios are built and while model inputs are determined), and 3) in ex post evaluation and modelling (Table 2).

Table 1
Estonian national energy plans and main tools used in SEAs.

Title of the national energy plan | Year of parliamentary approval | Main tools used in the SEA
National Long-Term Development Plan for the Fuel and Energy Sector until 2005 and vision until 2018 | 1998 | [No SEA was applied]
National Long-Term Development Plan for the Fuel and Energy Sector until 2015, including SEA | 2004 | Qualitative assessment matrices (Fischer, 2007): scoring of impact significance
National Development Plan for the Energy Sector until 2020 (Energy Plan 2020), including SEA | 2009 | LEAP (Heaps, 2008): modelling of CO2 and SO2 emissions in energy supply and consumption scenarios; EcoSense (Schleisner, 2000; Finnveden et al., 2003): calculation of full costs (including both production and external costs) of electricity production scenarios; qualitative assessment matrices: scoring of impact significance
National Development Plan for the Energy Sector until 2030, with outlook to 2050 (Energy Plan 2030+), including SEA | Scheduled for approval in the first half of 2015 | AirViro (SMHI, 2010): modelling of air emissions; SimaPro (PRé, 2013): modelling of impacts on health, ecosystems, resource use and climate change; indicator sets (de Ridder et al., 2007): measuring changes in impact

Table 2
Main stages of the knowledge exchange process between SEA experts, modellers and policy developers, using the example of the national energy plan and the application of LEAP.

Stage of policy planning | Stage of SEA | Stage of modelling
Initiation | Scoping | Deciding on modelling objectives and scope
Development of draft policy document | Impact assessment | Model design, quantitative modelling and communication of results
Mid-term and/or ex post evaluation of policy document | Mid-term and/or ex post evaluation of impacts (SEA follow-up) | Ex post modelling

3. Methodology

In order to reflect on where and how a knowledge brokerage approach was applied to the emissions modelling process in the SEA and in the preparation of the new energy plan, we use Michaels' (2009) framework of strategies, which has specifically been developed for responding to environmental policy problems or settings (described in the Introduction). Our focus is on the expert knowledge of researchers and policy developers, excluding knowledge generated from other stakeholder and public participation. The study was divided into two parts:

1) Analysis of the SEA process for Energy Plan 2020 and ex post identification of the knowledge brokerage strategies that the authors applied during the SEA. The analysis was based on information from the second and third authors of the paper (who were, respectively, the main SEA expert and the LEAP modeller of the ex ante SEA) and on the ex ante SEA report (SEI Tallinn, 2009).
the SEA of EnergyPlan2030+.Ex post modelling was the re-assessment of LEAP pro-jections made for Energy Plan2020in order tofind out to what ex-tent LEAP model results had realised and to identify possible reasons for deviation.It was guided by research and initiated by the authors without a formal government mandate,thus not being part of the of-ficial SEA process.Although in the ex ante SEA report for Energy Plan 2020,ex post modelling was not directly proposed as a form of mon-itoring,the updating of the national energy plan was known to be started soon,and thus it was an appropriate time to follow up on the SEA modelling outcomes.The additional aim of ex post modelling was to interact with the policy developers for continuing the use of LEAP in updating the energy plan and associated SEA(collaborating strategy of knowledge brokerage).The policy developers,in turn, were interested in identifying the most suitable model(s)for conducting an impact assessment of the new national energy plan.In order to discuss the ex post modellingfindings and offer opportu-nity for policy learning,a workshop was arranged by the researchers.Besides collaborating strategy,we did not elaborate any other knowledge brokerage strategy in advance to be employed in the process,but according to Michaels's(2009)framework,it could be anticipated that elements of other strategies can also be adopted.Michaels(2009)suggests for fundamental decision-making which involves opportunities to rethink how to approach a policy domain, collaborating as the primary brokering strategy,and for addressing moderately structured problems capacity building strategy.Develop-ment of the national energy plan can be characterised as belonging to both of these problem and decision-making types.4.ResultsWe identifiedfive primary strategies and their combinations which we used for brokering modelling knowledge between the researchers and policy developers in the development of Energy Plan2020and in the preparation of Energy Plan2030+(Table3).In Energy Plan2020, these strategies were identified upon reflection after the SEA process. In ex post modelling of Energy Plan2020and in the preparation of Ener-gy Plan2030+one strategy was chosen beforehand and others speci-fied in the course of action.The most frequently used knowledge brokerage strategies with re-gard to modelling in the national energy planning process were consult-ing and collaborating.Both approaches were applied formally(as an expert work in the SEA,in the working groups and Advisory Board of Energy Plan2030+)as well as informally(initiated by the researchers). Consultations were also part of the formal SEA process.The researchers brought in knowledge about modelling and environmental impact of power and heat generation while the policy developers contributed with knowledge on available and future energy production capacities and technology.High-involvement strategies,collaborating and engag-ing,were adopted by the researchers when an agreement with the pol-icy developers was sought on issues such as modelling objectives or model design and scenarios.For example,in order to discuss and agree on the ex ante and ex post modelling objectives,face-to-face com-munication and meetings with the policy developers were organised. 
The engaging strategy refers to the involvement of policy developers in the ex post modelling workshop, which was set up and moderated by the researchers. At this workshop, the researchers presented the ex post modelling results, which demonstrated higher actual CO2 and SO2 emission levels than those modelled in the ex ante SEA (a minimal numerical sketch of this kind of comparison is given after Table 3). The reasons for this were mainly related to larger-scale electricity exports than expected and to the postponement of the shutdown of old oil-shale production units. After the presentation, the policy developers were given an opportunity to interpret the modelling findings and to discuss how the outcome could be used for the renewal of the national energy plan. The policy developers also described their overall expectations towards modelling of energy sector impacts, such as integrating regional energy market trends and impacts into the model and analysing the effects of policy measures on energy consumption.

The informing strategy was appropriate to apply at the end of the ex ante SEA process, when major changes to the modelling were no longer necessary. However, communication with the policy developers at this stage was still two-way and consequently yielded information for both sides, in particular with regard to the interpretation of modelling results and uncertainties. Capacity building was our main strategy in learning from the ex post modelling, since such a re-assessment of emission scenarios with the same model, comparing the forecast with the present situation, was carried out for the first time in national energy policy. Consulting and collaborating strategies were again employed when the researchers were invited to participate in the Advisory Board and thematic working groups for Energy Plan 2030+, during which the potential utility of the LEAP model was discussed several times, initiated both by the researchers and by the policy developers. Matchmaking was the only strategy

Table 3. Knowledge brokerage (KB) strategies, examples and outcomes in relation to LEAP during the national energy plan process (primary KB strategies in relation to LEAP; examples of application of KB strategies; SEA cycle stage; outcomes):
- Energy Plan 2020:
  - Consulting and collaborating; interactions with policy developers for setting modelling objectives and building scenarios in LEAP; SEA cycle stage: scoping and impact assessment; outcome: agreement to project CO2 and SO2 emissions in electricity and heat production scenarios.
  - Informing; visualisation of emissions scenarios at working meetings and public energy forums, and description of the modelling process, assumptions, uncertainties and outcomes in the SEA report; SEA cycle stage: impact assessment; outcome: approval of the modelling results and the SEA report.
- Ex post modelling (SEA cycle stage: ex post (follow-up) evaluation of the national energy plan):
  - Collaborating; interaction with policy developers for clarifying information needs and setting ex post modelling objectives; outcome: agreement to extend the modelling time period from 2030 until 2040 and to use revised statistical data on CO2 and SO2 emissions for the past time period.
  - Engaging; workshop with policy developers to discuss modelling outcomes of electricity scenarios and modelling needs for the new energy plan; outcome: agreement to conduct ex post modelling also for heat scenarios.
  - Capacity building; learning from ex post scenario modelling; outcome: joint understanding of external factors and uncertainties that are difficult to predict in long-term emission scenarios.
- Energy Plan 2030+:
  - Consulting and collaborating; discussions in working groups and Advisory Board on modelling needs and LEAP possibilities; SEA cycle stage: scoping; outcome: intention to use LEAP for the analyses of scenarios; however, the
intention was not realised.
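The projected-versus-actual comparison at the heart of the ex post modelling step described above can be illustrated with a minimal sketch. The emission figures below are placeholders chosen only to show the arithmetic; they are not values from the LEAP model runs, the SEA report or Estonian statistics.

```python
# Minimal sketch of an ex post check of modelled versus realised emissions,
# in the spirit of the LEAP re-assessment described above.
# All numbers are hypothetical placeholders.
projected = {"CO2 (kt)": 12_000, "SO2 (kt)": 55}   # assumed ex ante model output
realised = {"CO2 (kt)": 14_500, "SO2 (kt)": 68}    # assumed realised statistics

for pollutant, modelled in projected.items():
    actual = realised[pollutant]
    deviation_pct = 100 * (actual - modelled) / modelled
    print(f"{pollutant}: modelled {modelled}, realised {actual}, "
          f"deviation {deviation_pct:+.1f}%")

# Positive deviations (realised above modelled) would then be traced back to
# candidate drivers such as larger electricity exports or postponed shutdowns
# of old production units.
```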
Significant person in Christianity: Catherine BoothCatherine Booth is the co-founder of the Salvation Army with her husband William. She is referred to as “Army Mother”. She was born in 1829 in Derbyshire, England. Raised in a deeply religious home she had read her Bible several times before the age of 12.A long period of illness in her mid-teens gave Catherine time for extended reading. She developed a great theological understanding that she later used in her preaching and teaching. She also kept herself busy writing articles for a magazine, encouraging people not to drink and highlighting the problems associated with alcohol.In 1855 she married William Booth a Methodist Minister. Although he shared her belief in social reform, he disagreed with her belief in female equality especially in the church. She convinced him otherwise with her powerful preaching and teaching and he later wrote “the best men in my Army are the women”.Catherine raised eight children, two of which later became Generals of the Salvation Army, Bramwell and Evangeline. Catherine Booth died of cancer in 1890.Contribution to the development and expression of Christianity.1)Co-founder of the Salvation Army2)Fearless preacher, role model for female ministry3)Social Reform campaigner1)1864 William and Catherine started the Christian Mission, which later becameknown as the Salvation Army. The aim was to work amongst the most desperate and poorest people in England. In particular with alcoholics. By providing basics like food (through the Food-for-the-Million Shops) and preaching in peoples homes andholding cottage meetings for converts, Catherine was able to reach people alienated or ignored by the ordinary churches. A survey taken in 1882 of London discoveredthat on one weeknight, there were almost 17 000 worshipping with the SalvationArmy, compared to 11 000 in ordinary churches. The Salvation Army had had animpact on the people the Church of England had failed to reach. They saw a need fora more accepting church that would reach out to those rejected by middle classcongregations.2)In Catherine Booth’s time it was unheard of for women to preach at adult meetingsof Christian believers. The Church of England was hostile towards the SalvationArmy and Catherine and William received much criticism for the elevation ofwomen to a man’s status. Catherine challenge people with her preaching andbecame a role model for female ministry. It was her preaching to the affluent areas that provided the financial assistance for William’s ministry t o the poor and needy.Catherine was convinced that women had an equal right to preach and teach.3)While working with the poor and desperate Catherine became aware of particularproblems they faced. She gave voice to the inequality of wages between men and women including “sweated labour”, workers in match factories, and raising the age of consent from 13 to 16. 
Catherine viewed the role of a Salvationist was not toretire to the quiet of a religious community to keep themselves pure but to plunge themselves joyously into a sinful world to bring freedom to those in chains.Although she didn’t see social reform on some of her campaigns come to fruition before she died, she was responsible for drawing the attention of those who could bring about the reform, to the real problems facing the lower class.Impact of Catherine Booth on Christianity1)Developed the theology and structure of the Salvation Army2) A voice for Women, g ifted writer, “Woman’s Right to Preach the Gospel”3)Social reform4)Best known and respected faith-based charity organisation- Salvation Army1)Catherine developed the theology of the Salvation Army. In particular their use, ormore correctly their non-use of the sacraments. She placed an emphasis on Christian holiness. She felt that meaningful symbols such as the sacraments of baptism and communion could easily become meaningless rituals. She also persuaded William to not use alcohol at all, not even for medicinal purposes. The Booths also didn’tdistinguish between the deserving and undeserving. Their charity and ministryreached out to the alcoholics, prostitutes and criminals usually rejected by charities and churches at the time. Catherine also designed the flag, bonnets for the ladies and was instrumental in the equality of women within the Army giving themfreedom and opportunities in worship and service not usually afforded to them in other denominations. They also introduced singing hymns to popular music andadopted the theatrical revivalist style of preaching and worship. This less rigid form of ministry was appealing and more accessible to the poor and lower class.2)In 1859 Catherine Booth wrote the pamphlet “Female Ministry- A Woman’s Right toPreach the Gospel”. She outlined her arguments on why women should be permitted to preach the gospe l and that they are not the “weaker sex”. This pamphlet is still considered relevant today. Catherine Booth is a notable first-wave Christian Feminist and respected by feminist theologians today. Not only did she preach when women were meant to remain silent but she and William proclaimed their marriage to be a marriage of equality, further fighting the established idea of female inferiority.Equality in the home and workplace as well as the church is one of the aims offeminist theology. Catherine Booth managed to be an example in all three aspects of life.3)Catherine died in 1890 from cancer but William continued her work. Catherine hadfought for major match making factory, Bryant and May to stop using the toxicyellow phosphorus to make matches and switch to the safe red phosphorus. Bryant and May claimed that red phosphorus was more expensive and would drive theprices up and people wouldn’t pay the increase. In 1891 William and the SalvationArmy opened its own match-factory using only red phosphorus and offering betterequal pay and better conditions. The workers were soon producing 6 million boxes a year. William took MPs and journalists on tours of the ‘model’ factory. He also tookthem to the homes of ‘sweated workers’. He hoped the bad publicity wou ld shamelarge companies like Bryant and May to change their workplace practices and stoptaking advantage of their workers. In 1901 Bryant and May announced it hadstopped using yellow phosphorus.Catherine also worked extensively with Prostitutes and was instrumental in raisingthe age of consent from 13 to 16. 
The Salvation Army set up refuges for “fallenwomen” and she referred to prostitution as “white slavery”. She was appalled by the "white slave trade", a Victorian euphemism for child prostitution. Wicked peoplewould kidnap and force destitute girls into a life of prostitution that was nearlyimpossible to escape. Catherine and her husband exposed the white slave trade inEngland. Three hundred and ninety-six thousand signatures later, they saw thepractice outlawed. On 20th May 1882, Catherine Booth gave evidence to the Houseof Lords on juvenile prostitution. Later she told her husband what had happenedwhen she argued that the age of consent should be increased from thirteen toeighteen. “I did not think we were as low as this! One member suggested that itshould be reduced to ten, and oh my God, that it was hard for a man having a charge brought against him not to be able to plead the consent of a child like that.” Theypromoted women’s suffrage (right to vote) so that women had a say in choosing the lawmakers who made the prostitution laws.4)The Salvation Army operates in109 countries and helps millions of people every year.It is one of the best-known and most respected charities in the world. Although they have continued to help the needy, the destitute and the groups largely rejected bysociety the Salvation Army’s goal is still the promotion of the Christian faith. AsCatherine and William have always advocated being a Christian means being active in the sinful world to those people find salvation.ConclusionThe effect of Catherine Booth on Christianity is probably in some ways immeasurable. The legacy that she and William leave is still strong and very active today. The freedoms that people, especially women and the lower class, have today in worship and life are due largely to the work of Catherine and William Booth.。
外文文献翻译原文及译文(节选重点翻译)作为战略决策者的管理会计师外文文献翻译中英文文献出处:Strategic Management Accounting, Volume 2,2018,1- 48译文字数:4500 多字英文The Management Accountant as a Strategic Decision-MakerVassili LautourRole Contingency FactorsAssumedly, the role of a management accountant consists of ensuring that operations are aligned with organisational strategy and can be put in numbers that can be managed and followed (Kaplan & Norton, 2008; Simons, 2005, 2010). Even this is multiple, the actual role depending on numerous contingency factors: organisation type, position within the organisation and hierarchic ranking.Organisation TypeGiven that not all organisations are confronted with the same strategic and operational issues and do not face the same challenges, their management accountants shall be assigned different duties.Manufacturing CompanyThis section develops the case of a manufacturing company as certainly the best-known context in which a management accountant operates. The general situation, taught in most managerial accounting courses and books, is presented first. This is then followed by a discussion of the impact each tier of strategy has on the work of a manufacturing accountant’s work.The General SituationIn such a company, the strategic issue comprises managing the following:Raw materials that are to be transformed or assembled;The manufacturing process;The end product.In manufacturing companies, quantities of raw materials and end products are central to management accounting. Associated with this issue is that of cost control:Cost of materials;Labour costs;Overhead costs.Therefore, it is central that management accountants operating manufacturing companies have an accurate view of costs and quantities and assess their evolution over time as well as across branches (e.g. factories). This leads indeed management accountants to focus on internal financial reporting and CVP analyses.Generic StrategiesEven within a manufacturing company, roles can vary, depending on the generic strategy adopted. That is, cost domination, product differentiation or a bundle of both shall affect strategic priorities and therefore a management accountant’s work.Cost DominationA cost domination strategy applies to manufacturing companies producing and commercialising a generic product with substitutes on the market. Cost domination then applies to manufacturing companies operating on a market with numerous competitors, none of them being particularly known as a brand. As a consequence, a competitive advantage can be gained from selling at lower price than their competitors would. In this situation, traditional cost accounting is management accountants’ main occupation.Case 1. ST MicroelectronicsManufacturing Chips at the Lowest CostST Microelectronics produces chips utilised in any electronic products, such as mobile phones, computers hi-fi but also in vehicles (car GPS, aircraft entertainment, etc.) The company is confronted with international competition, many substitutes existing on the market, since the end customer does not even know the components assembled in the end product.Given the pressure by client companies, ST Microelectronics can only sell its chips if at the lowest possible price. This results in management accountants placing a particular emphasis on the management of all sorts of costs and focusing on producing periodical schedules of costs.As the main components of chips are various metals, managementaccountants are very attentive to the variation of these metals’ prices on the markets. 
Part of their role consists of suggesting alternative suppliers or materials to meet the final selling price, set by client companies. Below is a management accountant’s job description:follow up contracts with suppliers;follow up contracts with clients;follow up raw materials’ rates on the market;prepare P&L reports.Product DifferentiationA product differentiation strategy applies to a manufacturing company when its product is semi-generic and it can differentiate itself from others on the market. It is then purchased, not necessarily because of a low price but because of its intrinsic characteristics that are different from those of other products. Such is the case of manufacturing companies operating on a market where clients expect rare or relatively exclusive products, not necessarily a generic item.Thence, a management accountant’s wor k consists of ensuring that the manufacturing process allows for this differentiation on the market.Position on the MarketDepending on company positioning on the market, a management accountant’s role shall vary. When the company is setting of its own product characteristics and price, management accountants do not placeemphasis on the same items as in a company where prices and costs are imposed from the market. Thence, management accountants have different roles if they work for an incumbent company, a challenger company or a following company.IncumbentThe incumbent is the leader on its market. When it is a manufacturing company, massive investments in equipment, procedures or research and development have already been incurred. Hence, this company leads the market by imposing its own standards that other manufacturers endeavour to mimic.In the twenty-first century, being an incumbent tends to be associated with patents and intellectual property protection. The central focus is to remain in the position of being the incumbent on the market, which implies controlling technology, patents and procedures, as well as ensuring new product development. Hence, other manufacturers can only appear as followers and in no way overcome the incumbent.The incumbent’s management accountant shall ensure that technology and procedures are protected and stringently followed, and hence the product can remain a leader on its market. The management accountant shall also focus on the cost of losing control of a patent or a technology and suggest ways of overcoming this. Costs appear as a secondary concern and shall of course remain under control but are by nomeans the core of the management accountant’s professional attention.Case 2. Apple vs. Samsung Patents’and Procedures’ ProtectionSince the launch of the first generation iPhone in November 2007, Apple has been the incumbent on the smartphone market, especially through the development and protection of its touchscreen. This technological advance has then been confirmed with the launch of the first generation iPad in March 2010. Both products being sold at a high price, cost control was not central to the company. Its core strategic and operational concern was that products meet quality requirements and can be manufactured in such a way that no technology advances could be made public or shared. 
Hence, a management accountant’s work consists of the following:control the value chain’s operational efficiency;control coordination between units in the manufacturing process;follow up technology transfers to suppliers and ensure they are not using technology for their own profit;follow up competitors’ technology and compute the amounts we can ask in case of patent breach;compute the possible cost of opportunity of losing our technology or patents.ChallengerThe challenger is a company that enters a market with the ambition of outperforming the incumbent. A challenger tends to offer an alternative to the product sold by the incumbent, by revising its characteristics, offering a higher quality product or selling it at a slightly lower price. Nowadays, two types of challengers are known:Incumbent’s historical competitors endeavouring to outperform it;A new business associated with institutional investors.Mode of ProductionModes of production and issues in management accounting are further developed in Chapter 2. Here, they are briefly elicited and exposed, and hence the reader has a first hint thereof and can view the scope of management accountants’ roles.Mass ProductionWhen the product is very generic and allows profit based upon sales of large quantities, the company is doing mass production. Usually, mass production is associated with a cost domination or a follower’s strategy. In this case, the management accountant’s role focuses on conventional cost accounting and CVP analysis.Lean ProductionAfter c.50 years of product standardisation, customers and corporate clients seek for more custom-tailored products. Increasingly, they tend to select by themselves product characteristics and place their orders on thisbasis. As a result, the manufacturing company making this product cannot count on inventories and just respond to this order. Rather, what must be in place is a set of procedures allowing for lean production and management. Just-in-time fully applies. Such orders are often placed when a client company is to acquire a large number of units of the same product. In this case, counting on a relatively large amount of units enables and facilitates profit-making. Such can be the case for airlines ordering a certain amount of aircrafts or a company purchasing new computers for their employees.In such manufacturing companies, the management accountant’s work consists of measuring the profitability of an order placed and recommending accepting or declining it. Their duties shall consist of measuring the profit made on a single order and identify the possible drivers for extra costs or lower profit. In so doing, the management accountant can determine either the minimal amount of units making the order profitable or product characteristics that can be negotiated to avert its costs exceeding selling price:Profit accounting;Manufacturing timetable control;Just-in-time accounting.Case 3. Airbus and airlinesJob Order Management AccountantAirbus, the European aircraft manufacturer, only responds to orders placed by airlines. When receiving an order, company managers transfer it to their management accounting team in order to determine order profitability as well as levers for negotiation with the client. 
Those management accountants know product specifications very well, hence they can decide which ones can be adapted to client orders and which ones cannot.Receiving the terms of the provisional contract between Airbus and the airline, the management accountant determines:above how many aircrafts the order is profitable;what profit can be generated from this specific order;what parts of the aircraft, given the order, can be generic or by how much they can be custom-tailored.Non-manufacturing CompaniesTraditionally, management accounting technologies—especially calculative ones—are presented as the core of management accountant’s work. Addressing costs and quantities, it is implicitly assumed that these calculative threads operate in manufacturing companies. Yet, management accountants’ work takes on different forms in non- manufacturing companies, such as merchandising companies, service companies, non-profits or public sector organisations.Merchandising CompanyIn a merchandising company, the core concern relates to the management and measurement of inventories and logistics. Management accounting issues shall differ, depending on whether products have an expiry or obsolescence date.Inventories that do not expire can be construction materials or product without a known life cycle. In this case, management accountants’ work consists of the following:Accounting for quantities and flows;Valuing inventories;Tracing orders to refill inventories.For those companies dealing with inventories that can expire or become obsolete, management accountants are expected to trace very accurately:How many units are inventoried;Unit entry date and expiry date;Promote the use of materials close to expiry;Trace orders to refill inventories.Case 4. AmazonInventory AccountingAmazon, in its capacity of the e-commerce world leader, is the largest merchandising company on earth. This was amplified after thetakeover of Whole Foods. Amazon’s management accountants are dealing with two types of products, some expiring (IT and food) and others never expiring (books and DVDs). Obsessed with customer satisfaction, the company cannot afford to store or deliver expired or obsolete products. Therefore, their management accountants are required to do as follows:Trace every unit in or out and assign to it an expiry or a delivery date;Account for the value of every unit inventoried (purchase, sale and current);Recommend special promotions for products expiring soon;Recommend destruction of expired products and account for losses;Optimise inventory management.Service CompanyUnlike a manufacturing or a merchandising company, a service company rests its activity on intangible assets. These can be expertise, know-how or intellectual capital. Service companies’ activity rarely rests upon high overhead or raw materials but mostly upon labour. This implies that conventional cost accounting and CVP are inapplicable. What matters for such a company is the profit generated by an employee or an activity.Factory Management AccountantA management accountant working in a factory is one working for amanufacturing company. Their main focus and concern is that the factory can deliver at the right time the right quantities of products the company is selling. This also implies that the correct amount of raw materials be employed. 
Given that the factory is only a supplier to the sales department, its managers have no influence on selling price, which is externally imposed.The management accountant in a factory must ensure that the total cost of the production remains below selling price. To this end, their work mostly consists of the following:Tracing various costs;Conducting CVP analyses and find causes for unfavourable variances;Suggesting solutions to such unfavourable variances;Reporting to the company management accountant in chief.In a factory, as the management accountant deals with costs and quantities, the work is relatively distant and mostly implies that numbers are scrutinised. Number consistency is checked and validated, so that figures can be consolidated. Usually, a factory management accountant does not have much contact with other people in the company, does a relatively solitary work and reports figures.This role is often given either to a junior management accountant, newly graduated, or to a long-term worker in the factory. In the formercase, being a factory management accountant is a threshold to the management accounting profession. Conversely, in the latter case, such a position rewards a worker’s career for someone having a good command of factory processes.Branch Management AccountantA management accountant working for a branch of a company is tasked with duties relatively similar to those of a service company management accountant. More than the product itself, inventories or costs and quantities, such a management accountant’s focus is on overall efficiency and profitability. Whilst a factory management accountant has a very partial view of profitability, the branch management accountant has a more comprehensive view thereof. The branch is taken as a separate unit on which the management accountant has full authority, implying a triple role.A Strategic RoleOne of his or her tasks consists of ensuring that corporate procedures and processes, as defined at the headquarters, are eventually applied locally. It matters that corporate standards do apply in the branch. To this end, the branch management accountant reviews all internal business processes. He or she can interview any branch employees and managers to have a clear view of how things are done. In case of misalignment with corporate instructions, the management accountant in a branch can beasked by the Headquarters to suggest ways of solving these problems. In order to remain independent and have an opinion that counts, this management accountant is very rarely in charge of implementing the solutions he or she recommended.An HRM RoleAs in a service company, the branch management accountant partakes in employee performance assessment. Depending on corporate policy, the management accountant can be the sole in charge of individual annual performance meetings or share those with HR. In this respect, as in a service company, his or her role also consists of assessing employee contribution to branch performance. His or her activities, depending on the autonomy the headquarters grant him or her, can also imply the administration of rewards and incentives, and reporting on these to the company HR department.A Financial Controller RoleThe management accountant in a branch is to assess its overall profitability, reviewing all possible costs and revenues. Depending on the industry in which the branch operates, the contents of such cost and revenue can vary. 
If the branch has a manufacturing activity, the management accountant shall review unit costs and quantities and conduct CVP analyses. If the branch has a merchandising activity, the management accountant shall review inventories. If the branch is a service organisation, the management accountant shall review financial performance per activity and report on this to the corporate financial controller. The richness of a branch management accountant's duties implies that he or she has already demonstrated skills in these three domains: corporate procedures, HR and financial control. As a result, a branch management accountant is usually an experienced executive. Very often, this position is offered to a senior person who has already had both managerial experience and a financial or accounting role. Given his or her strategic role for the company, this management accountant is often an employee of the company with a good command of corporate processes and procedures. It is quite usual to appoint to such positions someone who has had managerial responsibilities in a different branch. In order to avoid possible conflicts of interest and collusion with work colleagues, this change in role and duties is often accompanied by a geographical change: the new management accountant is usually appointed to a different branch.
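The cost-volume-profit and order-profitability reasoning that runs through this chapter (the factory accountant's CVP analyses, or the Airbus question of how many aircraft make an order profitable) can be sketched in a few lines. The simple linear cost model and every figure below are assumptions made for illustration; they are not taken from any of the cases above.

```python
# Minimal cost-volume-profit (CVP) sketch of the kind a factory or job-order
# management accountant might run. All figures are hypothetical.
from dataclasses import dataclass


@dataclass
class Order:
    unit_price: float          # selling price per unit, externally imposed
    unit_variable_cost: float  # variable cost per unit
    fixed_costs: float         # costs incurred regardless of volume
    units: int                 # units requested in the order

    def contribution_per_unit(self) -> float:
        return self.unit_price - self.unit_variable_cost

    def break_even_units(self) -> float:
        # Volume above which the order starts generating profit.
        return self.fixed_costs / self.contribution_per_unit()

    def profit(self) -> float:
        return self.units * self.contribution_per_unit() - self.fixed_costs


order = Order(unit_price=120.0, unit_variable_cost=80.0,
              fixed_costs=200_000.0, units=6_000)
print(f"Break-even volume: {order.break_even_units():,.0f} units")
print(f"Profit on the order as placed: {order.profit():,.0f}")
```

Recommending acceptance would then hinge on whether the ordered volume clears the break-even point and on which product characteristics can still be negotiated.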
了不起的盖茨比第七章英语单词知乎以下是《了不起的盖茨比》第七章中出现的一些单词及其用法解释:1. Debauch: (verb) to corrupt morally or by intemperance or sensuality.Example: The wild party in Gatsby's mansion was filled with debauchery and excess.2. Sotto voce: (adverb) in a low voice, or in an undertone.Example: Jordan spoke to Nick sotto voce, revealing a secret that nobody else could hear.3. Affront: (verb) to insult intentionally.Example: Tom felt affronted when Gatsby openly declared his love for Daisy.4. Elude: (verb) to evade or escape from, as by daring, cleverness, or skill.Example: Despite all efforts, the truth about Gatsby's past eluded everyone.5. Nebulous: (adjective) hazy, vague, indistinct, or confused.Example: Gatsby's actual identity remained nebulous to many of his party guests.6. Meretricious: (adjective) alluring by a show of flashy or vulgar attractions, but often without real value.Example: Daisy was not impressed by the meretricious displays of wealth at Gatsby's parties.7. Contemptuous: (adjective) showing or expressing contempt or disdain; scornful.Example: Tom looked at Gatsby with a contemptuous expression, as he considered him a social climber.8. Ineffable: (adjective) incapable of being expressed or described in words; inexpressible.Example: Daisy experienced an ineffable sense of longing when Gatsby took her for a drive in his fancy car.9. Ramification: (noun) a consequence or implication; a branching out.Example: The ramification of Gatsby's obsession with Daisy was the destruction of his own life.10. Libertine: (noun) a person who is morally or sexually unrestrained, especially a dissolute man.Example: Gatsby was often seen as a libertine, indulging in extravagant parties and relationships.11. Sluggish: (adjective) displaying slow or lazy movements or responses.Example: The sluggish summer heat made everyone at the party feel lethargic and unmotivated.12. Pander: (verb) to cater to the lower tastes or base desires of others.Example: Gatsby's extravagant parties were seen by some as an attempt to pander to the desires of the wealthy elite.13. Incarnation: (noun) a particular physical form or state; a concrete or actual form of a quality or concept.Example: Gatsby believed that he could recreate himself into an incarnation of the man Daisy truly desired.14. Inexplicable: (adjective) unable to be explained or accounted for.Example: Daisy's sudden attraction towards Gatsby seemed inexplicable to many, considering their past.15. Insidious: (adjective) proceeding in a gradual, subtle way, but with harmful effects.Example: Tom warned Daisy about Gatsby's insidious intentions, accusing him of trying to steal her away.16. Supercilious: (adjective) behaving or looking as though one thinks they are superior to others; arrogant.Example: Tom's supercilious attitude towards Gatsby was evident in his condescending mannerisms.17. Saunter: (verb) to walk in a slow, relaxed, and confident manner.Example: Gatsby sauntered across the lawn towards Daisy, trying to appear nonchalant.18. Harrowed: (adjective) distressed or disturbed.Example: Gatsby's harrowed expression revealed the emotional turmoil he was experiencing.19. Truculent: (adjective) eager or quick to argue or fight; aggressively defiant.Example: Tom showed his truculent nature when he confronted Gatsby about his relationship with Daisy.20. Portentous: (adjective) of or like a portent; foreboding; full of unspecified meaning.Example: The dark clouds and thunderous sky seemed portentous, as if something significant was about to happen.21. 
Gaudiness: (noun) the quality of being tastelessly showy or overly ornate.Example: Despite the gaudiness of Gatsby's mansion, the guests were drawn to its opulence.22. Indiscernible: (adjective) impossible to see or clearly distinguish.Example: In the chaos of the party, individual voices became indiscernible and blended into a cacophony.23. Intermittent: (adjective) occurring at irregular intervals; not continuous or steady.Example: The intermittent rain throughout the night dampened the enthusiasm of the party guests.24. Stratum: (noun) a layer or a series of layers of rock in the ground.Example: Gatsby tried to climb the social stratum, hoping to be accepted by the upper class.25. Harlequin: (noun) a character in traditional pantomime; a buffoon.Example: Gatsby's harlequin smile hid the sadness and longing he felt for Daisy.26. Disconcerting: (adjective) causing one to feel unsettled or disturbed.Example: Daisy's disconcerting confession about her true feelings left Gatsby feeling disoriented and hurt.请注意,以上的双语例句是根据所给的单词和上下文进行编写的,但并非《了不起的盖茨比》中的原文。
Some Perspectives on Governmental Accounting Research (American Accounting Association). Authors: Rajiv D. Banker, William W. Cooper and Gordon Potter. Source: The Accounting Review, Vol. 67, No. 3 (July 1992), pp. 496-510. Publisher: American Accounting Association. Stable URL: /stable/247974. Accessed: 19/04/2013 11:43.
The Accounting Review, Vol. 67, No. 3, July 1992, pp. 496-510. Some Perspectives on Governmental Accounting Research. Rajiv D. Banker, University of Minnesota, Minneapolis; William W. Cooper, The University of Texas at Austin; Gordon Potter, University of Minnesota, Minneapolis. Synopsis and introduction: According to the December 1991 Survey of Current Business, state and local government expenditures account for more than 11% of the gross domestic product of the United States.
Moody's 1991 municipal manual shows that the debt outstanding for these governmental units is now approaching 800 billion dollars. In addition, a report by the Public Securities Association (1987) indicates that this debt grew at a compound annual rate of 12% from 1966 to 1986.
The continuing growth in the scale of state and local government activities, and of their accounting operations, clearly forms an important part of the political and economic environment.
Characteristically, these organizations raise important accountability issues that deserve attention from accounting research.
As the articles by Feroz and Wilson and by Deis and Giroux in this issue show, we have invited reviews that address some of these topics.
The Feroz and Wilson study can be seen as extending capital-markets-based research to the public sector, examining the effects of financial accounting disclosures on security prices and yields.
They hypothesize that the market for municipal government debt is segmented along state and regional lines, and they study the effects of differential disclosure on borrowing costs.
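The compound-growth figure cited above can be checked with a short calculation. The 1966 base value below is a hypothetical round number used only to show the arithmetic; it is not a figure from the Public Securities Association report.

```python
# Rough check of the claim that municipal debt grew at a 12% compound annual
# rate over the 20 years from 1966 to 1986. The starting value is hypothetical.
rate = 0.12
years = 1986 - 1966              # 20 years of compounding
base_1966 = 80e9                 # assumed 1966 debt, in dollars

growth_factor = (1 + rate) ** years
debt_1986 = base_1966 * growth_factor
print(f"Growth factor over {years} years: {growth_factor:.2f}x")
print(f"Implied 1986 debt: ${debt_1986 / 1e9:.0f} billion")
```

A 12% compound rate multiplies the base by roughly 9.6 times over 20 years, so a base on the order of 80 billion dollars is consistent with debt approaching 800 billion dollars by the late 1980s.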
IAS41 International Accounting Standard41AgricultureIn April2001the International Accounting Standards Board(IASB)adopted IAS41 Agriculture,which had originally been issued by the International Accounting Standards Committee(IASC)in February2001.In December2003the IASB issued a revised IAS41as part of its initial agenda of technical projects.Other IFRSs have made minor consequential amendments to IAS41.They include IFRS9 Financial Instruments(issued November2009and October2010)and IFRS13Fair Value Measurement(issued May2011).IFRS Foundation A1221IAS41C ONTENTSfrom paragraph INTRODUCTION IN1 INTERNATIONAL ACCOUNTING STANDARD41AGRICULTUREOBJECTIVESCOPE1 DEFINITIONS5 Agriculture-related definitions5 General definitions8 RECOGNITION AND MEASUREMENT10 Gains and losses26 Inability to measure fair value reliably30 GOVERNMENT GRANTS34 DISCLOSURE40 General40 Additional disclosures for biological assets where fair value cannot bemeasured reliably54 Government grants57 EFFECTIVE DATE AND TRANSITION58FOR THE ACCOMPANYING DOCUMENTS LISTED BELOW,SEE PART B OF THIS EDITIONBASIS FOR CONCLUSIONSBASIS FOR IASC’S CONCLUSIONSILLUSTRATIVE EXAMPLESA1222IFRS FoundationIAS41 International Accounting Standard41Agriculture(IAS41)is set out in paragraphs1–61.All the paragraphs have equal authority but retain the IASC format of the Standard when it was adopted by the IASB.IAS41should be read in the context of its objective and the Basis for Conclusions,the Preface to International Financial Reporting Standards and the Conceptual Framework for Financial Reporting.IAS8Accounting Policies,Changes in Accounting Estimates and Errors provides a basis for selecting and applying accounting policies in the absence of explicit guidance.IFRS Foundation A1223IAS41IntroductionIN1IAS41prescribes the accounting treatment,financial statement presentation, and disclosures related to agricultural activity,a matter not covered in otherStandards.Agricultural activity is the management by an entity of the biologicaltransformation of living animals or plants(biological assets)for sale,intoagricultural produce,or into additional biological assets.IN2IAS41prescribes,among other things,the accounting treatment for biological assets during the period of growth,degeneration,production,and procreation,and for the initial measurement of agricultural produce at the point of harvest.It requires measurement at fair value less costs to sell from initial recognition ofbiological assets up to the point of harvest,other than when fair value cannot bemeasured reliably on initial recognition.However,IAS41does not deal withprocessing of agricultural produce after harvest;for example,processing grapesinto wine and wool into yarn.IN3There is a presumption that fair value can be measured reliably for a biological asset.However,that presumption can be rebutted only on initial recognition fora biological asset for which quoted market prices are not available and for whichalternative fair value measurements are determined to be clearly unreliable.Insuch a case,IAS41requires an entity to measure that biological asset at its costless any accumulated depreciation and any accumulated impairment losses.Once the fair value of such a biological asset becomes reliably measurable,anentity should measure it at its fair value less costs to sell.In all cases,an entityshould measure agricultural produce at the point of harvest at its fair value lesscosts to sell.IN4IAS41requires that a change in fair value less costs to sell of a biological asset be included in profit or 
loss for the period in which it arises.In agriculturalactivity,a change in physical attributes of a living animal or plant directlyenhances or diminishes economic benefits to the entity.Under atransaction-based,historical cost accounting model,a plantation forestry entitymight report no income until first harvest and sale,perhaps30years afterplanting.On the other hand,an accounting model that recognises andmeasures biological growth using current fair values reports changes in fairvalue throughout the period between planting and harvest.IN5IAS41does not establish any new principles for land related to agricultural activity.Instead,an entity follows IAS16Property,Plant and Equipment or IAS40Investment Property,depending on which standard is appropriate in thecircumstances.IAS16requires land to be measured either at its cost less anyaccumulated impairment losses,or at a revalued amount.IAS40requires landthat is investment property to be measured at its fair value,or cost less anyaccumulated impairment losses.Biological assets that are physically attached toland(for example,trees in a plantation forest)are measured at their fair valueless costs to sell separately from the land.IN6IAS41requires an unconditional government grant related to a biological asset measured at its fair value less costs to sell to be recognised in profit or loss when,and only when,the government grant becomes receivable.If a government A1224IFRS FoundationIAS41grant is conditional,including when a government grant requires an entity notto engage in specified agricultural activity,an entity should recognise thegovernment grant in profit or loss when,and only when,the conditionsattaching to the government grant are met.If a government grant relates to abiological asset measured at its cost less any accumulated depreciation and anyaccumulated impairment losses,the entity applies IAS20Accounting forGovernment Grants and Disclosure of Government Assistance.IN7IAS41is effective for annual financial statements covering periods beginning on or after1January2003.Earlier application is encouraged.IN8IAS41does not establish any specific transitional provisions.The adoption of IAS41is accounted for in accordance with IAS8Accounting Policies,Changes inAccounting Estimates and Errors.IN9The illustrative examples accompanying IAS41provide examples of the application of the Standard.The Basis for Conclusions summarises the Board’sreasons for adopting the requirements set out in IAS41.IFRS Foundation A1225IAS41International Accounting Standard41AgricultureObjectiveThe objective of this Standard is to prescribe the accounting treatment anddisclosures related to agricultural activity.Scope1This Standard shall be applied to account for the following when they relate to agricultural activity:(a)biological assets;(b)agricultural produce at the point of harvest;and(c)government grants covered by paragraphs34and35.2This Standard does not apply to:(a)land related to agricultural activity(see IAS16Property,Plant andEquipment and IAS40Investment Property);and(b)intangible assets related to agricultural activity(see IAS38IntangibleAssets).3This Standard is applied to agricultural produce,which is the harvested product of the entity’s biological assets,only at the point of harvest.Thereafter,IAS2Inventories or another applicable Standard is applied.Accordingly,this Standarddoes not deal with the processing of agricultural produce after harvest;forexample,the processing of grapes into wine by a vintner who has grown thegrapes.While such processing 
may be a logical and natural extension ofagricultural activity,and the events taking place may bear some similarity tobiological transformation,such processing is not included within the definitionof agricultural activity in this Standard.4The table below provides examples of biological assets,agricultural produce,and products that are the result of processing after harvest:Biological assets Agricultural produce Products that are theresult of processingafter harvestSheep Wool Y arn,carpetFelled trees Logs,lumberTrees in a plantationforestPlants Cotton Thread,clothingHarvested cane SugarDairy cattle Milk Cheesecontinued... A1226IFRS FoundationIAS41 ...continuedBiological assets Agricultural produce Products that are theresult of processingafter harvestPigs Carcass Sausages,cured hamsBushes Leaf Tea,cured tobaccoVines Grapes WineFruit trees Picked fruit Processed fruit DefinitionsAgriculture-related definitions5The following terms are used in this Standard with the meanings specified:Agricultural activity is the management by an entity of the biologicaltransformation and harvest of biological assets for sale or for conversioninto agricultural produce or into additional biological assets.Agricultural produce is the harvested product of the entity’s biologicalassets.A biological asset is a living animal or plant.Biological transformation comprises the processes of growth,degeneration,production,and procreation that cause qualitative orquantitative changes in a biological asset.Costs to sell are the incremental costs directly attributable to the disposalof an asset,excluding finance costs and income taxes.A group of biological assets is an aggregation of similar living animals orplants.Harvest is the detachment of produce from a biological asset or thecessation of a biological asset’s life processes.6Agricultural activity covers a diverse range of activities;for example,raising livestock,forestry,annual or perennial cropping,cultivating orchards andplantations,floriculture and aquaculture(including fish farming).Certaincommon features exist within this diversity:(a)Capability to change.Living animals and plants are capable of biologicaltransformation;(b)Management of change.Management facilitates biological transformationby enhancing,or at least stabilising,conditions necessary for the processto take place(for example,nutrient levels,moisture,temperature,fertility,and light).Such management distinguishes agricultural activityfrom other activities.For example,harvesting from unmanaged sources(such as ocean fishing and deforestation)is not agricultural activity;andIFRS Foundation A1227IAS41(c)Measurement of change.The change in quality(for example,genetic merit,density,ripeness,fat cover,protein content,and fibre strength)orquantity(for example,progeny,weight,cubic metres,fibre length ordiameter,and number of buds)brought about by biologicaltransformation or harvest is measured and monitored as a routinemanagement function.7Biological transformation results in the following types of outcomes:(a)asset changes through(i)growth(an increase in quantity orimprovement in quality of an animal or plant),(ii)degeneration(adecrease in the quantity or deterioration in quality of an animal orplant),or(iii)procreation(creation of additional living animals orplants);or(b)production of agricultural produce such as latex,tea leaf,wool,andmilk.General definitions8The following terms are used in this Standard with the meanings specified:Carrying amount is the amount at which an asset is recognised in thestatement 
of financial position.Fair value is the price that would be received to sell an asset or paid totransfer a liability in an orderly transaction between market participantsat the measurement date.(See IFRS13Fair Value Measurement.)Government grants are as defined in IAS20Accounting for GovernmentGrants and Disclosure of Government Assistance.9[Deleted]Recognition and measurement10An entity shall recognise a biological asset or agricultural produce when, and only when:(a)the entity controls the asset as a result of past events;(b)it is probable that future economic benefits associated with theasset will flow to the entity;and(c)the fair value or cost of the asset can be measured reliably.11In agricultural activity,control may be evidenced by,for example,legal ownership of cattle and the branding or otherwise marking of the cattle onacquisition,birth,or weaning.The future benefits are normally assessed bymeasuring the significant physical attributes.12A biological asset shall be measured on initial recognition and at the end of each reporting period at its fair value less costs to sell,except for thecase described in paragraph30where the fair value cannot be measuredreliably.A1228IFRS FoundationIAS4113Agricultural produce harvested from an entity’s biological assets shall be measured at its fair value less costs to sell at the point of harvest.Suchmeasurement is the cost at that date when applying IAS2Inventories oranother applicable Standard.14[Deleted]15The fair value measurement of a biological asset or agricultural produce may be facilitated by grouping biological assets or agricultural produce according tosignificant attributes;for example,by age or quality.An entity selects theattributes corresponding to the attributes used in the market as a basis forpricing.16Entities often enter into contracts to sell their biological assets or agricultural produce at a future date.Contract prices are not necessarily relevant inmeasuring fair value,because fair value reflects the current market conditionsin which market participant buyers and sellers would enter into a transaction.As a result,the fair value of a biological asset or agricultural produce is notadjusted because of the existence of a contract.In some cases,a contract for thesale of a biological asset or agricultural produce may be an onerous contract,asdefined in IAS37Provisions,Contingent Liabilities and Contingent Assets.IAS37applies to onerous contracts.[Deleted]17–2122An entity does not include any cash flows for financing the assets,taxation,or re-establishing biological assets after harvest(for example,the cost of replantingtrees in a plantation forest after harvest).23[Deleted]24Cost may sometimes approximate fair value,particularly when:(a)little biological transformation has taken place since initial costincurrence(for example,for fruit tree seedlings planted immediatelyprior to the end of a reporting period);or(b)the impact of the biological transformation on price is not expected to bematerial(for example,for the initial growth in a30-year pine plantationproduction cycle).25Biological assets are often physically attached to land(for example,trees in a plantation forest).There may be no separate market for biological assets that areattached to the land but an active market may exist for the combined assets,thatis,the biological assets,raw land,and land improvements,as a package.An entity may use information regarding the combined assets to measure thefair value of the biological assets.For example,the fair value of raw land 
andland improvements may be deducted from the fair value of the combined assetsto arrive at the fair value of biological assets.IFRS Foundation A1229IAS41Gains and losses26A gain or loss arising on initial recognition of a biological asset at fair value less costs to sell and from a change in fair value less costs to sell ofa biological asset shall be included in profit or loss for the period inwhich it arises.27A loss may arise on initial recognition of a biological asset,because costs to sell are deducted in determining fair value less costs to sell of a biological asset.Again may arise on initial recognition of a biological asset,such as when a calf isborn.28A gain or loss arising on initial recognition of agricultural produce at fair value less costs to sell shall be included in profit or loss for the period inwhich it arises.29A gain or loss may arise on initial recognition of agricultural produce as a result of harvesting.Inability to measure fair value reliably30There is a presumption that fair value can be measured reliably for a biological asset.However,that presumption can be rebutted only oninitial recognition for a biological asset for which quoted market pricesare not available and for which alternative fair value measurements aredetermined to be clearly unreliable.In such a case,that biological assetshall be measured at its cost less any accumulated depreciation and anyaccumulated impairment losses.Once the fair value of such a biologicalasset becomes reliably measurable,an entity shall measure it at its fairvalue less costs to sell.Once a non-current biological asset meets thecriteria to be classified as held for sale(or is included in a disposal groupthat is classified as held for sale)in accordance with IFRS5Non-currentAssets Held for Sale and Discontinued Operations,it is presumed that fairvalue can be measured reliably.31The presumption in paragraph30can be rebutted only on initial recognition.An entity that has previously measured a biological asset at its fair value lesscosts to sell continues to measure the biological asset at its fair value less costs tosell until disposal.32In all cases,an entity measures agricultural produce at the point of harvest at its fair value less costs to sell.This Standard reflects the view that the fair value ofagricultural produce at the point of harvest can always be measured reliably.33In determining cost,accumulated depreciation and accumulated impairment losses,an entity considers IAS2,IAS16and IAS36Impairment of Assets.Government grants34An unconditional government grant related to a biological asset measured at its fair value less costs to sell shall be recognised in profit orloss when,and only when,the government grant becomes receivable.A1230IFRS FoundationIAS4135If a government grant related to a biological asset measured at its fair value less costs to sell is conditional,including when a government grantrequires an entity not to engage in specified agricultural activity,anentity shall recognise the government grant in profit or loss when,andonly when,the conditions attaching to the government grant are met.36Terms and conditions of government grants vary.For example,a grant may require an entity to farm in a particular location for five years and require theentity to return all of the grant if it farms for a period shorter than five years.Inthis case,the grant is not recognised in profit or loss until the five years havepassed.However,if the terms of the grant allow part of it to be retainedaccording to the time that has 
elapsed,the entity recognises that part in profitor loss as time passes.37If a government grant relates to a biological asset measured at its cost less any accumulated depreciation and any accumulated impairment losses(see paragraph30),IAS20is applied.38This Standard requires a different treatment from IAS20,if a government grant relates to a biological asset measured at its fair value less costs to sell or agovernment grant requires an entity not to engage in specified agriculturalactivity.IAS20is applied only to a government grant related to a biologicalasset measured at its cost less any accumulated depreciation and anyaccumulated impairment losses.Disclosure39[Deleted]General40An entity shall disclose the aggregate gain or loss arising during the current period on initial recognition of biological assets and agriculturalproduce and from the change in fair value less costs to sell of biologicalassets.41An entity shall provide a description of each group of biological assets.42The disclosure required by paragraph41may take the form of a narrative or quantified description.43An entity is encouraged to provide a quantified description of each group of biological assets,distinguishing between consumable and bearer biologicalassets or between mature and immature biological assets,as appropriate.Forexample,an entity may disclose the carrying amounts of consumable biologicalassets and bearer biological assets by group.An entity may further divide thosecarrying amounts between mature and immature assets.These distinctionsprovide information that may be helpful in assessing the timing of future cashflows.An entity discloses the basis for making any such distinctions.44Consumable biological assets are those that are to be harvested as agricultural produce or sold as biological assets.Examples of consumable biological assetsare livestock intended for the production of meat,livestock held for sale,fish infarms,crops such as maize and wheat,and trees being grown for lumber.BearerIFRS Foundation A1231IAS41biological assets are those other than consumable biological assets;for example,livestock from which milk is produced,grape vines,fruit trees,and trees fromwhich firewood is harvested while the tree remains.Bearer biological assets arenot agricultural produce but,rather,are self-regenerating.45Biological assets may be classified either as mature biological assets or immature biological assets.Mature biological assets are those that have attainedharvestable specifications(for consumable biological assets)or are able tosustain regular harvests(for bearer biological assets).46If not disclosed elsewhere in information published with the financial statements,an entity shall describe:(a)the nature of its activities involving each group of biologicalassets;and(b)non-financial measures or estimates of the physical quantities of:(i)each group of the entity’s biological assets at the end of theperiod;and(ii)output of agricultural produce during the period.47–48[Deleted]49An entity shall disclose:(a)the existence and carrying amounts of biological assets whose title isrestricted,and the carrying amounts of biological assets pledged assecurity for liabilities;(b)the amount of commitments for the development or acquisition ofbiological assets;and(c)financial risk management strategies related to agriculturalactivity.50An entity shall present a reconciliation of changes in the carrying amount of biological assets between the beginning and the end of thecurrent period.The reconciliation shall include:(a)the gain 
or loss arising from changes in fair value less costs to sell;
(b) increases due to purchases;
(c) decreases attributable to sales and biological assets classified as held for sale (or included in a disposal group that is classified as held for sale) in accordance with IFRS 5;
(d) decreases due to harvest;
(e) increases resulting from business combinations;
(f) net exchange differences arising on the translation of financial statements into a different presentation currency, and on the translation of a foreign operation into the presentation currency of the reporting entity; and
(g) other changes.

51 The fair value less costs to sell of a biological asset can change due to both physical changes and price changes in the market. Separate disclosure of physical and price changes is useful in appraising current period performance and future prospects, particularly when there is a production cycle of more than one year. In such cases, an entity is encouraged to disclose, by group or otherwise, the amount of change in fair value less costs to sell included in profit or loss due to physical changes and due to price changes. This information is generally less useful when the production cycle is less than one year (for example, when raising chickens or growing cereal crops).

52 Biological transformation results in a number of types of physical change—growth, degeneration, production, and procreation, each of which is observable and measurable. Each of those physical changes has a direct relationship to future economic benefits. A change in fair value of a biological asset due to harvesting is also a physical change.

53 Agricultural activity is often exposed to climatic, disease and other natural risks. If an event occurs that gives rise to a material item of income or expense, the nature and amount of that item are disclosed in accordance with IAS 1 Presentation of Financial Statements. Examples of such an event include an outbreak of a virulent disease, a flood, a severe drought or frost, and a plague of insects.

Additional disclosures for biological assets where fair value cannot be measured reliably

54 If an entity measures biological assets at their cost less any accumulated depreciation and any accumulated impairment losses (see paragraph 30) at the end of the period, the entity shall disclose for such biological assets:
(a) a description of the biological assets;
(b) an explanation of why fair value cannot be measured reliably;
(c) if possible, the range of estimates within which fair value is highly likely to lie;
(d) the depreciation method used;
(e) the useful lives or the depreciation rates used; and
(f) the gross carrying amount and the accumulated depreciation (aggregated with accumulated impairment losses) at the beginning and end of the period.

55 If, during the current period, an entity measures biological assets at their cost less any accumulated depreciation and any accumulated impairment losses (see paragraph 30), an entity shall disclose any gain or loss recognised on disposal of such biological assets and the reconciliation required by paragraph 50 shall disclose amounts related to such biological assets separately. In addition, the reconciliation shall include the following amounts included in profit or loss related to those biological assets:
(a) impairment losses;
(b) reversals of impairment losses; and
(c) depreciation.

56 If the fair value of biological assets previously measured at their cost less any accumulated depreciation and any accumulated impairment losses becomes reliably measurable during the current period, an entity
shall disclose for those biological assets:
(a) a description of the biological assets;
(b) an explanation of why fair value has become reliably measurable; and
(c) the effect of the change.

Government grants

57 An entity shall disclose the following related to agricultural activity covered by this Standard:
(a) the nature and extent of government grants recognised in the financial statements;
(b) unfulfilled conditions and other contingencies attaching to government grants; and
(c) significant decreases expected in the level of government grants.

Effective date and transition

58 This Standard becomes operative for annual financial statements covering periods beginning on or after 1 January 2003. Earlier application is encouraged. If an entity applies this Standard for periods beginning before 1 January 2003, it shall disclose that fact.

59 This Standard does not establish any specific transitional provisions. The adoption of this Standard is accounted for in accordance with IAS 8 Accounting Policies, Changes in Accounting Estimates and Errors.

60 Paragraphs 5, 6, 17, 20 and 21 were amended and paragraph 14 deleted by Improvements to IFRSs issued in May 2008. An entity shall apply those amendments prospectively for annual periods beginning on or after 1 January 2009. Earlier application is permitted. If an entity applies the amendments for an earlier period it shall disclose that fact.

61 IFRS 13, issued in May 2011, amended paragraphs 8, 15, 16, 25 and 30 and deleted paragraphs 9, 17–21, 23, 47 and 48. An entity shall apply those amendments when it applies IFRS 13.
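To make the measurement model of paragraphs 26–28 concrete, the short sketch below works through a hypothetical newborn calf. The figures and the helper function are illustrative only and are not part of the Standard, which prescribes the measurement basis but no particular computation.

# Illustrative sketch of measurement at fair value less costs to sell (IAS 41, paragraphs 26-28).
# All figures are hypothetical.
def fair_value_less_costs_to_sell(fair_value: float, costs_to_sell: float) -> float:
    return fair_value - costs_to_sell

# Initial recognition of a newborn calf (paragraph 27): a gain arises because
# the asset had no previous carrying amount.
calf_at_birth = fair_value_less_costs_to_sell(fair_value=500.0, costs_to_sell=20.0)    # 480.0
gain_on_initial_recognition = calf_at_birth                                             # taken to profit or loss

# Remeasurement at the end of the period (paragraph 26): the change in fair value
# less costs to sell is also taken to profit or loss.
calf_at_year_end = fair_value_less_costs_to_sell(fair_value=650.0, costs_to_sell=25.0)  # 625.0
change_for_the_period = calf_at_year_end - calf_at_birth                                 # 145.0
print(gain_on_initial_recognition, change_for_the_period)

Both amounts would feed into the aggregate gain or loss disclosed under paragraph 40 and into the reconciliation of carrying amounts required by paragraph 50.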
Survey Tools for Assessing Service Delivery

Jan Dehn
Ritva Reinikka
Jakob Svensson

June 2002

_____________________
Contributions and comments of Magnus Lindelöw and Aminur Rahman are gratefully acknowledged.

Contents
1. Introduction
2. Public spending, service delivery, and outcomes
3. Public sector agencies
4. Key Features of PETS and QSDS
   A. Key features
   B. Other survey-based tools
   C. Uses and applications of PETS/QSDS
5. How to design and implement a PETS and QSDS
   A. General
   B. Fact-finding, consultations and purpose of the study
   C. Concept Paper
   D. Rapid data assessment
   E. Local consultant
   F. Questionnaire design
   G. Training and field-testing
   H. Data entry and verification
   I. Analysis and reporting
   J. Ownership
   K. Linking PETS/QSDS with other surveys
   L. Panel data
6. Sampling strategy and data availability
   A. Unit of Study
   B. Data availability
   C. Private for-profit and not-for-profit
   D. Sample frame, sample size, and stratification
   E. Sampling issues involved in linking PETS and QSDS
   F. Non-sampling error and field-testing
7. Findings from PETS
   A. General
   B. Leakage (capture) of public funds
   C. Incentives to staff (moral hazard)
8. Findings from a QSDS
Findings and emerging issues
Bibliography

1. Introduction

Central government budget allocations can be poor predictors of the actual quantity and quality of public services in poor countries with weak governance, a conclusion supported by the weak relationship between public spending and growth and social development indicators in cross-country analysis. There is also evidence that government funds are spent on the wrong goods (private rather than public) and the wrong people (better-off rather than the poor), fail to reach intended service providers, or are converted into actual services inefficiently. Additionally, households may choose not to take advantage of such services for various reasons.

The ability to diagnose and measure problems of service delivery within the public and private delivery systems is a prerequisite to designing policy reforms and institutions to improve service delivery. This paper argues that micro-level tools are necessary to assess both the quality and quantity of services, and the complexities involved in transforming budgets into goods and services. Public Expenditure Tracking Surveys (PETS) and Quantitative Service Delivery Surveys (QSDS) have been developed in full recognition of the characteristics of public organizations, and with the specific objectives of identifying where service delivery problems arise, quantifying their relative importance, discovering how and why they arise, and devising means of resolving them. PETS and QSDS gather quantitative data on a sample survey basis, including inputs, outputs, and other characteristics, directly from the service-providing unit. A PETS assesses (often diagnostically) the leakage of public funds or resources before they reach the intended beneficiary. A QSDS takes a service-facility-based approach to assessing the incentives for facility-level staff to produce high-quality services, and provides a measure of efficiency at the level of the frontline provider together with information about its determinants.

This paper is intended to guide those who wish to undertake a PETS or QSDS. The paper is organized as follows. Section 2 outlines the motivation for developing PETS and QSDS, namely the persistent failure to observe close relationships between public sector spending and outcomes.
Section 3 discusses general features of public service agencies that provide public services and some reasons why information on performance is not easy to obtain. Section 4 summarizes key features and potential uses of PETS and QSDS, including a comparison with other tools for public expenditure and service delivery analysis. In section 5, we highlight key lessons learned through implementing these surveys and provide details on how to design and implement PETS and QSDS. We discuss sampling strategies and data issues in section 6. Sections 7 and 8 present findings from a number PETS studies and the first QSDS in health care in Uganda.2. Public spending, service delivery, and outcomesConventionally central government budget allocations are used as indicators of the supply of public services. It has become increasingly clear, however, that budget allocations can be poor predictors of the actual quantity and quality of public services, especially in countries with poor governance and weak institutions. A substantial body of cross-country empirical studies shows that the associations between public spending on the one hand and growth and social development outcomes on the other are ambiguous at best. For example, Kormendi and Mequire (1985) and Ram (1986) find that higher government expenditures are associated with higher growth, while Landau (1986), Barro (1991), Dowrick (1992), and Alesina (1997) find higher government expenditures to be associated with lower growth. Easterly and Rebelo (1993) find that overall public investment has a very low impact on growth, and Devarajan, Swaroop, and Zou (1996) observe that the standard candidates for productive expenditures have either a negative or an insignificant relationship with growth.Sector specific studies have likewise failed to reveal a strong link between spending and outcomes. With respect to public spending and educational outcomes, the relationship between the level of resources spent on schooling and educational outcomes remains weak (Hanushek 1995). One part of the problem seems to be that government funds do not reach schools intact, as studies in Uganda (Reinikka and Svensson 2001) and Indonesia (James, King, and Suryadi 1995) have shownCross-country studies of the health outcomes have come to a fair consensus on two points. First, socioeconomic characteristics explain nearly all of the variation in the mortality rates across countries. These include gross domestic product (GDP) per capita, distribution of income, and level of female education (Filmer and Prichett 1999). Second, public expenditure on health care has had little impact on average health status. According to a review by Filmer, Hammer and Pritchett (2000), the share of public spending on health care is not a significant determinant of health outcomes, such as life expectancy and child mortality. In a recent paper also using cross-country data, Rajkumar and Swaroop (2002) show that the impact of public spending on human development outcomes depends on governance measured by level of corruption and quality of bureaucracy. When governance is better so is the effect of spending on education and health outcomes.3. Public sector agenciesTwo key questions in public policy today are (1) why does the level of public expenditure on average have such a limited effect on human development outcomes? and (2) what can be done to improve performance?The literature provides at least four interrelated explanations. 
First, governments may be spending on the wrong goods or the wrong people. A large portion of public spending on health and education is devoted to private goods—ones where government spending is likely to crowd out private spending (Hammer, Nabi, and Cercone 1995). Many benefit incidence studies of health and education spending also show that benefits accrue largely to the rich and middle-class, while the share going to the poorest 20 percent (where it can make a difference) is always less than 20 percent (Castro-Leal and others 1999).Second, even when governments spend on the right goods or the right people, the money fails to reach the frontline service provider. A PETS study in Uganda in the early 1990s showed that only 13 percent of nonwage recurrent expenditures for primary education actually reached the primary school (Reinikka and Svensson 2001). Furthermore, much of the variation in funding received across schools could be explained more by political economy than by efficiency and equity considerations. Similar findings have been obtained in Ghana and Tanzania (for a review see Reinikka and Svensson 2002a).Third, incentives to deliver a high quality product or service are often missing. Even when the money reaches the primary school or health clinic, the service providers may be poorly paid and ineffectively monitored. Clients meanwhile, have limited information to enable them to hold service providers accountable.Fourth, even if the services are effectively provided, households may not take advantage of them. For economic and other reasons, parents pull their children out of school or fail to take them to the clinic. This “demand-side” failure often interacts with the supply-side failures to generate a low-level of public services and outcomes among the poor.All four explanations, and in particular the second and third point, highlight a key motivation for undertaking a QSDS/PETS, namely that improving public services must involve an explicit treatment of incentives within the public sector.1 The organizational structure of public sector agencies involves multiple tiers of management and frontline workers. Multiplicity is also a key aspect of public sector agencies in terms of the tasks they perform and stakeholder or constituencies they serve. Tasks and interests at each tier may conflict with each other from the viewpoint of limited resources and finite time, and various stakeholders may also have conflicting interests. The output of public service agencies is often difficult to measure and systematic information is rarely available about specific inputs and outputs of service delivery, particularly in developing countries.1 Dixit (2000) reviews the characteristics of public organizations in light of incentives and organization theory. For further details, see also Bernheim and Whinston (1986), Dixit (1996, 1997), Holmström and Milgrom (1988), Martimort (1992), and Stole (1991).Public sector organizations share the following key features:• A multi-tier organizational structure, with various tiers of government down to the frontline provider;• Multiple stakeholders with different and often competing objectives and an underlying political economy through which the stakeholders exert pressure onpublic sector agencies to achieve their respective goals;• Multiplicity of tasks with various degrees of substitutability; and• Vague measurability of outputs and tasks.Dixit (2000) gives an example of public education to illustrate some of these features. 
Multi-tasking includes providing literacy and numeracy and other direct skills, supporting the emotional and physical growth of the children, providing vocational skills and preparing pupils for working life, providing skills in health and financial management, instilling citizenship, overcoming the disadvantages of home life, and ensuring children can grow up in a drug and violence-free environment. While they are not mutually contradictory, these goals must compete for limited resources of schools and teachers and are frequently considered interchangeable in the schools’ production process. Moreover, it is very difficult to measure the output of many of these tasks. The diverse body of stakeholders includes parents and children, teachers and their unions, taxpayers, potential employers of graduates, society as a whole, private schools, and various groups favoring or opposing specific components of the curriculum. These principals have diverse preferences and objectives. Parents want “good education” and day care for their children, teachers and unions want higher pay, taxpayers want low costs, employers want vocational skills, society wants good citizens, and private schools compete for pupils and some public funds. Again, many of these objectives are mutually conflicting, for instance taxpayers’ objective of low costs vis-à-vis teachers and unions’ goal who want higher pay. And yet teachers are often not interested in pay alone but in challenging work and career prospects.These features not only complicate analysis of performance; they may also explain why so little detailed information exists on public sector performance. First, there is often inadequate information on outputs, possibly because management information systems tend to be unreliable in the absence of adequate incentives to maintain them on a regular basis. Lack of information weakens the agents’ (that is, public service agencies and/or their employees) incentive to make the required effort—a straightforward result of moral hazard—which in turn adversely affects both the quality and quantity of services. Lack of information can be further entrenched if government allocates expenditures away from areas of poor measurability and verifiability precisely because of poor measurability and verifiability rather than on the technical and economic merits of such expenditures per se.Second, there is often inadequate information on agent actions. Agents can often resist effective monitoring in the presence of multiple tasks and multiple principals with substitutable objectives. Moreover, as some actions are more verifiable than others, it is not always optimal to provide implicit incentives to carry out specific tasks, since the agent will then tend to divert all effort from unverifiable to verifiable tasks. In education, for example, exam results would be disproportionately emphasized. Incentive schemes are most suitable when outcomes are clearly defined and unambiguous, and become weak when neither outcomes nor actions are observable, such as in a typical government ministry.4. Key Features of PETS and QSDS2A. Key featuresApart from information on central government budget allocations, information on actual public spending is seldom available in many developing countries. The PETS was designed to provide such information, on a sample survey basis, from different tiers of government, including frontline service facilities with the objective of identifying the location and extent of impediments in financial flows. 
Government resources, which are typically earmarked for particular uses in the budget (votes, line items), flow upon release, within a pre-defined legal, regulatory, and institutional framework, passing through the layers of the government and via the banking system down to service facilities, which are charged with the responsibility of exercising the actual spending. A PETS tracks the flow of resources through these institutional strata in order to determine how much of original resources reach each level, and how long resources take to get there (if they get there at all). It is therefore useful as a device for locating and quantifying (political and bureaucratic) capture, leakage of funds, and problems in the deployment of human and financial resources. It can in principle also be used to evaluate impediments to the reverse flow of information to account for actual expenditures.The QSDS goes beyond tracking funds to examine efficacy of spending, as well as incentives, oversight, and the relationship between agents and principals. In the QSDS, the facility or frontline service provider is typically the main unit of observation in much the same way that the firm is the unit of observation in enterprise surveys and the household is the unit in household surveys.The QSDS can be applied to government, private for-profit and private not-for-profit service providers. In each case, data are collected both through interviews of managers and staff and from the service provider’s records (say, both local government and2 Examples can be found at /programs/public_services/topic/tools/.facility). In some cases, beneficiaries are also surveyed. Triangulating the data collection this way means that information serves as a means of cross-validating information obtained separately from each source. Thus the compilation of information gathered through this method typically requires much more effort than, say, a perception survey of users, making it costlier and more time consuming to implement than its qualitative alternatives. But in many cases the benefits of quantitative analysis by far offset its cost.As the PETS and QSDS complement each other, a PETS may be conducted in conjunction with a QSDS. Their combination allows a detailed evaluation of wider institutional and resource-flow problems on frontline service delivery. The facility-level analysis can also be linked “upstream” to the public administration and political processes (through public official surveys) and “downstream” to households through (household surveys) to combine the supply of and demand for service.B. Other survey-based toolsA number of tools exist to analyze provider behavior, including facility modules in household surveys, benefit incidence analysis, and empirical studies to estimate facility (in particular hospital) cost functions. PETS and QSDS are distinct from these other tools in the following key respects.First, PETS/QSDS differ from other survey-based research tools by defining the service provider (and the incentives facing the provider) as the key unit of analysis (as opposed to, say, the firm or the household). It is not unusual from household surveys to include facility modules. Previously the Living Standard Measurement Study (LSMS) household surveys included health facility modules on an ad hoc basis (Alderman and Lavy 1996). A number of the Demographic and Health Surveys (DHS) carried out in over 50 developing countries have also included a service provider component. 
The Family Life Surveys implemented by RAND combined health provider surveys with those of households (for a review of health facility surveys see Lindelöw and Wagstaff 2001). The rationale for including a facility module in a household survey is to characterize the link between access to and quality of public services and key household welfare indicators. These modules collect quantitative data on medical procedures, equipment and staff availability, and supervision, as well as carry out various quality checks on staff (Frankenberg and Beegle 1998; Thomas, Lavy, and Strauss 1996; Alderman and Lavy 1996).The perspective in these surveys, however, is that of the household rather than the service provider per se. As a consequence they pay little attention to the question of why quality and access are the way they are. This is reflected in the type of data collected, which is mainly on access indicators and the range of services offered. In other words, these surveys largely ignore provider behavior and the processes and complexities, as discussed above, through which public spending is transformed into services.In addition, in most cases, facility information is collected as a part of community questionnaires, which rely on the knowledge of one or more informed individuals (Frankenberg 2000). Information supplied by informants is therefore not only heavily dependent on the perception of a few individuals but also not detailed enough to form a basis for analysis of service delivery, such as operational efficiency, utilization, and other performance indicators. To the extent that the information is based on perceptions, there may be additional problems due to the subjective nature of the data and its sensitivity to respondents’ expectations. Perceptions are often useful as a means of formulating testable hypotheses rather than as a means of testing them.Second, PETS/QSDS do not use budgeted costs as a basis for analysis. In benefit incidence analysis household data on consumption of public goods are combined with information on budget allocations for public expenditures to determine a unit subsidy per person. Household usage of the service is then aggregated across key social groups to impute the pattern and distribution of service provision. By contrast, PETS/QSDS collect data on actual spending and services provided at the facility level.Third, PETS/QSDS explicitly recognize the “umbilical” link between public service providers and the rest of the public sector. Providers of public services typically rely on the wider government structure for resources, guidance about what services to provide, and how to provide them. This dependence makes them sensitive to systemwide problems in transfer of resources and the institutional framework. Therefore, it is not possible completely to isolate a public facility from the rest of the system. This sets PETS/QSDS apart from the hospital cost function literature, which despite having a clear facility focus, is more or less analogous to the firm-survey based literature. In other words, the hospital cost function literature does not try to relate the problems of service delivery to upstream issues. 
In part, this reflects the fact that the literature has mainly looked at cost efficiency in hospitals in the United Kingdom and the United States, where "leakage" is perhaps less of an issue than efficiency.3 Perhaps more relevant, though, is the budding literature on cost efficiency and other performance indicators in clinics and primary health facilities in developing countries (Somanathan and others 2000; McPake and others 1999). This literature can and should have an important influence on the development of best-practice QSDS.

3. For a review of this literature, see Wagstaff (1989); Wagstaff and Barnum (1992); Barnum and Kutzin (1993).

Finally, PETS/QSDS explicitly recognize that agents in the service delivery system may have strong incentives to misreport (or not report) key data. These incentives derive from the fact that the information provided by, for example, a school or a health facility partly determines its entitlement to public support or funding. Also, where resources (including staff time) are used for other purposes (for instance, in cases of shirking or corruption), the agent involved in the activity is unlikely to report it truthfully. Moreover, certain types of information, such as official charges, may only partly capture what is intended to be measured (e.g., the users' costs of the service). PETS/QSDS deal with these data issues in two ways: (i) by using a multi-angular data collection strategy, that is, a combination of information from different sources; and (ii) by careful consideration of which sources and respondents have incentives to misreport, and identification of data and sources that are less influenced by these incentives.

C. Uses and applications of PETS/QSDS

Issues of particular interest to development practitioners include efficiency, quality of service, leakage of resources, political or administrative capture, moral hazard (such as shirking and ghost workers), asymmetric information, management, supervision, and distributional issues. Many countries formulate policies within a paradigm of large and ambitious public spending programs intended to address efficiency and equity concerns. Yet the implementation capacity of governments has seldom been systematically incorporated into the analysis of public expenditure priorities. PETS and QSDS are eminently suited for such tasks.

Figure 1. Tools of analysis of public expenditure: QSDS and PETS

Apart from diagnosis, the PETS and particularly the QSDS can be used for generating data for research. While the large literature on incentives in the context of contract and information theory focuses mostly on firms, it offers a number of applications to the public sector as well (Dixit 2000). PETS and QSDS can provide new data to test hypotheses on provider behavior underlying service delivery outcomes. Research into the economic behavior of households and firms has already yielded important insights, often with substantial policy implications. This suggests that research into public institutions, which in some countries deliver as much as half of GDP and which are directly responsible for the implementation of government policy, promises an equally rich research agenda.
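To make concrete the central quantity such a tracking survey estimates, the leakage rate at a given tier can be computed by comparing what the higher tier reports releasing with what the lower tier reports receiving. The sketch below is only an illustration: the tier names and amounts are hypothetical and are not drawn from any of the studies cited here.

# Hypothetical sketch of the leakage calculation a PETS supports.
# A real survey would take these amounts from disbursement and facility records
# and triangulate them across sources.
released = {"centre_to_district": 1_000_000, "district_to_facility": 620_000}
received = {"district": 620_000, "facility": 430_000}

def leakage_rate(amount_released: float, amount_received: float) -> float:
    """Share of released funds that never reached the next tier."""
    return 1 - amount_received / amount_released

district_leakage = leakage_rate(released["centre_to_district"], received["district"])    # 0.38
facility_leakage = leakage_rate(released["district_to_facility"], received["facility"])  # about 0.31
overall_leakage = leakage_rate(released["centre_to_district"], received["facility"])     # 0.57
print(f"district {district_leakage:.0%}, facility {facility_leakage:.0%}, overall {overall_leakage:.0%}")

A facility that reports receiving more than the tier above reports releasing would show up as a negative rate, which is itself a useful diagnostic of misreporting.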
Important empirical research questions that the PETS and the QSDS can help to answer:• How to design “right” incentives within the public sector (characterized by multiple principals and multiple agents) and the private sector, compatible withincreased quantity and improved quality of basic services, particularly for poorpeople?• How to measure and overcome problems of moral hazard, such as shirking and ghost workers?• How to strengthen voice mechanisms for service users and counter problems created by asymmetric information?• What type of accountability and oversight arrangements between various tiers of government and between government and beneficiaries can help improve basicservice delivery?Finally, information collected through PETS and QSDS allows empirical testing of various theoretical propositions pertaining to principal-agent relations within the public sector, bargaining and other resource allocation mechanisms, as well as asymmetric information and incentive problems and how they affect the quality and quantity of services delivered. Empirical research using PETS and QSDS data can be used not only to test various predictions from standard contract and incentive theories, but also to contribute to the development of new theories in this area for public sector agencies.Looking beyond country-specific studies, the compilation of systematic multi-country service delivery databases will also facilitate cross-country comparisons. The World Bank’s Development Research Group is presently of setting up such a database of QSDS surveys. As more data become available in the coming years, it is expected that empirical research in this area will grow.5. How to design and implement a PETS and QSDSA. GeneralPETS and QSDS are now being applied by the operational staff within the World Bank and other donors, researchers, and developing country governments. The intuitive appeal of PETS and QSDS, however, can belie the complexity involved in planning and implementing these surveys.First, the survey teams and stakeholders need to have a clear idea of why they are doing the study. When a new analytical tool is developed, there is a temptation to apply the tool, regardless of whether the situation on the ground warrants its application, and, indeed, without tailoring the instrument to the local conditions.Second, the planning, design, implementation, and subsequent analysis of PETS/QSDS data are complex undertakings. A successful study requires a continuous and careful hands-on effort in accordance with clearly defined objectives. In particular, survey managers should not underestimate the importance and the difficulties involved in obtaining quantitative data from facility records. Unlike household surveys, PETS/QSDS cannot be based on recall data but require consultation of records.Finally, there is a need to consult widely with the client government and other stakeholders about the objectives of the study. Lack of ownership may result in rejection of the findings. To ensure that the study meets its ultimate objective of informing policy reforms, it is essential that consensus building among stakeholders is nurtured prior to, during, and subsequent to execution.In light of the lessons from past surveys, we outline the steps involved in successful design and implementation of PETS and QSDS. Many steps are common for both surveys, given that PETS typically includes a facility component and QSDS needs to relate public facilities to the government system. 
Typical steps involved in designing these surveys include the following:

B. Fact-finding, consultations and purpose of the study

In the initial phase, the survey team needs to consult extensively with a wide array of in-country stakeholders, including government agencies (ministry of finance, sector ministries, and local governments), donors, and civil society organizations. Such consultations are likely to reveal relevant hypotheses about major problems in service delivery and hence define the purpose and objectives of the study, as well as the sector(s) to be included in the study. Given that the survey requires considerable effort, it is advisable to limit the number of sectors to just one or two. Until now, PETS and QSDS have mostly been carried out in the "transaction-intensive" health and education sectors.
Economics World, ISSN 2328-7144, April 2014, Vol. 2, No. 4, 281-289

Taxes, Loans, Credit and Debts in the 15th Century Towns of Moravia: A Case Study of Olomouc and Brno*

Roman Zaoral
Charles University, Prague, Czech Republic

The paper explores urban public finance in late medieval towns using the example of the two largest cities in Moravia—Olomouc and Brno. Its purpose is to define similarities and differences between them, to describe the changes which took place over the course of the 15th century, and to distinguish the financial administration and types of investment in towns situated in the Eastern part of the Holy Roman Empire from those in the West. The primary sources (municipal books, charters, and Jewish registers) are analyzed using quantitative and comparative methods, and the concept of the 15th century financial crisis is reconsidered. The analysis shows that each town within the Empire paid a fixed percentage of the total tax sum of central direct taxation through a system of repartition, so that each tax increase caused ever growing pressure on its finances. New taxes collected in Brno and Olomouc after 1454 were not proportional to the economic power and population of the two cities and gave preferential benefit to Olomouc. At the same time the importance of the urban middle classes as tax-farmers started to grow. They increasingly gained influence on the financial and fiscal regime, both through political emancipation and by serving as financial officials. The Jewish registers document a general lack of money in the 1430s and 1440s which played into the hands of Jewish usurers. Accounting records from the 1480s and 1490s, to the contrary, give evidence of the growth of loans, debts and credit enterprise. The restructuring of urban elites, caused by financial crises and social conflicts, centered on the wish for more efficient management of urban financial resources and more intensive control rights. This was a common feature of towns in the West just as in the East of the Empire. On the other side, the tax basis in the West was largely created by indirect taxes, while direct taxes prevailed in the East. Trade activities played a more important role in the West, whereas rich burghers in the East tended to invest in landed estates. The research also shows that the establishment of separate cashes is documented in the West only; the management of urban finance in the East remained limited to single-entry accounting.

Keywords: urban public finance, financial crisis, taxation, Jewish capital, late medieval towns, Moravia

Introduction

The study of public finances has received considerable attention during the last decade because of its key role in European state formation, serving as an instrument to extract the capital needed for the realization of political goals from the economic systems that formed the base of all public finances.

_____________________
* The paper was supported by The Ministry of Education, Youth and Sports of the Czech Republic—Institutional Support for Long-Term Development of Research Organizations—Charles University, Faculty of Humanities.
Roman Zaoral, Ph.D., Faculty of Humanities, Charles University.
Correspondence concerning this article should be addressed to Roman Zaoral, Charles University, Faculty of Humanities, U

With Stasavage's (2011) recent publication States of Credit, he has made a valuable contribution to the debate on the emergence of public credit as a decisive element in the state formation processes that took place in late medieval and early modern Europe. In his work, Stasavage emphasizes the importance of the geographic scale of political units and of the form of political representation within polities for access to capital markets and thus for the possibility of creating funded public debt in order to finance the consolidation or expansion of their relative position within political networks and regions. The foundation of this public debt was provided by the fiscal revenues originating from direct or indirect taxation.

Blockmans (1997) pointed out in this debate the importance of scale and timing with respect to local political representative structures. In the larger Flemish cities such as Ghent or Bruges, the participation of the middle classes in town governments, and thus control over public finances, developed at an earlier stage (the 14th century), whereas these developments in less-urbanized regions with smaller urban populations such as Holland and Guelders (and in fact in the whole Holy Roman Empire) did not occur until the 15th century. In this way, the hypothesis can be stated tentatively that the position of urban elites influenced the management of urban finances at large, and urban fiscal systems in particular. The degree to which urban elites were able to monopolize urban government also determined the room left for other intermediaries to have a say in the financial policies of a town and to take part in the management of the fiscal systems that were the basis of most urban finances. The socio-political backgrounds and the interplay between political elites, urban officials and tax farmers are thus an important topic for understanding the intricate mechanisms at the crossing point of the economic, social, political, and financial developments in late medieval urban society.

Research Questions

Attention is paid here to towns situated in the East of the Holy Roman Empire, namely in the Czech lands, in order to show what similarities and differences can be found between towns in the Western and Eastern parts of the Empire, and in which way and to what degree the 15th century economic and financial crises and social conflicts influenced the management of the urban fiscal systems (and the closely linked system of public debt) of two traditional capitals1 and at the same time the largest cities in Moravia—Brno and Olomouc—whose economic potential started to diverge markedly, particularly during the second half of the 15th century. In the period between 1420 and 1500, Olomouc, as the seat of the Moravian bishopric, grew from around 5,000 to more than 6,000 inhabitants at the turn of the 16th century, whereas Brno, the former seat of the Moravian Margraves, decreased from around 8,000 to less than 6,000 inhabitants during the same time (Šmahel, 1995, p. 360; Macek, 1998, p.
27).Research MethodsIt is the purpose of this paper to quantify data obtained from the analysis of municipal books, charters and Jewish registers relating to urban public finance in the late medieval cities of Moravia (Czech Republic),particularly in Olomouc and Brno, and to compare so the financial situation of towns situated in the Easternpart of the Holy Roman Empire with those in the West. The concept of the 15th century financial crisis isreconsidered.1The Moravian Diet, starting in the 13th century, the Moravian Land Tables and the Moravian Land Court were all seated in bothTAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 283 The political and economic difficulties which troubled the Margraviate of Moravia during the 15th century (the Hussite wars in the 1420s and 1430s and the Bohemian-Hungarian wars in the 1460s and 1470s) did notonly influence the fiscal system itself, mainly by creation of new taxes and by increase of the tax burden tocover the growing urban public debt. The financial crises, bankruptcies, and financial reforms also had animpact on the official involvement of the burghers and guilds in the management of the urban fiscal systems,following their relatively late political emancipation in the 15th century (Marek, 1965). Until that time, thelocal elites had been formed from the closed merchant oligarchy, which monopolized the town government,defended its own particularistic interests through privileged autonomy and controlled the urban finances.In the 15th century, the importance of urban middle classes as tax-farmers started to grow, they increasingly gained influence on the financial and fiscal regime, both through political emancipation as well asby serving as financial officials. They also demanded more insight in the financial management, both ofindirect taxation and the management of urban debt. They were given a central role in the financial reformsnecessary to face the growing tension between economic stagnation and the financial demands. In this way, theimpact of these socio-political changes on the management of the urban fiscal systems can be displayed.The concept of a financial crisis has recently been addressed by what is now known as the “New fiscal history”. The emergence of public finance, fiscal systems, and the creation of public debt are at the heart ofthese discussions: In this sense, a financial crisis occurs when expenditure structurally outweighs the normalrevenues from taxation and the ability to borrow money in order to meet current financial obligations (Bonney,1995, pp. 5-8). The 15th century is generally seen as a period of structural political and economic crisis notonly in the West of the Empire, in the Low Countries (Van Uytven, 1975), but also in the East, in the Czech All Rights Reserved.lands (Šmahel, 1995, pp. 208-220). This crisis also had consequences for urban public finance and itsmanagement. Each town within the Empire had to pay a fixed percentage of the total tax sum of central directtaxation through a system of repartition and so the increased tax burden had forced several towns to sellannuities on an unprecedented scale, because these sums were paid directly through the urban finances(Blockmans, 1999, pp. 287, 297-304). 
Thus, central direct taxation indirectly tapped into the financial resources of the towns, which in turn led to ever growing pressure on urban finances, causing an increase in urban indirect taxation to cover the funded debt created by these annuity sales.

Analysis

Urban Public Finance During the 15th Century Financial Crisis

Olomouc, the leading royal city in Moravia, which exceeded Brno in population size at the end of the 15th century, was a craft town producing for export on one side and a consumer town on the other (Marek, 1965, p. 125). The urban population grew particularly in the 1450s and 1460s due to newcomers from Silesia and North Moravia. Some 85 percent of them were craftsmen, mostly cloth weavers, but also representatives of other textile, food, shoe, leather, and wood-processing crafts belonging to the socially weaker groups, while the number of merchants was much smaller (Mezník, 1958, pp. 350-353). In terms of economic structure, Olomouc was close to Breslau in Silesia.

In the 1420s, catholic Olomouc spent a lot of money on its defense against attacks by the Hussite troops, on building wooden fortifications, and on its own mercenary troops. During the fighting, its burghers had to dispatch city troops and provide armor for the king, all beyond the usual yearly tax. In 1424, for example, interest on the war debts of the city exceeded 200 marks, which substantially strangled its trading activities (Nešpor, 1998, p. 79). In connection with the blockade of Olomouc in the second half of the 1420s, long-distance trade was put at risk. The city council covered its expenses by selling real estate and the yearly pays of altar servers, and by loans from the Jews as well as from its own burghers. The Jewish loans were, however, burdened with high interest, and the council used them only once, for the war with the Hussites.2 Loans from its own burghers at the so-called fair credit of up to 10 percent were more advantageous.

The Role of Jewish Capital

Jewish capital nevertheless represented an important reservoir of financial means for many inhabitants. The surviving Olomouc Jewish register for 1413-1428, which makes it possible to look into the practice of lending money, gives evidence of Jewish loans to craftsmen, merchants, and shopkeepers (Kux, 1905).3 However, many other hidden loans, made with the active participation of Christian inhabitants, did not enter the register at all. The credit took three forms: loans in cash, pledge loans, or trade transactions with goods. As it was necessary to sell unredeemed pawns, the usurer became a shopkeeper, and his small shop was at the same time a junk shop.

The Jews usually required one groschen a week as interest on each shock or mark of silver, which they justified by high taxes and other charges. The average yearly cost of credit thus reached 86 percent on one shock (= 60 Prague groschen) and 81 percent on one mark (= 64 Prague groschen, half a pound). The debtor therefore paid 112 groschen per year on one shock and 116 groschen on one mark (Kux, 1905, pp. 24-25).
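These figures can be checked with a line of arithmetic, assuming the 52-week year implicit in the source; the sketch below only restates the numbers quoted above.

# Check of the interest rates quoted above: one groschen per week on each unit of principal.
WEEKS_PER_YEAR = 52
annual_interest = WEEKS_PER_YEAR * 1               # 52 groschen of interest per year

shock = 60                                         # Prague groschen in one shock
mark = 64                                          # Prague groschen in one mark
rate_on_shock = annual_interest / shock            # ~0.867, the "86 percent" of the text
rate_on_mark = annual_interest / mark              # ~0.813, the "81 percent" of the text
paid_per_year_on_shock = shock + annual_interest   # 112 groschen (principal plus a year's interest)
paid_per_year_on_mark = mark + annual_interest     # 116 groschen
print(rate_on_shock, rate_on_mark, paid_per_year_on_shock, paid_per_year_on_mark)

The totals of 112 and 116 groschen are thus simply the principal plus one year's interest at the weekly rate.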
At lower installments, the interest could farther go up and when installments have not been paid at allthe debt grew in geometric progression.All Rights Reserved.Unless the debtor was not able to refund an amount, which he had loaned, the creditor could exact some goods in the loan value, such as expensive cloth, furs, gold jewels, silver dishes, horses, or cattle. Carpets,armor, wine barrels, or real estate are documented among pledges as well. A number of the Olomouc Jews,having been engaged in trade with money, ranged between 12 in 1413 and 20 in 1420. Some of them granted10, the others 100 and more loans. Forty percent of all deposits were entered by Solomon, the richest Jewishcreditor in Olomouc. The most frequently lent amounts ranged between one and six pounds, the lowest loanmade 10 groschen, the largest 100 marks, which was a value of two or even three houses located in the centerof Olomouc(Veselý & Zaoral, 2008, pp. 40-41).A general lack of money among inhabitants, particularly in the post-Hussite period in the 1430s and 1440s,played into hands of the Jewish usurers. A high number of small debts in range between one and 10 shocks(mostly three-five shocks) and a small number of big debts were typical for that period. The fact that amountsof the two thirds of debtors represented only 13 percent of all debts gave evidence on the general becomingpoor of population. On the other side, the only entry of the sum of 600 Hungarian florins, which Johann vonAachen owed to Johann Weigle, represented 42 percent of all declared owed money in the 1440s (Zaoral, 2009,p. 111). In the 1450s, a number of the highest (above 100 shock of silver) and of the lowest debts (under oneshock of silver) increased, which was the evidence on a slight economic recovery and on the graduallyincreasing social differentiation of the Olomouc inhabitants. Superiority in single-entry accounting, onedebtor-one creditor relations, attests, however, insufficiently developed finance in the milieu of guild craftsmenand shopkeepers. The Jewish credit represented, to the contrary, a source of more flexible forms of enterprise.2Olomouc District State Archive, Olomouc City Archive, books, sign. 164, fol. 235r.TAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 285 Despite a danger of large indebtedness, some wealthy people were lending money from the Jews for more times. The owner of the magistrate mansion Wenceslas Greliczer is entered into the Jewish register even 26times. He made loans from more usurers at the same time, going once in cash, going twice he bought horses oncredit and at another time he pledged silk bedding or pearl bracelet of his wife. The Greliczer family, whichplayed a leading role in the city for some 10 years, had at the end to sell all its property and after 1430 itdisappeared out of stage (Kux, 1905, p. 27). The presence of a number of other prominent councilors in theJewish register was a symptom of their later financial bankruptcy, which strengthened anti-Jewish mood in thecity.A lack of money among burghers occurred again in the 1440s as it was evident from the Olomoucmemorial book dated back to 1430-1492 (Spáčilová & Spáčil, 2004). Particularly the year 1442 was critical formany inhabitants as it was evident from the number of loans. In response to rapidly worsening financialsituation, the council decided in 1446 to grant loans in an amount of 10 pounds of silver for damage reduction(Zaoral, 2009, p. 112). 
Some years later, in 1454, Ladislaus Posthumus, King of Bohemia (1453-1457),expelled the Jews from Moravian royal towns.Urban TaxesThe annual tax in an amount of 600 marks of groschen, collected in Brno and Olomouc, was slightly changing during the 15th century. In 1437 the margrave Albrecht II of Austria (1437-1439) cut Olomouc urbantaxes as a reward for help in fight against the Hussites. After 1454 the annual tax in Olomouc decreased from600 to 587 marks and 40 groschen and this amount remained unchanged until 1526. On the contrary, the tax in All Rights Reserved.Brno increased from 600 to 656 pounds 16 groschen as a recompense for the expelled Jews. Such a tax burdenwas not proportional to the economic power and population of both cities and gave preferential benefit toOlomouc (Dřímal, 1962, pp. 86-87, 116; Zaoral, 2009, pp. 107-109). The different economic potential ofOlomouc and Brno could be also seen from a number of yearly markets which increased in Olomouc from twoup to three and decreased in Brno from four to two (Šebánek, 1928, p. 51; Čermák, 2002, pp. 25-27).Despite the fact that the city had to pay war debts to private creditors with difficulty and for a long time, the standard of living of the urban population in Olomouc was gradually increasing during the 15th century.The municipal tax-payers growth was so big in the second half of the 15th century that collected moneyexceeded the municipal tax amount more than three times (Dřímal, 1962, pp. 122-123). To the contrary, anumber of taxpayers in Brno was decreasing during the whole 15th century and still in 1509 their number wasunder the level of the year 1432. At the same time a number of members of the middle strata, poor craftsmenand sole traders even decreased on one half of the state in 1365 (Dřímal, 1964, pp. 277-280).The administration of urban finances was characterized by a disorganized evidence of assets and liabilities.The municipal collection (the so-called losunga), collected from all town inhabitants with a sufficient propertybase, represented a relevant quota of municipal incomes. The “losunga” amount, paid from a concrete house,was determined by three criteria: a built-up surface, a house location, and an existence of the certain rightconnected with the house. This amount was intentionally undervalued; it did not reflect price fluctuations ofreal estates and remained more or less the same. Craft plying, beer and wine sale, lucrative cloth trade, andannuities were a subject of taxation as well. But only a part of collected money flew to the royal treasury, whichwas often used as pledges for aristocrats. In 1514, pledges in Olomouc were even higher than an amount of themunicipal tax (Dřímal, 1962, p. 93).TAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 286The increasing purchases of houses on the basis of the Law of Emphytheusis4represented another serious problem. Some noblemen ignored compulsory payments from these properties. It caused conflicts withburghers, but the pressure put by the urban representation was only partly successful.In the 1490s, the citycouncil expressed concern about the fact that it supported high nobility and clergy from its own money andjoined insurgent burghers. In the early 16th century, the city found a solution how to get rid of unwelcomecreditors. In 1508, it offered the Bohemian king Vladislaus II (1471-1516) to pay off pledged revenues. Thecity used them to its own benefit for 20 years (Dřímal, 1962, pp. 
89-94).Under these circumstances, taxes and administrative fees, which the town succeeded to buy back from the ruler, gained importance all the more. Provincial castle tolls, customs duties (ungelt), and bridge tolls belongedto the most important. The incomes from the town overhead business and from various financial operationsincreased. The town council bought up villages and yards in the immediate walls surroundings. However, thereal value of charges from the town villages was gradually decreasing because charges did not reflect adecrease of the payment power of money in circulation. Thanks to completion of the large farm system in thetown ownership at the end of the 15th century, Olomouc was offered a considerable space for series ofactivities. The incomes from brewing and fish farming were not negligible as well. But the main share ofmunicipal incomes was represented by money paid from the town property and toll (Kux, 1918, pp. 12-13).Despite a varied scale of incomes, the town was never endowed with large sums in cash. Practically all gained money was immediately given out. Particularly taxes as a relevant phenomenon of municipal economicswere draining big amounts of money from the city budget.Superiority of weight unites (marks) over numeric units (shocks of groschen) in all types of written All Rights Reserved.sources gave evidence on a general lack of quality coins. In the 1450s, when the financial crisis culminated, theking Ladislaus gave the burghers of Olomouc permission to repay loans in the petty coins and thereby made awidely used practice legal.5At the same time, gold coins, which replaced counting in marks, have started topenetrate into everyday life since the 1440s. The Olomouc burghers repaid two thirds of their loans in silverand one third in gold. The creditors accepted as a general rule groschen coins from the craftsmen andHungarian florins from the merchants. Payments in goldguldens occurred rarely in the sources. The increaseddemand for gold coins reflected a contradiction between a long-term lack of gross silver coins in circulationand a necessity of the financial covering of trade transactions with real estates and credit (Zaoral, 2009, pp.118-119).The oldest surviving tax register in Olomouc came from 1527. According to it, about two thirds of taxpayers paid less than eight groschen, middle class (about 25 per cent) paid 10 to 26 groschen from the taxbase of 1.5 to four marks and eight percent of wealthy people paid 32 to 102 groschen from the tax base of fiveto 16 marks. These 89 richest burghers owned more than 40 percent of all taxed property. The tax was paid by1,096 persons. Among that, there were about 25 percent of cottars, who did not own any immovable property.Even when it takes into account that most payers also paid the craft tax in an amount of six groschen, a totalaverage levy did not exceed 20 groschen per head (Szabó, 1983, p. 57). Thus the tax burden itself was not highin the case of at least minimal incomes. It ranged on the level of some percent of yearly income. Much bigger4The Law of Emphyteusis is a feudal form of a hereditary land lease. 
It is a right, susceptible of assignment and of descent,charged on productive real estate, the right being coupled with the enjoyment of the property on condition of taking care of theestate and paying taxes, and sometimes the payment of a small rent.TAXES, LOANS, CREDIT AND DEBTS IN THE 15TH CENTURY TOWNS OF MORA VIA 287 damage was caused to city population and to the royal treasury by reduction of the groschen value and by riseof prices. Unlike Brno, in the long-term low share of persons, having been unable to pay a municipal tax, andan increasing share of poor journeymen and cottars in the urban population are apparently other valuableindicators of growing prosperity in Olomouc (Veselý & Zaoral, 2008, pp. 48-51).Urban Public Finance at the Turn of the 16th CenturyLoans and debts started to increase after the overcoming of financial crisis and the losses reduction from the Bohemian-Hungarian war. The total volume of money in circulation increased particularly in the 1480s and1490s, when debt amounts usually reached even some hundreds of florins.6Since the second half of the 15thcentury, the credit enterprise has been closely connected with trade. Bills of debt and entries into shopkeeper’sregisters became the most common record form of loans. Objections to Christian usurers, which lent moneyinstead of the Jews, were frequent. A number of wealthy burghers sold pays with interest, lent money on credit,or practised open usury. Some amounts were quite high, when, for example, the town Mohelnice borrowed 300marks of silver on 10 percent interest from Nicolas Erlhaupt, burgher of Olomouc.7The city council ofOlomouc also conducted financial business. In 1509, for example, it bought from Wolfgang of Liechtenstein(1475-1525), the owner of Mikulov (Nikolsburg), annuities from the South Bohemian town Pelhřimov(Pilgrams) and became a tax collector there (Zemek & Turek, 1983, p. 507).Tradesmen who could not invest money to trade started to speculate with land estates, for example, the Salzer family held a hereditary magistrate mansion and two villages. Speculations with land estates weretypical also for the city council. While in the early 15th century Olomouc owned six villages, at the end of the All Rights Reserved.same century the extent of the city landed property increased twofold to 12 villages and this upward trendcontinued also in the 16th century (Papajík, 2003, p. 51). A lot of money was spent for various buildingactivities (reconstruction of the municipal hall, new buildings of monasteries, churches and chapels) as well asfor hospitals and other forms of social care (Kuča, 2000, pp. 650-652).Corruption, monopolization of the brewing and other rights as well as bias in favor of guilds were the causes of disputes between elites and other segments of the urban population. It led to open revolts of thecommunity against shopkeepers in 1514 and once again in 1527. A letter of complaint, sent to the king in1514, referred to economic privileges of shopkeepers, free market right, beer prices, and mile right(Dřímal, 1963, pp. 133-142). A strong core of the old type patricians in Olomouc caused the craftsmengained control over urban finances only in the mid-16th century, while in Brno they occurred in the citycouncil already in the 15th century (Kux, 1942, pp. 190-197; Mezník, 1962, pp. 291, 302-306; Szabó, 1984,pp. 
68-73).

Conclusions

Financial crises and social conflicts were aimed at restructuring the power of the old ruling elites, and were ultimately centred on the wish for a more efficient management of urban financial resources and more intensive control rights for those urban social groups that provided the capital for the realization and protection of "common" urban interests. It was a common feature for the cities in the West just as in the East of the

6 Disputes over repayments of debts, debated before the councillors of Breslau, can serve as an example. In 1485-1496 the amounts of bills of debt of Olomouc burghers ranged between 30 and 700 florins. See Olomouc District State Archive, Olomouc City Archive, books, sign. 6671, fol. 2r-18v.
The value of coskewness in mutual fund performance evaluationDavid Moreno,Rosa Rodríguez *Department of Business Administration,University Carlos III,C/Madrid,126,Getafe 28903,Spaina r t i c l e i n f o Article history:Received 5March 2008Accepted 31March 2009Available online 7April 2009JEL classification:G11G12G23Keywords:Coskewness Mutual fundsPerformance measuresa b s t r a c tRecent asset pricing studies demonstrate the relevance of incorporating coskewness in asset pricing mod-els,and illustrate how this component helps to explain the time variation of ex-ante market risk premi-ums.This paper analyzes the role of coskewness in mutual fund performance evaluation and finds evidence that adding a coskewness factor is economically and statistically significant.It documents that coskewness is sometimes managed and shows persistence of the coskewness policy over time.One of the most striking results is that many negative (positive)alpha funds,measured relative to the CAPM risk adjustments,would be reclassified as positive (negative)alpha funds using a model with coskewness.Therefore,performance ranking based on risk-adjusted returns without considering coskewness could generate an erroneous classification.Moreover,some fund characteristics,such as turnover ratio or cat-egory,are related to the likelihood of managing coskewness.Ó2009Elsevier B.V.All rights reserved.1.IntroductionAfter more than 40years of research,the problem of how to evaluate active portfolio management remains largely unresolved.Classic performance measures proposed by Treynor (1965),Sharpe (1966),Jensen (1968)were developed assuming a normal distribu-tion of returns.During the 1970s,others realized that these perfor-mance measures did not evaluate fund performance accurately because the distribution of fund returns was not Gaussian.Thus,Klemkosky (1973)and Ang and Chua (1979)demonstrated that ignoring the third moment of the return distribution would gener-ate a bias in the performance evaluation.This bias could affect investors directly by leading them to create portfolios with a sub-optimal asset allocation.1From the asset pricing literature,several authors also point out the convenience of using models with higher moments.Kraus and Litzenberger (1976)document the importance of considering the third moment (skewness)of returns.They develop a model in which investors are compensated for holding systematic risk and coskewness risk,requiring a higher (lower)return whenever thesystematic risk is higher (lower)and the coskewness risk is lower (higher).The negative price of risk in the second component indi-cates that investors dislike assets with negative coskewness that requires higher returns.For those unfamiliar with coskewness,an asset with negative coskewness is an asset that,when incorporated into a portfolio,adds negative skewness,increasing the probability of obtaining undesirable extreme values (in the left tail of the distribution).Given this agreement regarding the importance of higher mo-ments from both the mutual fund and asset pricing literatures,re-cent studies have started to develop new performance measures.As hedge funds can employ dynamic strategies such as leverage,short-selling,and investment in illiquid assets,it seems clear that return distributions will be non-normal and that therefore these measures will have a significant effect (see Ranaldo and Favre,2005;Ding and Shawky,2007).It is less evident that these mea-sures also generate changes in the performance of common mutual funds unable to use 
those strategies.2Moreno and Rodríguez (2006)have taken the first step of introducing coskewness in mutual fund evaluation for the Spanish case.They find some performance differ-ences when these new measures are taken into consideration.The major contribution of this paper is to provide empirical evi-dence about the role of coskewness in evaluating mutual funds.This evidence is not yet addressed in the published literature.It0378-4266/$-see front matter Ó2009Elsevier B.V.All rights reserved.doi:10.1016/j.jbankfin.2009.03.015*Corresponding author.Tel.:+34916248641;fax:+34916249607.E-mail addresses:jdmoreno@emp.uc3m.es (D.Moreno),rosa.rodriguez@uc3m.es (R.Rodríguez).1Along the same lines,other authors such as Prakash and Bear (1986)or Leland (1999)have developed performance measures incorporating skewness,and Stephens and Proffitt (1991)generalize the performance measure to account for any number of moments.All these results indicate that ignoring higher moments could have a significant impact on the performance rankings of these funds.2It must be noted that fund managers could use,for example,derivatives (see Koski and Pontiff,1999;Frino et al.,2009),biasing the distribution of fund returns to the left or right and generating coskewness in the return distribution.Journal of Banking &Finance 33(2009)1664–1676Contents lists available at ScienceDirectJournal of Banking &Financej o u r n a l h o m e p a g e :w w w.e l s e v i e r.c o m /l o c a t e /j bfis provided by examining:(a)the changes in the average fund per-formance;(b)the variations in the ranking of mutual fund manag-ers;(c)the relationships between characteristics such as portfolio turnover,size,category,and coskewness management;and(d)the possibility that some fund managers may profit from the coskew-ness spread.Hereafter,‘‘managing coskewness”refers to having a specific policy regarding the assets incorporated into the fund’s portfolio to achieve higher or lower portfolio coskewness.In order to thoroughly study the relevance of including the third co-moment of asset returns in performance evaluation,two differ-ent multifactor asset pricing models are considered:the CAPM and the Carhart(1997)four-factor model.In both models,a coskew-ness factor is added and the best adjustment of risk3is sought. The use of a coskewness factor is based on the results from the asset pricing literature,which has demonstrated the convenience of using coskewness models instead of the popular Fama and French(1993) three-factor model.Thus,Harvey and Siddique(2000)test the three-moment CAPM’s implication that a stock with a negative coskewness with the market will earn a higher risk premium.They form a coskewness factor following the methodology that Fama and French use in constructing the SMB and HML factors andfind that coskewness is economically significant.Barone-Adesi et al. 
(2004) use a quadratic model and find that additional variables representing portfolio characteristics (such as those considered in the Fama and French model) have no explanatory power for expected returns when coskewness is taken into account. Chung et al. (2006) suggest that higher order co-moments are important for risk-averse investors concerned about extreme outcomes. The authors also find that the risk factors of Fama and French approximate these higher order co-moments, especially when using low frequencies. Vanden (2006) also points out that SMB and HML measure coskewness risk, but that they are imperfect proxies. More recently, Smith (2007) finds that while the conditional two-moment CAPM and the conditional Fama and French three-factor model are rejected, a model that includes coskewness is not rejected by the data.4

This paper yields revealing results. First, it finds that the coskewness factor is both economically and statistically significant. Second, the average fund performance changes when coskewness is taken into account; this change is greater when compared with the CAPM alpha (the average alpha for all equity funds is shifted to the left by more than double) than with a Carhart model (the average alpha is modified by approximately 6%). Third, in general, these movements in the alpha might affect categories of equity mutual funds in different ways, so that in this sample the Aggressive Growth funds are made to look better while the rest look worse. Fourth, as those variations in performance will have a different sign depending on the loading on the coskewness factor, a ranking based on risk-adjusted returns without considering coskewness might result in a misleading classification of the funds. Moreover, one of the most striking results is that many negative (positive) alpha funds measured relative to the CAPM risk adjustments would be reclassified as positive (negative) alpha funds using the CAPM plus coskewness. Fifth, those managers using a specific policy for managing coskewness repeat the same policy over time; thus persistence in the coskewness policy appears in the majority of the time periods. Sixth, fund turnover and fund category are related to having a specific policy of managing coskewness.

The remainder of this paper is organized as follows: the following section describes the coskewness measure and the models used to analyze its effect on performance evaluation. Section 3 presents the database of mutual funds and the benchmarks used. Section 4 provides the empirical evidence. Summaries and conclusions are presented in Section 5.

2. The effect of the coskewness factor on performance evaluation

2.1. The coskewness

Kraus and Litzenberger (1976) extend the CAPM to incorporate the effect of skewness in asset pricing, developing the three-moment CAPM (3MCAPM). Thus, in equilibrium, the expected returns of a risky asset satisfy:

R_i - R_f = k_1 \beta_i + k_2 \gamma_i,   (1)

where R_i is one plus the expected return of the risky asset, R_f denotes one plus the return of the risk-free asset, \beta_i is the systematic risk, and \gamma_i indicates the systematic skewness (standardized coskewness) of the asset, a measure of the asset's coskewness risk.5 The risk premiums of each risk factor are k_1 and k_2. Therefore, investors are compensated by the expected excess returns for bearing the relative risks measured by beta and gamma. Given that the investor requires higher returns for securities with higher betas, a positive risk premium, k_1 > 0, is expected. However, there would be a negative risk premium for assets with positive systematic skewness, k_2 < 0.
From an empirical point of view, asset pricing models can be tested through the restrictions that they impose on the coefficients of the return generating process. Thus, the return generating processes consistent with the CAPM and the 3MCAPM are the market model and the quadratic model, respectively. Whereas the market model assumes that the return of a risky asset is linearly related to the return of a stock index representative of the market, the quadratic model establishes a nonlinear relationship expressed as:

R_{i,t} - R_{f,t} = c_{0i} + c_{1i}[R_{M,t} - R_{f,t}] + c_{2i}[R_{M,t} - \bar{R}_{M,t}]^2 + \nu_{i,t}.   (2)

The estimation of c_{2i} in the quadratic model (2) gives a coskewness measure. Through the use of a partitioned regression argument (Frisch–Waugh–Lovell theorem) it is easy to verify that c_{2i} is equal to E(e_{i,t+1} e^2_{M,t+1}) / V(e^2_{M,t+1}), where e_{i,t+1} represents the residuals from the regression of the excess return on the contemporaneous market excess return and e_{M,t+1} represents the residuals of the excess market return over its mean.

Harvey and Siddique (2000) compute the standardized unconditional coskewness as

S_i = E(e_{i,t+1} e^2_{M,t+1}) / ( \sqrt{E(e^2_{i,t+1})} \, E(e^2_{M,t+1}) ),   (3)

where e_{M,t+1} and e_{i,t+1} are defined as above.

3 An alternative way to take into account the skewness of the return distribution could be in an equilibrium framework like that of Leland (1999). However, this performance measure would require two assumptions: the rate of return on the market portfolio must be independently and identically distributed and perfect markets must exist. Moreover, many of the econometric problems related to the estimation of the CAPM alpha will also be present in estimating this performance measure, including finding an appropriate proxy for the market portfolio (as mentioned in Leland (1999), footnote 22). In contrast, multifactor pricing models, such as the ones proposed in this paper, are not subject to those problems.

4 Coskewness is also considered relevant in some other economic areas. For example, Vines et al. (1994) study the importance of coskewness in the pricing of real estate and Christie-David and Chaudhry (2001) examine it in explaining the return-generating process in futures markets. Bali et al. (2008) investigates the role of conditional skewness in the estimation of conditional VaR. Post et al. (2008) focuses on the 3MCAPM and the economic meaning of the coskewness premium.

5 According to Kraus and Litzenberger (1976) the expressions are: \beta_i = E[(R_i - \bar{R}_i)(R_M - \bar{R}_M)] / E[(R_M - \bar{R}_M)^2] and \gamma_i = E[(R_i - \bar{R}_i)(R_M - \bar{R}_M)^2] / E[(R_M - \bar{R}_M)^3], where \gamma_i is defined as the ratio of the coskewness of that asset's return and the market's return to the market's skewness. In the same way that the covariance (the numerator of beta) represents the marginal contribution of an asset to the variance of the market portfolio, the coskewness (the numerator of gamma) represents the asset's marginal contribution to the skewness of the market portfolio.
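Both coskewness measures reduce to a few lines of code. The following is a minimal illustrative sketch (not taken from the paper; the return series are synthetic stand-ins for one asset's and the market's monthly excess returns) that computes the standardized unconditional coskewness of Eq. (3) and the quadratic-model coefficient c_{2i} of Eq. (2):

    # Illustrative sketch only: toy data, plain OLS via numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    r_m = rng.normal(0.005, 0.04, 240)             # market excess returns (synthetic)
    r_i = 0.9 * r_m + rng.normal(0.0, 0.02, 240)   # asset excess returns (synthetic)

    # e_i: residuals of the asset excess return on the market excess return.
    X = np.column_stack([np.ones_like(r_m), r_m])
    coef, *_ = np.linalg.lstsq(X, r_i, rcond=None)
    e_i = r_i - X @ coef
    # e_M: excess market return minus its mean.
    e_m = r_m - r_m.mean()

    # Standardized unconditional coskewness, Eq. (3).
    s_i = np.mean(e_i * e_m**2) / (np.sqrt(np.mean(e_i**2)) * np.mean(e_m**2))

    # Alternative measure: c_2i from the quadratic model, Eq. (2).
    Xq = np.column_stack([np.ones_like(r_m), r_m, (r_m - r_m.mean())**2])
    c0, c1, c2 = np.linalg.lstsq(Xq, r_i, rcond=None)[0]

    print(f"S_i = {s_i:.4f}   c_2i = {c2:.4f}")

With real data the two measures need not coincide numerically, but, as noted above, they rank assets very similarly.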
The information given by the coskewness allows a risk factor to be constructed in the same way that the Fama and French (1993) factors are constructed. This risk factor can be replicated by a portfolio of assets. In order to elaborate on this factor, it is necessary to compute the coskewness measure for each asset and then use it to rank the assets. The assets form two portfolios: one contains the 30% of the assets that have the most negative coskewness (S−) and another contains the 30% of the assets with the most positive coskewness (S+). The return spread of the two portfolios (S−–S+) and the return spread of the portfolio S− and the risk-free rate (S−–R_f) are the coskewness risk factors (CSK).

2.2. The models

In order to analyze the effect of the coskewness factor on performance evaluation, the standard CAPM and the Carhart (1997) four-factor model are used as base cases. The Carhart (1997) four-factor model uses the three factors of Fama and French (1993) plus one that captures the momentum effect. Here, the four-factor model (FF4) is used to adjust the performance of the fund for the regularities found in financial returns. Thus, the models are

R_{i,t} - R_{f,t} = a_i + b^m_i [R_{M,t} - R_{f,t}] + e_{i,t},   (4)
R_{i,t} - R_{f,t} = a_i + b^m_i [R_{M,t} - R_{f,t}] + b^{smb}_i SMB_t + b^{hml}_i HML_t + b^{wml}_i WML_t + e_{i,t},   (5)

where {R_{M,t} - R_{f,t}, SMB_t, HML_t, WML_t} represents the market, size, book-to-market value, and momentum factors. When a coskewness factor is included,

R_{i,t} - R_{f,t} = a_i + b^m_i [R_{M,t} - R_{f,t}] + b^{csk}_i CSK_t + e_{i,t},   (6)
R_{i,t} - R_{f,t} = a_i + b^m_i [R_{M,t} - R_{f,t}] + b^{smb}_i SMB_t + b^{hml}_i HML_t + b^{wml}_i WML_t + b^{csk}_i CSK_t + e_{i,t}.   (7)

To study the effect of adding this new factor to the traditional Jensen's alpha, the coskewness risk must be considered in the same way as the systematic market risk. Just as greater returns are required for portfolios (and thus managers) with larger systematic risks (betas), in a model that includes coskewness, greater returns are required for portfolios (managers) with larger coskewness risks.

To illustrate, evaluate two mutual funds with an annual abnormal return of 3% as measured by the classic Jensen's alpha, a_A = a_B = 0.03. Manager A has over-weighted the portfolio with assets that have negative coskewness and has therefore obtained a spread by coskewness, whereas Manager B has not given special consideration to coskewness. According to (6), the loading parameter that captures the coskewness risk must be positive for Manager A (e.g. b_{CSK} = 0.20) and zero for Manager B. Because investors dislike negative-coskewness assets, which therefore require greater returns, the coskewness factor must have a positive mean (e.g. 0.10), so the final alpha for Manager A will be lower than the alpha for Manager B. Therefore, the abnormal return obtained would be a_{CSK,A} = a_A − b_{CSK}(\overline{CSK}) = 0.03 − 0.20(0.10) = 0.01 for Manager A and a_{CSK,B} = a_B − b_{CSK}(\overline{CSK}) = 0.03 for Manager B. In this example, the manager who tries to profit from the coskewness spread achieves the worse performance. The reason is that alpha is a performance measure that considers risk-adjusted returns, so that just as greater returns are required of a manager assuming larger market risk (and therefore a greater b^m, reducing his Jensen's alpha), greater returns are required of the manager assuming a larger systematic risk of skewness. To complete the argument, it is important to note that Manager A adds negative skewness to the portfolio by incorporating negative coskewness assets, which is an undesirable situation for investors. Thus, a correct measure of abnormal returns must penalize this strategy.
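To make the factor construction and model (6) concrete, here is a minimal sketch under simplifying assumptions (equal-weighted 30%/30% sorts, synthetic data); none of the variable names or data come from the authors:

    # Illustrative sketch only: build an (S- – S+) coskewness factor from a
    # synthetic cross-section of stock excess returns and estimate model (6).
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 240, 500                                        # months, stocks
    r_m = rng.normal(0.005, 0.04, T)                       # market excess returns
    stocks = 0.8 * r_m[:, None] + rng.normal(0, 0.05, (T, N))
    fund = 0.9 * r_m + rng.normal(0, 0.01, T)              # fund excess returns

    def coskew(r_i, r_m):
        """Standardized unconditional coskewness, Eq. (3)."""
        X = np.column_stack([np.ones_like(r_m), r_m])
        e_i = r_i - X @ np.linalg.lstsq(X, r_i, rcond=None)[0]
        e_m = r_m - r_m.mean()
        return np.mean(e_i * e_m**2) / (np.sqrt(np.mean(e_i**2)) * np.mean(e_m**2))

    s = np.array([coskew(stocks[:, j], r_m) for j in range(N)])
    lo, hi = np.quantile(s, [0.30, 0.70])
    s_minus = stocks[:, s <= lo].mean(axis=1)   # 30% most negative coskewness
    s_plus = stocks[:, s >= hi].mean(axis=1)    # 30% most positive coskewness
    csk = s_minus - s_plus                      # the (S- – S+) factor

    # Model (6): regress the fund excess return on the market and CSK factors.
    X6 = np.column_stack([np.ones(T), r_m, csk])
    alpha, b_m, b_csk = np.linalg.lstsq(X6, fund, rcond=None)[0]
    print(f"alpha = {alpha:.4f}, b_m = {b_m:.3f}, b_csk = {b_csk:.3f}")

A fund whose b_csk is positive is then held to the higher return standard described in the Manager A example above.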
2.3. Conditional performance evaluation

The models explained above use unconditional expected returns and are based on the assumption that factor loadings are constant. However, if expected returns and risks vary over time, such an unconditional approach may give biased results. Chen and Knez (1996) and Ferson and Schadt (1996) advocate conditional performance evaluation (CPE). In their one- and multi-factor models, factor loadings (betas) are conditioned on public information variables. The resulting conditional Jensen's alphas represent the average difference between the return of a fund and the return of the dynamic strategies based on public information.6

Therefore, in this paper, the previous models are analyzed in a conditional and an unconditional framework, given the existing evidence that asset pricing models need to be conditional since expected returns vary over time. This analysis contributes to the conditional performance evaluation literature by presenting the differences between conditional and unconditional performance evaluation when coskewness is included.

3. Data and benchmarks

3.1. Fund returns

The database used in this study consists of monthly returns for 6819 US equity mutual funds between January 1962 and December 2006, obtained from the Center for Research in Security Prices (CRSP). These mutual funds are classified into three categories: Aggressive Growth, Growth Income Funds, and Long-Term Growth Funds.

Table 1 provides a complete economic and statistical description of the database. Presented in rows for each category are the annualized mean return, risk (standard deviation), kurtosis, minimum and maximum monthly return during the entire sample period, and the percentage of funds for which the null hypothesis of normality, using a Jarque–Bera test, is rejected. The table also shows the total number of mutual funds in each category in intervals of 36–84, 84–120, 120–156, 156–288, more than 288, and more than 432 observations.

The figures in Table 1 indicate that the kurtosis is, on average, higher than 3 (the value under the null of a normal distribution) and the null hypothesis of normality is rejected for approximately 48% of mutual funds (51% of Aggressive Growth, 53% of Growth Income Funds, and 44% of Long-Term Growth Funds). According to this result, the use of a performance measure based on normality should be questioned.

3.2. Benchmark portfolios

This study uses the CRSP NYSE/AMEX/NASDAQ value-weighted index as the market portfolio. The monthly series of SMB, HML, and WML factors obtained from Kenneth French's website7 is used to capture the effects of size, book-to-market value, and momentum.
The short-term risk-free security is the1-month Treasury bill(from Ibbotson Associates).The predetermined variables used as instru-ments in the conditional models are:(1)the lagged level of the 6Christopherson et al.(1998)propose a refinement of the conditional performanceevaluation.Introducing time variation in alpha makes it possible to determine whether managerial performance is indeed constant or varies over time as a function of the conditional information.7/pages/faculty/ken.french/data_library.html1666 D.Moreno,R.Rodríguez/Journal of Banking&Finance33(2009)1664–1676one-month Treasury bill yields;(2)the lagged dividend yield of the CRSP value–weighted NYSE/AMEX/NASDAQ stock index;(3)a lagged measure of the slope of the term structure;and(4)a lagged corpo-rate spread on the corporate bond market.The term spread is a con-stant maturity10-year Treasury bond yield minus the3-month T-bill yield.The corporate bond default yield spread is Moody’s BAA-rated corporate bond yield minus the AAA-rated corporate bond yield.These variables havefigured most prominently in studies of mutual fund performance(see Ferson and Schadt,1996;Ferson and Warther,1996).Table2presents summary statistics on the risk factors.The mean,median,and standard deviation return data are in annual percentages.In addition,the monthly maximum and minimum re-turn,and the p-value of the Jarque–Bera test are shown.The mar-ket factor is the excess return on the value-weighted index;SMB is the factor mimicking portfolio size;HML is the factor mimicking portfolio book-to-market value;WML is the factor mimicking port-folio1-month return momentum;and(SÀ–S+)and(SÀ–R f)are the coskewness factors.Monthly US equity returns from CRSP NYSE/AMEX/NASDAQ files from December1957to December2006are used to compute the coskewness factor,including ordinary common stocks and excluding real estate investment trusts,stocks of companies incor-porated outside of the United States,and closed-end funds.For robustness,various specifications of the coskewness factor are investigated to ensure that it is not sensitive to its construction methodology.8Thus,there are various cut-off definitions(bottom 15%–top15%and bottom20%–top20%)in addition to Harvey and Siddique’s(2000)factor(bottom30%–top30%).Moreover,these fac-tors are computed by employing the parameter c2from the quadratic model(2)to sort the common stocks instead of using the standard-ized unconditional coskewness(3)proposed by Harvey and Siddique (2000).The risk premium for all these coskewness factors is positive, becoming higher as the cut-off is more extreme.The risk premiums for the(SÀ–S+)are3.44%,3.19%,and2.33%annually for the15–15, 20–20,and30–30cut-offs respectively,and range from8.9%to 8.30%annually in the case of the(SÀ–R f)coskewness factor.Con-structing the coskewness factor from the quadratic model,the pre-miums for the different cut-offs are 3.52%, 2.74%,and 2.56% respectively for the(SÀ–S+),and9.94%,9.01%,and8.73%for the (SÀ–R f)factor.In order to analyze the potential impact of the factors when they were added to the models,the correlation coefficient among all of them is computed.The correlation for the same coskewness factor using different cut-offs is very high(e.g.for the(SÀ–S+) 15–15and20–20the correlation is0.94),allowing the conclusion that the factor is not sensitive to the selection of different cut-offs. 
In addition,the correlation computed using different measures of coskewness is also high,e.g.for the(SÀ–S+)30–30from the Harvey and Siddique measure(2000)and the measure from the quadratic model it is0.85.Therefore,the coskewness factor is not sensitive to the different ways of measuring the coskewness.Table2also presents the contemporaneous correlations be-tween the factors included in the models.Observe that these cor-relations are generally small,ranging fromÀ0.40to0.37.But there is one case in which correlation is not negligible:when the existing correlation between the market factor and the coskewness factor(SÀ–R f)is0.913.In order to avoid possible multicollinearity problems,this coskewness factor is orthogonalized with respect to the market factor.The correlation with the market then changes to zero,and the correlation with the other coskewness factor(SÀ–S+)changes to0.90(whereas before it was only0.37),corroborat-ing the robustness of the coskewness factor used here.Conse-quently,the rest of the paper shows results only considering theTable1Summary statistics of mutual funds:January1962–March2006.Mean return Standard deviation Kurtosis Max.losses Max.returns Test normalityAggressive Growth11.03821.698 4.272À18.77717.57451Growth/Income7.61014.662 4.191À13.41610.62453Long-Term Growth7.44117.674 4.215À15.13213.74944All Funds8.59518.206 4.272À15.85414.19248Number of funds36–8484–120120–156156–288>288>432 Aggressive Growth211210546002441694518 Growth/Income16177514261971677650 Long-Term Growth3090170871536619610561 All Funds681935131741807532226129The table reports summary statistics for6819mutual funds in the database categorised by investment objectives:‘‘Aggressive Growth”,‘‘Income”,‘‘Growth and Income”and ‘‘Long-Term Growth”.A fund is included in the investment universe if it contains at least36consecutive monthly return observations.For each category,we present in columns the annualized mean return(as a%),standard deviation(as a%),kurtosis,monthly minimum return(Max.Losses)and monthly maximum return(Max.Return)during the entire sample period,and the percentage of funds for which the null hypothesis of normality of a Jarque–Bera test is rejected at the10%level of significance.The table also shows the total number of mutual funds in each category in intervals of36–84,84–120,120–156,156–288and more than288and432observations.Table2Summary statistics of risk factors and instruments.Market SMB HML WML SÀ–S+SÀ–R fMean 5.453 6.118 1.97310.184 3.1908.755Median9.251 6.000 1.56010.680 2.2398.608Maximum16.04913.63013.42018.40016.38423.005MinimumÀ23.134À9.840À21.850À25.050À13.422À19.070Std.dev.15.3709.97511.11313.82510.03316.721SkewnessÀ0.4800.274À0.595À0.6500.4810.007Kurtosis 4.933 5.3808.6758.470 6.747 4.763Jarque–Bera0.0000.0000.0000.0000.0000.000Cross correlationsMarket SBM HML WML SÀ–S+SÀ–R fEXRM 1.000SBMÀ0.402 1.000HML0.290À0.271 1.000WMLÀ0.079À0.041À0.103 1.000SÀ–S+0.0050.1090.1260.055 1.000SÀ–Rf+0.913À0.3100.277À0.1320.371 1.000This table reports summary statistics on the risk factors.The mean,median and std.dev.are represented in annual percentages.We show the monthly maximum andminimum return,and the p-value of the Jarque–Bera test.The Market factor is theexcess return on the value-weighted CRSP NYSE/AMEX/Nasdaq portfolio;SMB isthe factor mimicking portfolio for size;HML is the factor mimicking portfolio forbook-to-market,WML is the factor mimicking portfolio for the1-month returnmomentum;SÀ–S+and SÀ–R f are the coskewness factors.Table2also presents the contemporaneous correlations 
between the factors included in the models.8The authors are grateful to an anonymous referee for comments on this section, which have led to substantial improvements in the paper.D.Moreno,R.Rodríguez/Journal of Banking&Finance33(2009)1664–16761667factor (S À–S +),where coskewness is computed using (3)and with the intermediate cut-off 20–20.94.Empirical results4.1.The coskewness of mutual fundsTable 3reports some summary statistics that compare the coskewness measures across the three categories of funds analyzed in this paper.Panel A shows the unconditional skewness computed as the third central moment over the mean.The results indicate that,considering all funds jointly,half of the funds have a negative skewness,significant at the 5%level.10For each of the categories,these percentages are 42%,61%,and 49%,respectively.This result shows that the skewness of the funds is significant and that the third moment of the return distribution should not be ignored.Panel B presents the results of measuring the unconditional coskewness of the mutual funds.The mean value for all equity mutual funds is neg-ative (À0.013)and the proportion of funds that have a significant coskewness is,on average,19.63%.Moreover,it can be observed that each category has a different standardized unconditional coskew-ness,with the Aggressive Growth being the only one having a nega-tive average coskewness (À0.063)and further,having the greatest number of funds with negative unconditional coskewness (16%).Gi-ven that a mutual fund is a portfolio of assets,this finding indicates that approximately 16%of those mutual funds are investing in assets with negative coskewness and that therefore the required return of these funds according to the 3MCAPM should be higher.When the quadratic model (2)is estimated as an alternative measure of coskewness,the results are similar to those found in Panel B.The mean value estimated for the parameter c 2,shown in Panel C,is neg-ative when all categories are considered together and the percentage of funds with a significant parameter is around 10%.Again,the Aggressive Growth category shows a negative value (À0.531)onaverage and the Growth Income and Long-Term Growth funds pres-ent a positive one.The figures in Table 3might give the impression that very few funds exhibit significant coskewness and that,as a result,the im-pact of coskewness could be marginal.Given that the main goal of the paper is to analyze the importance of coskewness as an addi-tional factor in performance evaluation,it is interesting to report the funds’exposure to the coskewness factor rather than just the coskewness measures.The sensitivities to the coskewness factor depend on the factor and the funds.Since,in general,the average betas differ significantly from zero for all categories and factors,using this (S À–S +)factor,between 58%and 64%of the funds are sta-tistically significant.11Hence,these results suggest that coskewness could play an important role in explaining the performance evaluation of mutual funds and that disregarding it will create a bias –perhaps a signif-icant one –in assessing performance evaluation.This hypothesis is tested in the following section.4.2.Performance evaluation of mutual fundsThe results of the time-series estimation for the models are re-ported in Table 4(the first panel reporting all funds jointly and subsequent panels reporting each category of mutual funds).For each panel,Rows 1and 2show the unconditional estimation of the CAPM and the CAPM plus the coskewness factor.The Carhart 
four-factor model (1997)and the same model plus the coskewness factor are in Rows 3and 4.For each model,alpha,beta(s),the ad-justed R 2of the regressions,and the likelihood ratio test are reported.An interesting result from Table 4is that,in general,for all the categories of funds used in this paper and for all models,the aver-age coefficient obtained for the CSK factor is statistically different from zero.The values range from À0.09to 0.10,depending on the category and the model analyzed.In addition to statistical sig-nificance,there is the economic significance.As the coskewness factor is an excess return,an approximate value of the coskewness risk premium can be calculated by multiplying the loadings on the factor by the sample average return of the coskewness portfolio9The results for the rest of the specifications of the coskewness factor are very similar and are available upon request.Table 3Skewness and coskewness of mutual funds.All fundsAggressive Growth Growth/Income Long-Term Growth Panel A:unconditional skewness Mean À0.355À0.299À0.455À0.340MedianÀ0.399À0.327À0.470À0.391Positive and Sign.at 5% 4.4587.955 1.546 3.592Negative and Sign.at 5%49.89042.33061.53448.964Panel B:unconditional coskewness Mean À0.013À0.0630.0160.006MedianÀ0.006À0.0650.0290.014Positive and Sign.at 5%7.626 4.5458.5349.256Negative and Sign.at 5%12.01116.2419.89510.227Panel C:C 2i Mean À0.134À0.5310.0930.018MedianÀ0.018À0.5430.1270.083Positive and Sign.at 5% 4.355 1.847 5.937 5.243Negative and Sign.at 5% 5.9548.665 5.318 4.434t -Statistic0.8260.8410.8570.800The unconditional skewness is computed as the third central moment over the mean.The unconditional coskewness is defined as E ðe i ;t þ1e 2M ;t þ1Þ=ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiE ðe 2i ;t þ1Þq E ðe 2M ;t þ1Þ,wheree i,t+1is the residual from the regression of the excess return on the contemporaneous market excess return and e M,t+1is the residual of the excess market return over its mean.c 2i is an alternative measure of coskewness from the quadratic model consistent with the three-moment CAPM,R i ;t ÀR f ;t ¼c 0i þc 1i ½R M ;t ÀR f ;t þc 2i ½R M ;t ÀR M ;t 2þm it .Significance levels for unconditional skewness and coskewness are computed based on bootstrap percentiles methodology (for a more detailed description on this meth-odology see Efron and Tibshirani (1993)),and we find that at 5%they are À0.40and 0.40for the unconditional skewness and 0.20and 0.20for the unconditional coskewness.10Significance levels for unconditional skewness and coskewness are computed based on bootstrap percentiles methodology (for a more detailed description of this methodology see Efron and Tibshirani,1993).Here,they are À0.40and 0.40for the unconditional skewness and À0.20and 0.20for the unconditional coskewness at the 5%level.11These figures are calculated by regressing the excess return of each fund on the returns on the (S À-S +)portfolio.1668D.Moreno,R.Rodríguez /Journal of Banking &Finance 33(2009)1664–1676。
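As a rough, purely illustrative calculation based on the figures reported above: a fund with a loading of 0.10 on the (S−–S+) factor, the upper end of the range reported in Table 4, combined with the 3.19% annual premium reported for the 20–20 cut-off, implies an adjustment to its annual alpha of roughly 0.10 × 3.19% ≈ 0.32 percentage points.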
Searching ScopusScopus provides records of articles from thousands of peer-reviewed journals (and other types of document) covering a wide range of subjects (social and policy sciences, health, medicine, sport, management, biology, chemistry, physics, architecture, engineering). Many of Scopus’s search results will include links to full documents. The purpose of this guide is to help you search Scopus effectively - many of the principles covered will also help you search other databases.Contents1. How to use this guide p.12. Accessing Scopus p.23. Selecting your search terms p.34. Entering your search terms p.55. Re-sorting and refining your results p.76. Engaging with abstracts (summaries)p.97. Accessing the full article p.98. Broadening your perspective on a topic p.109. Saving and emailing individual results p.10Saving searches and email alerts: please refer to the guide:/library/pass.bho/infoskills/saveSearchesAlerts-sps.pdf1. How to use this guide:don’t skip this section!Most of the examples in this guide are based on the following assignment title:“Investigate the extent to which rates of attendance impact upon academic achievement by undergraduates”.Work through the whole of this guide. In each section, note the instructions and examples given, and try a search of Scopus following the same instructions. However, whensearching Scopus, use one of your own assignments (or research questions) to help youselect your own search terms (rather than using the examples in this guide).First year SPS students taking SP10159 workshops: use the theme of the article by Roberts and Ravn to help you select your own search terms (you will be using this article as thebasis of your November assignment).2. Accessing Scopusa. You need to access Scopus via the Library website to get full access. Along with mostother library resources, it is best to use the browser, Google Chrome, to access Scopus. Go the Library homepage:https:///home - then, click the link for your subject within the “Subject Resources” section.This will take you to your subject’s library webpages where you will find a link to Scopus in ‘Search the Literature’. Alternatively, search for Scopus via the Library Catalogue.3. Selecting your search termsThink carefully about your search terms before entering them. This will help the database return a stronger set of relevant search results.3a. Identifying sub-themesTo select your select your search terms, identify the key words or phrases within your assignment title/theme (or research question). Consider which words/phrases distinguish your title/theme from any other one. If you identify multiple key terms, each of these represents a ‘sub-theme’ within the overall theme of your assignment.For example, the following are the three ‘sub-themes’ drawn from the assignment title, “Investigate the extent to which rates of attendance impact upon academic achievement by undergraduates”:3b. Identifying alternative words or phrasesi.Think about the potential range of alternative words and phrases that could be usedto describe each separate sub-theme. Consider terminology that you already know from academic books, journal articles, lectures and tutorials, and commonly-used language.Why do this? This will help you decide which words to enter into Scopus’ searchboxes.ii.Also, if you enter a large range of relevant search terms, you are more likely to increase the number of useful search eful tip: You might find it helpfulto write down your alternative words/phrases in separately themed rows e.g.4. 
Entering your search terms4a. Create multiple search boxesClick ‘+ add search Field’ underneath the search box.In order to create multiple search boxes - you need one box for each of your sub-themes.4b. Organising and entering your search termsEnter each set of search terms (i.e. each sub-theme) into its own separate search box.Enter the word or between each alternative word/phrase.For an explanation of the asterisk*, refer to section 3C of this guide. Here is an example ofa set of organised search terms in Scopus.In the above example, note the following:∙The default AND appears between search boxes. This means that at least one word/phrase from each search box will appear in each of your results.∙You have the option to limit your results to a specific date range. This is not appropriate for all searches (some older articles may still be relevant or frequently cited).4c. Useful search tips!Truncating words using an asterisk *As in the examples above, try using an asterisk where relevant – this might help increase your number of search results. You can add this at the end of the stem of a word to find variations e.g.attend*searches for attend, attend s, attend ing, attend ed, attend ance etc.absen*searches for absen ce, absen ces, absen t, absent ee, absent ees, absent eeism Searching for a specific phraseYou can enter “speech marks” around a phrase so that Scopus searches for only those words in exactly the same order e.g. “university student*”,“student at university”4d. Click the search button (the magnifying glass icon).Take a look at the number of your search results. Scroll down to the ‘subject area’ options in the left-hand column (left of the results). If you are retrieving many irrelevant searchresults, you can refine your results by selecting filters in the left-hand column (e.g. refine by ‘social sciences’). Click a subject filter and then take another look at the number of yoursearch results.5. Re-sorting and refining your results5a. Re-sorting your resultsIf your results appear in date order, re-sort them so that they appear starting with the most relevant. You can also re-sort them in order of those that are most highly cited. The “sort on” options appear just above your search results and to the right.5b. Using “limit” optionsExperiment with ticking/selecting some options in the left-hand column to refine yourresults – for example, you could limit your results by selecting/ticking keyword(s) andclicking the ‘limit to’ button - this appears at both the top and bottom of the column.Note both the ‘view less/more’ and ‘view all’ links in the keywords menu.5c. Retrieving too few results?Click the orange ‘Scopus’ logo (top left-hand corner) to return to your search terms. Consider whether any further terms exist on your topic and if so, add these to the relevant search boxes (again entering the word or in between multiple terms). Try another search. If you still retrieve few results, search more broadly on the topic. You could so this by identifying the least significant of the sub-themes and excluding the associated search terms from your next search.5d. Retrieving too many irrelevant results?Click the orange ‘Scopus’ logo (top left-hand corner) to return to your search terms. Consider whether any of your search terms are of only marginal relevance – if so, remove these and try another search. If you continue to retrieve many irrelevant results, consider whether your topic contains a further sub-theme. 
If so, enter the search term(s) associated with that sub-theme in a further search box.5e. Proximity searching(an optional step but this can prove useful)Proximity searching is useful where multiple similar variations on a single phrase exist e.g. “social work training” “training of social workers” “teaching social work”…By entering a search term followed by W/ and a number, you can search for two search terms to appear near each other e.g. within 5 words of each other.Place any alternative terms in brackets.“social work*” W/5 (train* OR teach*)If relevant to your topic, undertake a “proximity search” by adapting your search terms (as in the above example) and note down the number of results retrieved……6. Engaging with abstracts (summaries)If you hover just to the right of the “links” button, you will see an option to “showabstract” – click on this to view a summary/abstract of the relevant document i.e. findings and methodologies.Note the terminology used in abstracts. By developing your awareness of relevantterminology, this may help you modify/expand your search terms. In turn, this willpotentially help you retrieve a greater number of relevant results. If you click on an article title within your results, you may also find author key words and subject terms listed –these describe the content of the article and may inspire you to modify your own terms. 7. Accessing the full article6a. Checking online availabilityTo find out whether or not the full text of an article is available for you to read, click the blue and white ‘Links’ button:This button will take you to a “links” page which will provide a link to the full article or atleast the journal website (e.g. if the library subscribes to the relevant journal).6b. If the full article is NOT available online:Search the Library Catalogue via the Library homepage: https:///home - Search for the journal title (not the article title). If we provide the journal and its Catalogue record states ‘copies available’, we provide a print copy (click ‘copies available’ to find out if we hold the relevant issue). If we do not provide the article in any format, and it ispotentially important for your work, please contact your Subject Librarian for advice.8. Broadening your perspective on a topic“Cited by” links and referencesBack to your search results: Note the words times cited to the right of each result. Thistells you the number of times an individual article has been cited/referenced by otherarticles (i.e. those that have been indexed by Scopus). You can click on this to find details of those other articles.It is good practice to consider using such articles to support/extend/challenge yourargument. They may help you demonstrate a broader understanding of the topic, providing you with a more up-to-date perspective.For similar reasons, it is also good practice, where relevant, to follow-up an individualarticle’s own references i.e. use previous articles related to the same topic.9. Saving and emailing resultsa. It is good practice to save multiple copies of selected results so that you have “back-ups”in the event of losing them. To save them, first Click the box next to each individual result of interest.b. Email the selected results to yourself: select the envelope icon. A short online form will appear. Complete the form (e.g. enter your email address and click Send.c. Other export options:Select the Export option just above your search results. A dialog box will appear where you find various options for saving your results. 
For example:Save the results to a folder of your choice: select Text format and click the blue Export button. The references will appear as a separate file for you save. If the results open up in a separate window, you may need to copy and paste them into a Word document - then save that document.Saving to EndNote: select RIS format and click the blue Export button to create a file. If you use EndNote Desktop, the results will be imported immediately into your EndNote library. If you use EndNote Online, you need to save the file and import into EndNote Online - here are the instructions: https:///guides/how-to-use-endnote-online-library-guides-part-2-downloading-database-references/#scopusTry also searching further library databases (e.g. IBSS). No single database provides details of all articles written on a topic.If you would like any further support in using this database,please contact your Subject Librarian.Peter Bradley: Subject Librarian for Health & SPS: ***************.uk22 November 201911。
ORIGINAL ARTICLEMeasurement of the Q-tip angle before and after tension-free vaginal tape-obturator(TVT-O):preoperative urethral mobility may predict surgical outcomeSun-Ouck Kim&Ho Seok Jung&Won Seok Jang&In Sang Hwang&Ho Song Yu&Dongdeuk KwonReceived:3July2012/Accepted:14October2012#The International Urogynecological Association2012AbstractIntroduction and hypothesis The purpose of this study was to compare the results of the Q-tip test before and after the tension-free vaginal tape-obturator(TVT-O)in women with stress urinary incontinence(SUI)to determine the value of the Q-tip test in predicting the outcome of transobturator tape(TOT). Methods Between June2008and June2009,59women with SUI who underwent the TVT-O procedure and were followed up for at least6months were analyzed.Urethral hypermobility was defined as a maximal straining angle greater than30°as measured by the Q-tip test.Parameters of evaluation included a comprehensive medical history, physical examination,Q-tip test,stress test,and urodynamic study,which included determination of the Valsalva leak point pressure.Cure was defined as no leakage of urine postoperatively either subjectively or objectively,whereas failure was defined as the objective loss of urine during the stress test.Results The patients were divided into two groups according to their preoperative Q-tip angle:<30°(group1,n021)and ≥30°(group2,n038).The Q-tip angle decreased significantly in both groups:from25.9±5.98°preoperatively to18.4±7.23°postoperatively in group1(p00.04)and from36.6±6.75°preoperatively to24.1±5.48°postoperatively in group2 (p00.03).The difference was obviously pronounced in group 2.The incontinence cure rate was significantly higher in group 2(97.4%)than in group1(85.7%;p00.04). Conclusions Our results suggest that mobility of the proxi-mal urethra is associated with a high rate of success of the TVT-O procedure.Keyword Q-tip angle.Transvaginal tape.Stress incontinence.WomenIntroductionThe“hammock theory”proposed by DeLancy in1994is a readily understood way of explaining the continence mech-anism[1].The fascial attachments connect the periurethral tissue and anterior vaginal wall to the arcus tendineus at the pelvic sidewall,and the muscle attachments connect the periurethral tissue to the medial border of the levator ani muscle[2].This musculofascial support provides a hammock-like supporting layer onto which the urethra is compressed during increases in intra-abdominal pressure. This compression induces an increase in urethral closure pressure during a stress maneuver or coughing.Nowadays, two minimally invasive midurethral sling procedures are commonly used to produce urethral suspension of the ure-thral fascia with a polypropylene mesh:the tension-free vaginal tape(TVT)procedure,which was first reported in 1996[3],and the tension-free vaginal tape-obturator(TVT-O)procedure,which was first reported in2001[4].Both procedures were developed to restore urethral support through the placement of a sling at themidurethra. 
S.-O.Kim:H.S.Jung:W.S.Jang:I.S.Hwang:H.S.Yu:D.KwonDepartment of Urology,Chonnam NationalUniversity Medical School,Gwangju,KoreaS.-O.Kim(*)Department of Urology,Chonnam NationalUniversity Hospital and Medical School,8,Hak-dong,Dong-ku,Gwangju501-757,South Koreae-mail:seinsena@Int Urogynecol JDOI10.1007/s00192-012-1978-6Many studies have attempted to evaluate the correlation between proximal urethral hypermobility(UH)and midure-thral sling procedures[5–10]and the effect of urethral mobility on the surgical outcome of those procedures in women with stress urinary incontinence(SUI)[5].Because the sling is placed at the midurethra,and not in the bladder neck,the surgical outcome of midurethral sling placement is determined by the urethral mobility and urethral kinking that induce the urethra to bend during stress or coughing.How-ever,controversy exists over the impact of preoperative and postoperative urethral mobility on the surgical outcome of midurethral sling procedures[11].The Q-tip test was intro-duced for the measurement of urethral mobility in women with SUI[12].The aim of this study was to compare UH as measured by the Q-tip test before and after the TVT-O procedure in women with SUI to determine the value of the Q-tip test in predicting the outcome of the TVT-O procedure.Materials and methodsSubjects and study designBetween June2008and June2009,78consecutive women with SUI underwent a TVT-O produced by Johnson& Johnson for clinically and urodynamically proven SUI[4] and the medical records of those subjects were retrospec-tively reviewed.Five patients who had a history of previous urethral surgery including midurethral sling operation were excluded from the analysis.Two patients who had postop-erative urinary retention defined by incomplete emptying with a residual urine of>150ml requiring sling release were excluded owing to the potential effects of the release on the Q-tip angles.Four patients had incomplete Q-tip data for comparison and eight patients were lost to follow-up.Fifty-nine women who were followed up for at least6months postoperatively were finally analyzed.All patients under-went urological evaluation before treatment,including a comprehensive medical history,physical and neurological examinations,and urine analysis.Additional parameters of evaluation included a Q-tip test,stress test,and urodynamic study,which included determination of the Valsalva leak point pressure(VLPP).The examiners at this institution were two well-experienced specialists in urology.Patients were examined with an empty bladder while in the dorsoli-thotomy position on a standard gynecological table.The Q-tip test was performed by placing a lubricated cotton swab transurethrally and then withdrawing it until the sensation of resistance was met.A goniometer was used for measure-ment of the Q-tip angle with the horizontal plane.UH was defined as an angle of≥30°above horizontal with maximal straining effort.Cure was defined as no leakage of urine postoperatively either subjectively or objectively,whereas failure was defined as the objective loss of urine during the stress test.The procedures for this study complied with the guidelines provided by the Declaration of Helsinki.The terminology used in this paper conforms to the standards set by the International Urogynecological Association (IUGA)/International Continence Society(ICS)joint report on terminology for pelvic floor disorders in women[13].Exclusion criteriaPatients were excluded from this analysis according to the following criteria:patients 
who had undergone previous urethral surgery,retropubic surgery,or anterior colporrha-phy;patients with a history of urinary retention with residual urine over200ml/s;and patients with active urinary tract infections or other urological disease or drug treatment that could affect bladder function and urethral function.Patients using alpha-adrenergic receptor agonists or antagonists were also excluded.Potential patients who had any possible cause of neurogenic bladder were excluded.Statistical analysesSPSS(version17for Windows,SPSS Inc.,Chicago,IL, USA)was used for the statistical analyses.Data were ana-lyzed by paired t tests and are expressed as means±standard deviations(SDs).A p value<0.05was deemed statistically significant.ResultsThe patients’demographic data are shown in Table1.The average age(years),mean parity,and average body mass index(BMI)(kg/m2)of the participants were55.1±12.7, 2.6±0.8,and25.6±3.4,respectively.The preoperative esti-mated mean maximal flow rate(ml/s)and postvoid residual urine(ml)were28.5±3.9and27.7±15.3,respectively.The main incontinence severity of the subjects was Stamey grade 1,which was observed in41(69.5%)patients,followed by Stamey grade2in17(28.8%)and Stamey grade3in1 (1.7%).The estimated mean Q-tip angle(°)and VLPP (cmH2O)on urodynamic study were33.1±5.82and87.9±3.4,respectively.Of the59participants,21patients(35.6%) had a Q-tip angle of<30°(group1),and38patients (64.4%)had UH as determined by a Q-tip angle≧30°(group2).After the midurethral sling operation,the postoperative Q-tip angle decreased significantly in both groups:from 25.9±5.98°preoperatively to18.4±7.23°postoperatively in group1(p00.04)and from36.6±6.75°preoperatively to24.1±5.48°postoperatively in group2(p00.03).TheInt Urogynecol Jdifference was obviously pronounced in group2.The mean difference in the Q-tip angle before and after the operation was7.5°in group1and12.5°in group2(Table2).As also shown in Table2,the cure rate of incontinence was signif-icantly higher in group2(97.4%)than in group1(85.7%; p00.04).DiscussionIn our study,we found that more than64%of the patients had UH,which was defined as a Q-tip straining angle of more than30°.After the midurethral sling operation,ure-thral mobility showed a more pronounced decrease in the group with UH than in the group without UH.The cure rate of incontinence was also significantly higher in the group with UH(97.4%)than in the group without UH(85.7%).These results suggest that the mobility of the proximal urethra is associated with a high rate of success of the TVT-O procedure and that UH as defined by a Q-tip strain-ing angle of≥30°above horizontal could be a prognostic value in incontinent women undergoing surgery.Stress incontinence is considered to be caused by a poor-ly supported proximal urethra causing UH,which is a sig-nificant factor in the development of SUI.Therefore,the initially developed surgical treatments for SUI,such as classic retropubic or transvaginal bladder neck suspension, were designed to improve support of the proximal urethra and to correct sphincter competence[14,15].After the introduction of the hammock theory,surgical techniques were developed for support of the suburethral layer,such as the midurethral sling procedure.Many studies have attempted to evaluate the correlation between proximal UH and midurethral sling procedures[5–10]and the effect of urethral mobility on the surgical outcome of those proce-dures in women with SUI[5].Bakas et al.[9]conducted a study involving41subjects for comparison of results of the 
Q-tip test before and after the TVT procedure in women with SUI.They reported that adequate mobility of the prox-imal urethra is associated with a high rate of success of the TVT procedure at the6-month follow-up examination.They emphasized that reduced urethral mobility does not indicate surgical failure but rather that the lower the urethral mobil-ity,the higher the possibility of an unfavorable surgical outcome.In the present study,our findings agree with the study by Bakas et al.in terms of the high rate of success of the midurethral sling operation in patients with preoperative urethral mobility.However,a contrary point of view exists with regard to this issue in the development of the midurethral sling pro-cedure.Although urethral mobility is a common feature of patients who undergo anti-incontinence procedures,correc-tion of urethral mobility may not be a necessary factor for cure of incontinence[8,16–18].Furthermore,successful treatment of SUI does not necessarily provide a suburethral support mechanism rather than the correction of urethral mobility.Paick et al.[5]evaluated change in urethral mo-bility after the transobturator tape placement and the effect of urethral mobility on the outcome of the transobturator tape placement in a total of159women with SUI.They reported that the Q-tip test values showed a significant decrease compared with baseline in patients with preopera-tive UH but did not change in patients without preoperative UH.In terms of the cure rate,there was no significant difference between the groups(preoperative UH vs no UH),which suggests that the lack of urethral mobility is not a factor indicating a high risk of failure after the trans-obturator tape placement.Those authors insisted that the Q-tip angle would not change after the midurethral sling oper-ation because the original concept of this procedure isTable1Patients’characteristicsValueAge(years)55.1±12.7 BMI(kg/m2)25.6±3.4 No.of deliveries 2.6±0.8 Hysterectomy(%)11(13.6) Anti-incontinence operation(%)5(8.5) Maximal flow rate(ml/s)28.5±3.9 Postvoid residual urine volume(ml)27.7±15.3 VLPP(cmH2O)87.9±3.4 Stamey grade(%)Grade141(69.5) Grade217(28.8) Grade31(1.7)Q-tip test(°)31.2±5.82 UH(n)Group1(Q-tip angle<30°)21 Group2(Q-tip angle≧30°)38Data are presented as mean±SD,unless otherwise noted;data in parentheses are percentagesBMI body mass index,VLPP Valsalva leak point pressure,UH urethral hypermobilityTable2Results of Q-tip test in patients who underwent the TVT-O procedureCure rate(%)Q-tip angle Q-tipanglechangep value Preoperative PostoperativeGroup118/21(85.7)25.9±5.9818.4±7.237.50.04 Group237/38(97.4)36.6±6.7524.1±5.4812.5Group1Q-tip angle<30°,group2Q-tip angle≧30°.Data are presented as mean±SDInt Urogynecol Jmidurethral dynamic kinking and compression of the mid-urethra at the level of tape placement,causing the sling to be placed in the midurethra without tension on the resting urethra[19].It has been reported that proximal urethral mobility and the Q-tip angle could change over time after the midurethral sling operation.In one study,Lukacz et al.[7]reported the anatomical effects of the TVT procedure in54patients with a1-year follow-up evaluation.They found that the Q-tip angle decreased from the mean preoperative value only during the initial period after the TVT procedure and tended to increase with the passage of time,with the ultimate long-term Q-tip angle remaining unchanged from the preopera-tive value.One reason for the gradual change in the Q-tip angle has been suggested to be the elastic 
configuration of the polypropylene mesh and the possibility of stretching over time[20].Another suggestion is the limited mobility of the urethra immediately postoperatively owing to tissue edema and swelling[7].Less active Valsalva effort during the stress test as a result of pain or fear of disrupting the wound is another possible reason for a decreased Q-tip test value during the immediate postoperative period[7].In this study,we measured urethral mobility and assessed the cure rate during a relatively short-term follow-up period of 6months after the operation.Thus,it is possible that the urethral mobility and related cure rate after the operation could change over time in these patients.The Q-tip test has been considered to be a valuable tool for assessing UH in women with stress incontinence and for predicting surgical outcome[21].Although surgical correc-tion of SUI should not rely solely on the result of the Q-tip angle,the results of the current study confirm that the Q-tip test can be used as a predictor of surgical outcome.Our study showed that the cure rate after the TVT-O procedure was better in patients with UH.Therefore,we recommend performing the Q-tip test in patients with SUI who are expecting to undergo the midurethral sling procedure.This test may provide a useful reference for urogynecologists in their evaluation of patients in an attempt to predict surgical outcome and provide proper preoperative counseling to patients.To date,the Q-tip test remains the simplest and most reliable clinical tool for use in quantification of the loss of support of the urethrovesical junction.Karram and Bhatia standardized the technique of the Q-tip test and emphasized the importance of proper placement of the Q-tip at the urethrovesical junction and proper examination[22].The present study had some limitations.Valsalva effort, which may introduce variation in the degree of the Q-tip straining angle measured in the same patient,may have differed between the tests,possibly as the result of discom-fort related to instrumentation.Furthermore,observer vari-ability may cause a wider range of Q-tip values.Another limitation of this study is that the number of patients included was relatively small and the follow-up period was relatively short.Regarding knowledge of the relation be-tween the cure rate of SUI after the TVT-O procedure and postoperative Q-tip angle change,we could not determine the exact reason for failure of each patient after the TVT-O procedure and the change in the Q-tip angle in those patients.In conclusion,the results of the present study showed that the postoperative Q-tip angle decreased significantly from the preoperative value and that this change was pronounced in the group with preoperative UH.Furthermore,the related cure rate of incontinence was significantly higher in the group with preoperative UH than in the group without UH.This result implies that mobility of the proximal urethra is associated with a high rate of success of the TVT-O procedure.Conflicts of interest None.References1.DeLancey JO(1994)Structural support of the urethra as it relatesto stress urinary incontinence:the hammock hypothesis.Am J Obstet Gynecol170(6):1713–17202.DeLancey JO(1989)Pubourethral ligament:a separate structurefrom the urethral supports(“pubo-urethral ligaments”).Neurourol Urodyn8:53–613.Ulmsten U,Henriksson L,Johnson P,Varhos G(1996)An ambu-latory surgical procedure under local anesthesia for treatment of female urinary incontinence.Int Urogynecol J Pelvic Floor 
Dysfunct 7(2):81–85
4. Delorme E (2001) Transobturator urethral suspension: mini-invasive procedure in the treatment of stress urinary incontinence in women. Prog Urol 11:1306–1313
5. Paick JS, Cho MC, Oh SJ, Kim SW, Ku JH (2007) Is proximal urethral mobility important for transobturator tape procedure in management of female patients with stress urinary incontinence? Urology 70(2):246–250
6. Minaglia S, Ozel B, Hurtado E et al (2005) Effect of transobturator tape procedure on proximal urethral mobility. Urology 65:55–59
7. Lukacz ES, Luber KM, Nager CW (2003) The effects of the tension-free vaginal tape on proximal urethral position: a prospective, longitudinal evaluation. Int Urogynecol J Pelvic Floor Dysfunct 14:179–184
8. Klutke JJ, Carlin BI, Klutke CG (2000) The tension-free vaginal tape procedure: correction of stress incontinence with minimal alteration in proximal urethral mobility. Urology 55:512–514
9. Bakas P, Liapis A, Creatsas G (2002) Q-tip test and tension-free vaginal tape in the management of female patients with genuine stress incontinence. Gynecol Obstet Invest 53:170–173
10. McLennan MT, Melick CF, Cannon S (2004) The position of the urethrovesical junction after incontinence surgery: early postoperative changes. Int Urogynecol J Pelvic Floor Dysfunct 15(1):44–48
11. Deval B, Levardon M, Samain E et al (2003) A French multicenter clinical trial of SPARC for stress urinary incontinence. Eur Urol 44:254–258
12. Crystle D, Charme L, Copeland W (1971) Q-tip test in stress urinary incontinence. Obstet Gynecol 38:313–315
13. Haylen BT, de Ridder D, Freeman RM et al (2010) An International Urogynecological Association (IUGA)/International Continence Society (ICS) joint report on the terminology for female pelvic floor dysfunction. Int Urogynecol J 21:5–26
14. McGuire EJ, Lytton B, Pepe V, Kohorn EI (1976) Stress urinary incontinence. Obstet Gynecol 47:255–264
15. Enhorning G (1961) Simultaneous recording of intravesical and intra-urethral pressure. A study on urethral closure in normal and stress incontinent women. Acta Chir Scand Suppl 276:1–68
16. Lo TS, Horng SG, Liang CC et al (2004) Ultrasound assessment of mid-urethra tape at three-year follow-up after tension-free vaginal procedure. Urology 63:671–675
17. Sarlos D, Kuronen M, Schaer G (2003) How does tension-free vaginal tape correct stress incontinence? Investigation by perineal ultrasound. Int Urogynecol J Pelvic Floor Dysfunct 14:395–398
18. Fritel X, Zabak K, Pigne A et al (2002) Predictive value of urethral mobility before suburethral tape procedure for urinary stress incontinence in women. J Urol 168:2472–2475
19. Ulmsten U, Petros P (1995) Intravaginal slingplasty (IVS): an ambulatory surgical procedure for treatment of female urinary incontinence. Scand J Urol Nephrol 29:75–82
20. Dietz HP, Vancaillie P, Svehla M, Walsh W, Steensma AB, Vancaillie TG (2001) Mechanical properties of implant materials used in incontinence surgery. Neurourol Urodyn 20:530
21. Bergman A, Koonings PP, Ballard CA (1989) Negative Q-tip test as a risk factor for failed incontinence surgery in women. J Reprod Med 34:193–197
22. Karram MM, Bhatia NN (1988) The Q-tip test: standardization of the technique and its interpretation in women with urinary incontinence. Obstet Gynecol 71:807–811
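As a scripted companion to the statistical approach described in the Methods above (paired t tests on preoperative versus postoperative Q-tip angles, with p < 0.05 considered significant), the following Python/SciPy sketch reproduces the shape of that analysis on invented example values rather than the study's patient records. The Fisher exact test shown for the cure-rate comparison is an assumption of this sketch; the paper does not state which test was used for that comparison.

```python
# Minimal sketch of the paired pre/post comparison described in the Methods.
# All values below are invented placeholders, NOT the study's patient data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical preoperative and postoperative Q-tip straining angles (degrees).
pre_op = rng.normal(loc=36.6, scale=6.8, size=38)
post_op = pre_op - rng.normal(loc=12.5, scale=5.0, size=38)

# Paired t test, as stated in the paper's statistical-analysis section.
t_stat, p_value = stats.ttest_rel(pre_op, post_op)
print(f"mean change = {np.mean(pre_op - post_op):.1f} deg, p = {p_value:.4f}")

# Cure-rate comparison (18/21 vs 37/38 cured). The paper does not name the
# test used; a Fisher exact test is shown here only as one reasonable choice.
table = [[18, 21 - 18],   # group 1: cured, not cured
         [37, 38 - 37]]   # group 2: cured, not cured
odds_ratio, p_cure = stats.fisher_exact(table)
print(f"Fisher exact p for the cure-rate difference = {p_cure:.3f}")
```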
1 Introduction

The human factor is the most active element of productivity and the most valuable asset of a corporation. Against the background of knowledge-based economic globalization, unprecedented attention has been given to the development and application of human resources. How to explore the potential of staff, stimulate their positive organizational behavior, and convert human resources into human capital to the greatest possible extent has become an important problem that urgently needs to be solved in the management of modern Chinese enterprises.

Much research has been done abroad on staff working motivation, whereas empirical study of positive organizational behavior remains comparatively scarce. Theories of organizational motivation have mostly been developed abroad; however, whether they are suitable for the Chinese management cultural tradition remains to be considered. For example, when Adams' theory of organizational justice is applied to traditional Chinese organizational behavior culture, with its respect for persons, sensitivity to reputation, priority given to the collective, and roundabout handling of human relations, it is an open question whether organizational justice still has a direct effect on staff positive organizational behavior. So far, studies of staff positive organizational behavior are only beginning at home and abroad, and systematic studies of the concept of staff positive organizational behavior, its internal structure, and its measurement have not yet been established. In China in particular, empirical studies of positive organizational behavior have rarely been seen. The present research attempts an empirical study of the concept, structure, current state and features, and some influencing factors of the positive organizational behavior of staff in manufacturing-type enterprises in China.

2 Findings

Organizational Citizenship Behavior (OCB) is a distinct aspect of individual activity at work, first described in the early 1980s. According to Organ's (1988) definition, it represents "individual behavior that is discretionary, not directly or explicitly recognized by the formal reward system, and in the aggregate promotes the efficient and effective functioning of the organization". No organization is designed perfectly, so organizations need organizational citizenship behavior, which in the aggregate promotes their efficient and effective functioning. With deepening competition among enterprises and the flattening of organizational structures, more and more managers are paying attention to OCB. In China, OCB has also gained considerable attention, and the main research objects of this paper are new employees in corporations. We attempt to examine the correlations among new employees' OCB, performance, and turnover intention.
The paper first introduces the concepts of OCB, performance, and turnover intention. On the basis of the related research, it derives the characteristic indicators of the OCB of new enterprise staff in China, and then tests the hypotheses through a questionnaire survey and analysis of the data.

The elements of counterproductive work behavior (CWB) and the correlated variables, such as organizational justice, interpersonal conflict, organizational constraints, the mediating role of negative emotions, and the effect of team commitment on CWB, were also explored in comparison with OCB. Based on the literature review, data were gathered by survey from a hundred employees of fifteen corporations. The results on CWB are as follows:
(1) Counterproductive work behavior comprises five factors: interpersonal hostility, work deviance, misappropriation of corporate property, aggressive behavior, and misappropriation of personal property.
(2) Decision-making justice affects CWB directly, and indirectly through the mediation of negative emotions.
(3) Interpersonal justice affects CWB directly, and indirectly through the mediation of negative emotions.
(4) Organizational constraints affect CWB directly, and indirectly through the mediation of negative emotions.
(5) Negative emotions have independent explanatory power for the variation in CWB.
(6) Team commitment can predict CWB.
(7) Team commitment moderates the relation between decision-making justice and CWB.
(8) There are significant differences in CWB by gender, position, and number of years of service.

The self-constructed organizational citizenship behavior questionnaire was employed to investigate the structure of the organizational citizenship behavior of personal life insurance salespersons. Seven organizational citizenship behaviors were identified: sportsmanship, self-development, organizational loyalty and compliance, individual initiative, helping behavior, good relationships, and organizational agreement. Based on the results and findings, a model of the organizational citizenship behavior of personal life insurance salespersons was constructed and its applied significance for HR management practice was pointed out. Analysis of the influence of demographic variables on the model shows that sex, age, education, and marital status, and even their interactions, have no notable effect on organizational citizenship behavior; both the justice factor and the satisfaction factor have notable positive correlations with the seven organizational citizenship behaviors; and the seven organizational citizenship behaviors have notable positive correlations with the performance of the salespersons and of the group.
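The correlational findings summarized above (OCB scale scores against outcomes such as performance and turnover intention) rest on straightforward bivariate statistics. The sketch below shows, in outline, how such a correlation matrix can be computed; the respondent count, scale scores, and column names are simulated for illustration and are not taken from the study's questionnaires.

```python
# Minimal sketch of the correlation analysis described above.
# All data are simulated; column names are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 200  # hypothetical number of new-employee respondents

ocb = rng.normal(3.5, 0.6, n)                                    # mean scale score (1-5)
performance = 0.5 * ocb + rng.normal(0.0, 0.5, n)                # built to correlate positively
turnover_intention = 3.0 - 0.4 * ocb + rng.normal(0.0, 0.5, n)   # built to correlate negatively

df = pd.DataFrame({
    "OCB": ocb,
    "performance": performance,
    "turnover_intention": turnover_intention,
})

# Pearson correlation matrix, the usual first step before hypothesis testing.
print(df.corr(method="pearson").round(2))
```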
The rest of the research provides some helpful suggestions for HR management in the life insurance industry, especially for performance management.

1 The positive organizational behavior of staff in manufacturing-type enterprises in China consists of indulgent behavior, responsibility behavior, active behavior, innovative behavior, assistance behavior, and behavior aimed at interpersonal harmony.

2 With good reliability and validity, the self-made questionnaires on corporate staff positive organizational behavior, psychological ownership in the organization, organization-based self-esteem, and corporate image can be used as effective and reliable tools for measuring the positive organizational behavior of staff in manufacturing-type enterprises.

3 The main features of corporate staff positive organizational behavior are the following: a. responsibility behavior and behavior aimed at interpersonal harmony receive adequate attention and develop relatively well, while indulgent behavior has the lowest average score, that is, it develops less well than the other positive organizational behaviors; b. corporate staff positive organizational behavior shows no difference by sex; c. by age, the positive organizational behavior of middle-aged staff is better than that of other age groups; d. by position, the higher the position, the higher the score; e. by years of service, positive organizational behavior tends to rise as working years increase; f. by salary, staff on middle-level salaries score comparatively low in positive organizational behavior compared with staff on relatively lower and higher salaries; g. by enterprise type, foreign-capital enterprises, limited liability corporations, and limited stock corporations score higher than other types of enterprises, and joint-partnership enterprises score lowest; h. by region, staff in the north score higher in positive organizational behavior, and the score of staff in the south is clearly lower than that in other regions.

4 The study finds that the direct effect of organizational justice on positive organizational behavior is not obvious. Organizational justice exerts its influence indirectly through factors including psychological ownership in the organization, perceived organizational support, organization-based self-esteem, and corporate image, which have more influence on positive organizational behavior; that is, when staff feel that the organization's products, achievements, and honor belong to them, when they feel that the organization cares about and supports them, and when they feel worthy and proud as members of the organization and identify with its ideals, corporate staff positive organizational behavior forms readily.

The present study also shows that organizational justice, psychological ownership in the organization, organization-based self-esteem, and corporate image are important factors affecting staff positive organizational behavior; they show significant positive correlations with staff positive organizational behavior and significant positive prediction effects on every individual factor of positive organizational behavior.
A comprehensive view of their influence on the individual factors of positive organizational behavior shows that organization-based self-esteem and psychological ownership in the organization have significant influences on indulgent behavior, while psychological ownership in the organization and corporate image have greater effects on responsibility behavior; and in active behavior and behavior aimed at interpersonal harmony, organization-based self-esteem has the greater influence.

3 Conclusions

The main conclusions of this paper are the following. In China, the OCB of new enterprise staff is formed by organizational compliance, helping behavior, sportsmanship, interpersonal harmony, and protecting company resources. The relationship between staff characteristics and OCB was examined, and testing showed little correlation; the relationship between OCB and performance showed a positive correlation; and the relationship between OCB and turnover intention showed a negative correlation. Finally, on the basis of the theoretical and empirical conclusions, some countermeasure proposals are put forward for enterprise administrators: establishing a positive attitude, recognizing and praising OCB, treating OCB as part of the corporate culture, and so on.

In a word, the innovations of the present research are as follows. As to theory, the study shows that the direct effect of the traditional theory of organizational justice on the positive organizational behavior of Chinese staff is not obvious, and that psychological ownership in the organization, perceived organizational support, organization-based self-esteem, and corporate image have more influence on the positive organizational behavior of Chinese corporate staff in manufacturing-type enterprises; a structural model is also constructed against the Chinese cultural background. As to research methods, the research combines qualitative and quantitative study and, by using several methods, investigates the structure and connotation, the state and features, and the influencing factors of positive organizational behavior from multiple perspectives. As to practice, a series of scientific and effective tools for the measurement of positive organizational behavior is produced for the first time in this research, namely the questionnaire on corporate staff positive organizational behavior, the questionnaire on psychological ownership in the organization, the questionnaire on organization-based self-esteem, and the questionnaire on corporate image. Furthermore, corresponding practical strategies for enterprise management are proposed according to the results of the empirical study.
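The conclusions above lean on the claim that the four self-made questionnaires have good reliability and validity. As a purely hypothetical illustration of the kind of internal-consistency check commonly reported for such scales, the short sketch below computes Cronbach's alpha on simulated item responses; the number of items, the respondent data, and the choice of alpha itself are assumptions of this sketch, since the study does not specify which reliability statistic it used.

```python
# Hypothetical internal-consistency check (Cronbach's alpha) for one scale.
# Item responses are simulated; this is an illustration, not the study's analysis.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores for a single scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(0.0, 1.0, (300, 1))               # 300 simulated respondents
responses = latent + rng.normal(0.0, 0.8, (300, 6))   # 6 items tapping one construct
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```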
June 2023 College English Test Band 6 (CET-6)

Part I  Writing (30 minutes)

Directions: For this part, you are allowed 30 minutes to write a short essay entitled The Certificate Craze. You should write at least 150 words following the outline given below.

1. At present, many people are keen on taking all kinds of certificate examinations
2. Their purposes vary from person to person
3. In my opinion, ...

The Certificate Craze

Note: Answer this part on Answer Sheet 1.
Part II Reading Comprehension (Skimming and Scanning) (15 minutes) Directions: In this part, you will have 15 minutes to go over the passage quickly and answer the questions on Answer Sheet 1.For questions 1-7, choose the best answer from the four choices marked A), B), C) and D).For questions 8-10, complete the sen tences with the information given in the passage.Minority ReportAmerican universities are accepting more minorities than ever.Graduating them is another matter.Barry Mills, the president of Bowdoin College, was justifiably proud of Bowdoin's efforts torecruit minority students.Since 2023 the small, elite liberal arts school in Brunswick, Maine, has boosted the proportion of so-called under-represented minority students in entering freshman classes from 8% to 13%."It is our responsibility to reach out and attract students to come to our kinds of places," he told a NEWSWEEK reporter.But Bowdoin has not done quite as well when it comes to actually graduating minorities.While 9 out of 10 white students routinely get their diplomas within six years, only 7 out of 10 black students made it to graduation day in several recent classes."If you look at who enters college, it now looks like America," says Hilary Pennington, director of postsecondary programs for the Bill & Melinda Gates Foundation, which has closely studied enrollment patterns in higher education."But if you look at who walks across the stage for a diploma, it's still largely the white, upper-income population."The United States once had the highest graduation rate of any nation.Now it stands 10th.For the first time in American history, there is the risk that the rising generation will be less well educated than the previous one.The graduation rate among 25- to 34-year-olds is no better than the rate for the 55- to 64-year-olds who were going to college more than 30 years ago.Studies show that more and more poor and non-white students want to graduate from college –but their graduation rates fall far short of their dreams.The graduation rates for blacks, Latinos, and Native Americans lag far behind the graduation rates for whites and Asians.As the minority population grows in the United States, low college graduation rates become a threat to national prosperity.The problem is pronounced at public universities.In 2023 the University of Wisconsin-Madison –one of the top five or so prestigious public universities –graduated 81% of its white students within six years, but only 56% of its blacks.At less-selective state schools, the numbers get worse.During the same time frame, the University of Northern Iowa graduated 67% of its white students, but only 39% of its munity colleges have low graduation rates generally –but rock-bottom rates for minorities.A recent review of California community colleges found that while a third of the Asian students picked up their degrees, only 15% of African-Americans did so as well.Private colleges and universities generally do better, partly because they offer smaller classes and more personal attention.But when it comes to a significant graduation gap, Bowdoin has company.Nearby Colby College logged an 18-point difference between white and black graduates in 2023 and 25 points in 2023.Middlebury College in Vermont, another top school, had a 19-point gap in 2023 and a 22-point gap in 2023.The most selective private schools –Harvard, Yale, and Princeton –show almost no gap between black and white graduation rates.But that may have more to do with their ability to select the best 
students.According to data gathered by Harvard Law School professor Lani Guinier, the most selective schools are more likely to choose blacks who have at least one immigrant parent from Africa or the Caribbean than black students who are descendants of American slaves."Higher education has been able to duck this issue for years, particularly the more selective schools, by saying the responsibility is on the individual student," says Pennington of the GatesFoundation."If they fail, it's their fault." Some critics blame affirmative action –students admitted with lower test scores and grades from shaky high schools often struggle at elite schools.But a bigger problem may be that poor high schools often send their students to colleges for which they are "undermatched": they could get into more elite, richer schools, but instead go to community colleges and low-rated state schools that lack the resources to help them.Some schools out for profit cynically increase tuitions and count on student loans and federal aid to foot the bill –knowing full well that the students won't make it."The school keeps the money, but the kid leaves with loads of debt and no degree and no ability to get a better job.Colleges are not holding up their end," says Amy Wilkins of the Education Trust.A college education is getting ever more expensive.Since 1982 tuitions have been rising at roughly twice the rate of inflation.In 2023 the net cost of attending a four-year public university –after financial aid –equaled 28% of median (中间旳)family income, while a four-year private university cost 76% of median family income.More and more scholarships are based on merit, not need.Poorer students are not always the best-informed consumers.Often they wind up deeply in debt or simply unable to pay after a year or two and must drop out.There once was a time when universities took pride in their dropout rates.Professors would begin the year by saying, "Look to the right and look to the left.One of you is not going to be here by the end of the year." 
But such a Darwinian spirit is beginning to give way as at least a few colleges face up to the graduation gap.At the University of Wisconsin-Madison, the gap has been roughly halved over the last three years.The university has poured resources into peercounseling to help students from inner-city schools adjust to the rigor (严格规定)and faster pace of a university classroom –and also to help minority students overcome the stereotype that they are less qualified.Wisconsin has a "laserlike focus" on building up student skills in the first three months, according to vice provost (教务长)Damon Williams.State and federal governments could sharpen that focus everywhere by broadly publishing minority graduation rates.For years private colleges such as Princeton and MIT have had success bringing minorities onto campus in the summer before freshman year to give them some prepara tory courses.The newer trend is to start recruiting poor and non-white students as early as the seventh grade, using innovative tools to identify kids with sophisticated verbal skills.Such pro grams can be expensive, of course, but cheap compared with the millions already invested in scholarships and grants for kids who have little chance to graduate without special support.With effort and money, the graduation gap can be closed.Washington and Lee is a small, selective school in Lexington, Va.Its student body is less than 5% black and less than 2% Latino.While the school usually graduated about 90% of its whites, the graduation rate of its blacks and Latinos had dipped to 63% by 2023."We went through a dramatic shift," says Dawn Watkins, the vice president for student affairs.The school aggressively pushed mentoring (辅导) of minorities by other students and "partnering" with parents at a special pre-enrollment session.The school had its first-ever black st spring the school graduated the same proportion of minorities as it did whites.If the United States wants to keep up in the globaleconomic race, it will have to pay systematic attention to graduating minorities, not just enrolling them.注意: 此部分试题请在答题卡1上作答。
Assessing the Probabilities of Obtaining a License

I never met an inventor who didn't want to know the value of his inventions. Accounting rules allow intangible assets to be recorded on balance sheets at their 'highest and best uses'. The highest and best use of inventions is typically to license the invention to a company capable of delivering products that incorporate the invention to the market. Licensing typically yields a higher value than using the related patents to ensure freedom of operation, block competitors, or remove a technology from the market in order to eliminate the risk of displacing successful products.

There are a host of valuation tools, such as the Discounted Cash Flow, Relief from Royalty, Decision Trees, and Probability Weighted Expected Return Methods, that we use to determine the value of inventions and patents once licensing occurs. All of these valuation methodologies require an assessment of the probability of securing a license.

To make this determination we have developed a Licensing Probability Model (LPM). The LPM is a dynamic model that produces different probabilities of securing a license as new information is fed into it. The three major components of the LPM are Expected Financial Returns, Deal Congruity and Deal Momentum. Below is a brief review of these components.

EXPECTED FINANCIAL RETURNS – The biggest impetus for a licensee to license in an invention is the expected financial returns. A quick formula for estimating these expected returns is:

Expected Financial Returns = Market Potential + Strategic Motivation – Upfront Costs – Risks of Implementation

(A hypothetical numerical sketch of this component appears at the end of this article.)

The vast majority of the readers of this article are familiar with the terms above, so a remedial review of them is not necessary. However, the important issue in building models is not filling them in by rote, but rather asking penetrating questions. As Pablo Picasso said, "Computers are useless. They only give you answers. The question is what is important." Some of the questions that we ask to populate the Expected Financial Returns part of the LPM are:

• Market Potential Does demand already exist? Jacob Reinbolt, Partner at Procopio, Cory, Hargreaves & Savitch LLP, points out that technologies that facilitate natural extensions of products into new markets offer high risk-adjusted value. An example of this principle is placing social media applications on smart phones.

• Strategic Motivation How badly does the customer need the technology? According to Mark Wicker, Partner at Morrison & Foerster LLP, industries (such as pharmaceuticals) that have enormous revenues at risk, high rates of obsolescence, and few remaining years of patent life will be highly motivated to adopt new technology. On the other hand, industries that face less competition and are considering replacing basic compounds are likely to wait until the government tells them they can no longer use current products before they license in new technology.

• Upfront Costs What is the cost of delivering the technology to the market? How long will it take to deliver the technology to the market? Will market demand wane in the meantime?

• Risks of Implementation How developed is the technology? The earlier the stage of development, the more pronounced the risk and the lower the value of the technology. Is the technology to be licensed already a product? Is the licensor licensing all of the relevant technology to the licensee? Have searches been conducted to detect how practicing the licensed art might infringe on the patents of others? 
• Risk of Implementation What has the inventor done to de‐risk the invention for the licensee? Measures such as obtaining independent technology validation, a freedom of operation opinion letter, business model validation, or a customer contract reduce the risk for licensees. For cash‐strapped inventors, it is most important that the investments to de‐risk licensing risks for the licensee demonstrate that the technology offers a direct path to rapid revenue.DEAL CONGRUITYJust because a deal appears to be financially rewarding doesn’t mean that the deal will be consummated. Inventors must identify specific companies with requisite financial resources to license in their technology. There must be close alignment in the business objectives of the two parties. To determine the extent of the strategic fit that exists between the licensor and potential licensees, we ask questions such as:• What was the genesis of the invention? Shane Hunter, Partner at Townsend and Townsend and Crew, notes that inventions that solve problems faced by researchers are more likely to be licensed than inventions that come about as a matter of happenstance. For instance, a solution to a problem that has long befuddled legions of researchers will more likely find a licensee than an invention whose inspiration occurred when a lone inventor banged his head on the toilet as he fell to the floor.• What is the licensee’s commitment to commercialize the technology? This answer is a function of the licensee’s history of developing licensed‐in technology and its history of managing collaborations with licensors. Other issues to consider along these lines include:∙Does the technology serve the licensee’s primary or ancillary business units? Are targeted markets natural progressions for the licensees? Are the potential licensees allocating theirresearch dollars to the primary markets to be addressed by the invention? If not, is the reasonbecause of a lack or relevant resources? Are the licensees downsizing in areas of importance for the successful commercialization of the technology?∙Have the licensees committed (or are they likely to commit) their best manufacturing facilities to the project? Best regulatory affairs personnel? Most capable sales and marketing staff to the initiative?∙How receptive is the licensee to license in new technology? The answer to this crucial question is a function of the degree of potential opposition by the licensee’s lawyers and the degree ofpotential opposition by licensee’s research and business development professionals to license in technologies.• Can the licensee’s pipeline accommodate the technology? While an invention can offer attractive return on investment potential to a licensee, the licensee will not be willing to license in such inventionif its pipeline is congested. If the pipeline cannot accommodate the technology in the near‐term, a very high discount rate must be applied going forward to account for the fact that the odds of achieving a license agreement decline rapidly over time. This is because companies want the latest technology, shelved technology becomes stigmatized and there is significant turnover in the licensing departments of large companies.DEAL MOMENTUMIn addition to deals penciling out and existence of strategic congruity, the people factors must be in place for deals to close. Among the considerations to review in determining whether the people factors will act as lubricants or irritants are:• Was the deal brought to the licensee by a respected intermediary? 
Inventions brought to the attention of licensees by licensing agents, patent brokers or lawyers are often well received by licensees as they have already been vetted to some extent. Also, the agents’ familiarity with the licensees’ processes raise the probabilities of closing a license.• The willingness of the inventor to participate in the negotiations is important: The inventor can make the otherwise bland words in a patent come alive. Their excitement for the technology can be infectious. However, in other instances, the inventor is too busy (pursuing other research or has committed to delivering extensive client consultations) to focus on negotiating the licensing agreements at hand.• Stephen C. Ferruolo, Partner with Goodwin Procter LLP, notes that the reasonableness of the inventor is an important factor in negotiating license agreements. Many inventors have completely unrealistic expectations regarding the value of their inventions. Further, the role of the inventor in contributing to improvements to the technology or in its commercialization is another issue that demands agreement between the inventor and licensee. In some cases, arrangements can be made to have the inventors (or key research people) transfer their employment from the licensor to the licensee.• Are you negotiating with people that have decision making authority? How high up in the organization has approval been obtained? How high up in the organization must approval be obtained? Is there an internal champion for your project? An interesting method of determining internal alignment is when senior members of the licensee sign off on a Memorandum of Understanding.• Are negotiations progressing smoothly? The answer to this question is a function of the stability of your negotiating partners, consistency of communication, agreeing on the majority of issues throughout the negotiations, and the potential for future deals between the two parties. Other important indicators are the licensee’s willingness to sign off on Non‐Disclosure Agreements, Memoranda of Understanding and Term Sheets.The above discussion obviously does not provide a comprehensive list of issues to take into account when estimating the likelihood of obtaining a license. However, considering the Expected Financial Returns (for the licensees), Deal Congruity, and Deal Momentum should provide a framework for making such assessments.David Wanetick is Managing Director of IncreMental Advantage, a valuation firm focusing on valuing emerging technologies and patents. He is the director of the Certified Patent Valuation Analyst program.。
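As flagged after the Expected Financial Returns formula above, the sketch below turns that component into numbers. The article defines the formula qualitatively and does not prescribe scales, weights, or a functional form for the overall Licensing Probability Model, so the 0-10 scoring scale and the logistic mapping from score to probability are assumptions made purely for illustration, not the author's method.

```python
# Hypothetical scoring sketch for the Expected Financial Returns component.
# Scales, weights and the logistic mapping are illustrative assumptions only.
import math
from dataclasses import dataclass

@dataclass
class ExpectedFinancialReturns:
    market_potential: float      # 0-10: strength of existing or likely demand
    strategic_motivation: float  # 0-10: how badly the licensee needs the technology
    upfront_costs: float         # 0-10: cost and time to deliver it to the market
    implementation_risk: float   # 0-10: stage of development, infringement exposure, etc.

    def score(self) -> float:
        """Expected Financial Returns = Market Potential + Strategic Motivation
        - Upfront Costs - Risks of Implementation (the article's formula)."""
        return (self.market_potential + self.strategic_motivation
                - self.upfront_costs - self.implementation_risk)

def licensing_probability(efr_score: float, midpoint: float = 5.0, slope: float = 0.5) -> float:
    """Map a combined score onto a 0-1 probability with a logistic curve.
    This mapping is NOT from the article; it is one simple way to obtain the
    probability that the valuation methods listed earlier require."""
    return 1.0 / (1.0 + math.exp(-slope * (efr_score - midpoint)))

efr = ExpectedFinancialReturns(market_potential=8, strategic_motivation=7,
                               upfront_costs=4, implementation_risk=3)
print(f"EFR score: {efr.score():.1f}, "
      f"illustrative licensing probability: {licensing_probability(efr.score()):.2f}")
```

In a fuller version of such a model, the Deal Congruity and Deal Momentum considerations discussed above would adjust this estimate up or down, and the calculation would be re-run as negotiations progress, consistent with the article's description of the LPM as a dynamic model.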
Assessing the use of composts from multiple sources based on the characteristics of carbon mineralization in soilXu Zhang a ,Yue Zhao a ,Longji Zhu a ,Hongyang Cui a ,Liming Jia b ,Xinyu Xie a ,Jiming Li b ,Zimin Wei a ,⇑a College of Life Science,Northeast Agricultural University,Harbin 150030,China bEnvironmental Monitoring Center of Heilongjiang Province,Chinaa r t i c l e i n f o Article history:Received 24March 2017Revised 24August 2017Accepted 24August 2017Available online xxxx Keywords:Organic waste compost Amended soilCarbon mineralization Mineralization kinetica b s t r a c tIn order to improve soil quality,reduce wastes and mitigate climate change,it is necessary to understand the balance between soil organic carbon (SOC)accumulation and depletion under different organic waste compost amended soils.The effects of proportion (5%,15%,30%),compost type (sewage sludge (SS),tomato stem waste (TSW),municipal solid waste (MSW),kitchen waste (KW),cabbage waste (CW),peat (P),chicken manure (CM),dairy cattle manure (DCM))and the black soil (CK).Their initial biochemical composition (carbon,nitrogen,C:N ratio)on carbon (C)mineralization in soil amended compost have been investigated.The CO 2-C production of different treatments were measured to indicate the levels of carbon (C)mineralization during 50d of laboratory incubation.And the one order E model (M1E )was used to quantify C mineralization kinetics.The results demonstrated that the respiration and C min-eralization of soil were promoted by amending composts.The C mineralization ability increased when the percentage of compost added to the soil also increased and affected by compost type in the order CM >KW,CW >SS,DCM,TSW >MSW,P >CK at the same amended level.Based on the values of C 0and k 1from M1E model,a management method in agronomic application of compost products to the pre-cise fertilization was proposed.The SS,DCM and TSW composts were more suitable in supplying fertilizer to the plant.Otherwise,The P and MSW composts can serve the purpose of long-term nutrient retention,whereas the CW and KW composts could be used as soil remediation agent.Ó2017Elsevier Ltd.All rights reserved.1.IntroductionDue to the pressure of intensive human activity,improper farm-ing and management practices pose serious threats to the sustain-ability of natural resources,resulting into challenges related to improper cultivation,over-cultivation,soil erosion and soil fertility decline.These factors are usually associated to one another,lead-ing to lower soil fertility and decrease food security,resulting to poverty,non-point source pollution and natural resources degra-dation trap.Meanwhile,the development of agriculture and the economy leads the production of a variety of organic solid wastes,which include animal manure,municipal waste,plant litter,and so on (Masunga et al.,2016).These organic solid wastes are abundant,spot focused,which can cause strong pollution and environmental problems without promptly controlled (Price et al.,2013;Wei et al.,2013;He et al.,2013).Therefore,the organic solid waste dis-posal through composting is very important,and the compost pro-duction can increase agricultural posting biodegradable parts of urban solid waste,agricultural waste,and recycling the compost can provide a wealth of nutrient resources,which can improve the soil as a soil organic amended for commer-cial agriculture (Gómez-Brandón et al.,2008).The volume of these wastes can be reduced by approximately 50%(Helfrich et al.,1998)and disposal cost can be 
recovered (Levy and Taylor,2003).How-ever,these organic solid waste compost are required to be ade-quately processed before applying to the soil.In addition,mature compost can kill the pathogenic microorganisms present in its compost,thereby avoiding a number of harmful effects on the soil properties,so the mature compost with a well high-temperature stage is eco-friendly (Senesi and Brunetti,1996;Bernal et al.,1998;Yang et al.,2013;He et al.,2014).In addition,investigation on the decomposition of the organic materials added to the soil is an important analysis of the organic matter balance in the soil.Also it is essential to understand how to improve soil fertility using the relative value of different materials (Saviozzi et al.,1993).The decomposition process,called mineral-ization,expresses a decrease in organic matter content and an increase in available minerals which are previously immobilized in organic form (Masunga et al.,2016).The mineralization of/10.1016/j.wasman.2017.08.0500956-053X/Ó2017Elsevier Ltd.All rights reserved.⇑Corresponding author.E-mail address:weizimin@ (Z.Wei).Waste Management xxx (2017)xxx–xxxContents lists available at ScienceDirectWaste Managementjournal homepage:www.els ev i e r.c o m /l o ca t e /w a s manorganic matter in a soil depends on two major factors,the soil type and the quantity of organic matter incorporated (Levi-Minzi et al.,1990).To carbon mineralization (C mineralization),the rate at which the soil organic matter is degraded to CO 2depends mainly on the nature of its interactions with the soil matrix.In incubation experiments,the rate of the mineralization of soil organic matter and the amount of mineralizable C were found to be influenced by the characteristic of additional materials (Masunga et al.,2016),amount and quality of soil C pool (Riffaldi et al.,1996;Ahn et al.,2009),exchange capacity and microbial activity (Riffaldi et al.,1996).In the other hand,Aslam et al.(2008)and Masunga et al.(2016)propose to use kinetic models for providing a mathematical sup-port of the dynamics of C mineralization in incubation,which can be used to predict the ability of soils in supplying potentially mineralizable organic carbon,and organic matter balance.Utilizing C mineralization models based on CO 2respiration rate stability,the C mineralization can be predicted in compost-amended soil.Some investigators have compared a few different kinetic models to describe the observed C mineralization in compost-amended soil,and some studies focus on the C mineralization ability from several (single)compost-amended soils (Aslam et al.,2008;Masunga et al.,2016).Currently,there is no systematic study about the effects of different mineralization models on the assessment of organic carbon.In this paper,we selected eight typical organic waste composts which can reach the same degree of maturity.The compost sam-ples were mixed with the soil at 5%,15%and 30%field applications.A short-term laboratory incubation experiment was performed,to evaluate the effects of different organic waste compost on the C mineralization of a black soil.We used the first-order E model to describe the influences of different organic waste compost-amended soils on C mineralization (Aslam et al.,2008).The pur-pose of the laboratory aerobic incubation experiments was to illus-trate the kinetic differences of C mineralization and to evaluate the organic carbon mineralization rate of different compost-amended soils.The ability to form organic C mineralization from organic waste compost-amended 
soil provided insight into the predictions of the soil’s behavior in the natural environments.We studied the dynamic characteristics of CO 2release and the carbon stock stabil-ity in the organic solid waste compost-amended soil,scientifically.It is important to sequestrate carbon and nutrient,thereby,pro-moting the total greenhouse gas (GHG)mitigation in organic waste compost-amended soil.2.Materials and methods2.1.Soil and composting sample propertiesThe black soil (CK)used in this study was collected from the upper 15cm of an arable field located at Xiangfang Experimental Farm (Heilongjiang Province,China 45°4402300N,126°4301600E).Soil samples were air dried at 25°C,homogenized,manually ground with hands and passed through a 2mm mesh sieve.The soil was stored in a refrigerator at 4°C until they were used.Eight trapezoidal piles were prepared by Shanghai Songjiang Composting Plant,using dairy cattle manure (DCM),kitchen waste (KW),cabbage waste (CW),Tomato stem waste (TSW),municipal solid waste (MSW),sewage sludge (SS),chicken manure (CM),and peat (P).Each composting plie was conducted with a single material.Each composting pile contained approximately 2t of raw material (1.5m high with a 2Â3m base).The piles were turned over by forklift,when necessary,in order to improve the fermentation posting was considered finished when the temperature of the pile became stable and germination indexapproached 80%,then,approximately 2kg of samples were col-lected from the mature composts,and stored at 4°C for analysis (Wei et al.,2013).Water holding capacity was measured by satu-rating the soil/compost,gravity draining for 24h,and then deter-mining the moisture content of the saturated media by oven drying at 105°C.Prior to these experiments,soils and composts were all wet to 60%of their water holding capacities.The eight compost samples were air-dried at room temperature and filtered using sieves (<2mm mesh).2.2.Mineralization experimentThe compost sample was mixed thoroughly with soil and a con-trolled moisture level was maintained at 60%of the maximum water holding capacity (WHC)with distilled water.The samples were aerobically incubated at 25°C for 50days.The compost sam-ples were mixed with the soil at 0%,5%,15%,30%compost (v/v),where 0%was the CK.In total 1kg soil-compost mixture was placed into a sealed cylindrical plastic pot (5.15cm internal diam-eter and 18cm length).All treatments were prepared in triplicate.In addition,the plastic pots were modified to accommodate two polyethylene fittings to supply and exhaust air;the samples were aerated continuously with humidified air to avoid oxygen limita-tions (Aslam et al.,2008).The carbon dioxide concentration was measured on the influent and effluent air of the reactors every 12h using an infrared CO 2sensor (GXH-3010E1)(VanderGheynst et al.,2002).The CO 2evolution rate was calculated,and the data of percentage of carbon evolved after 50d was based on the cumu-lative release of CO 2and the initial value of TOC.In addition,Total organic carbon (TOC)was measured based on potassium dichro-mate oxidation (Nelson and Sommers,1982)and Total Nitrogen (TN)was measured by a modified Kjeldahl method (Bremmer and Sulvaney,1982).2.3.Mineralization kinetic modelsWe selected mineralization kinetic models which has biological significance and could be written in a discrete form with parame-ters having biological significance.The one order E kinetic (M1E )model was used in this study using the cumulative CO 2-C evolve during the period of incubation.The M1E 
model was originally proposed for N mineralization by Jones (1984). The model was already found useful in describing the decomposition process in soil amended with different types of organic materials (Saviozzi et al., 1993). The M1E model, another variation of the first-order model, is given by the equation:

C(t) = C0 {1 − exp(−k1 · t)} + C1    (1)

where C1 (g CO2-C kg−1) represents a separate pool of easily decomposable substrate that produces a mineralization flush during the first incubation interval, C0 is the size of the active carbon fraction (g CO2-C kg−1), and k1 is the mineralization rate constant for the decomposition of the active carbon fraction (day−1).

2.4. Statistical analyses

The cumulative data of mineralized carbon were fitted to the mathematical equation by the non-linear least-squares curve-fitting technique (Marquardt-Levenberg algorithm) using MATLAB R2010b software. The analytical data were subjected to one-way ANOVA testing using Statistic 8.0 software.

3. Results

3.1. Chemical composition of compost-amended soils

Total organic carbon (TOC), total nitrogen (TN), and the C:N ratio differed significantly among the compost-amended soils. A significant (P < 0.001) interaction exists between compost type and amendment level for the initial chemical composition of the compost-amended soil, which included total organic carbon (TOC), total nitrogen (TN), and the C:N ratio (Table 1). The TOC concentration ranged from 40.57 g kg−1 in 5% MSW + soil to 99.42 g kg−1 in 30% KW + soil. The magnitude of TOC followed the order 30% > 15% > 5% > CK (P < 0.01). The mean TOC values, obtained by averaging the TOC values of the three different compost-amended levels, were in the order KW + soil (11.60%) > CM + soil (5.50%) > CW + soil (2.58%) > P + soil (5.50%) > SS + soil (5.50%) > TSW + soil (5.50%) > MSW + soil (5.50%). Strong differences were similarly (P < 0.001) observed for total nitrogen (TN) concentration in composts at the same amendment level, being largest in 30% CM + soil (7.15 g kg−1) and smallest in 5% DCM + soil (1.75 g kg−1).

3.2. CO2 emissions

During the 50 days of incubation, the daily release of CO2-C (Fig. 1) was larger for the soils amended with composts than for CK. The addition of compost to the soil had significant effects on C mineralization during the incubations. The magnitude of CO2 emissions followed the order 30% > 15% > 5% > CK (P < 0.01). In the CK treatment, CO2 emissions decreased monotonically throughout the entire incubation. The added compost caused a short lag phase (1–2 d) in all compost amendments except CK; thereafter, a CO2 peak occurred between 3 and 6 days. CO2-C emissions increased remarkably immediately after compost amendment and peaked within the first few days, increasing significantly (P < 0.001) with compost amendment level. After the rate of CO2 emission in the different treatments reached its highest value, the CO2 emissions started to drop and then plateaued for approximately 25–30 days before decreasing to the base level. After 50 days, the cumulative CO2-C evolution ranged from 11.66 g CO2-C kg−1 soil (MSW) to 23.01 g CO2-C kg−1 soil (KW), from 18.2 g CO2-C kg−1 soil (MSW) to 36.2 g CO2-C kg−1 soil (KW), and from 30.85 g CO2-C kg−1 soil (MSW) to 56.94 g CO2-C kg−1 soil (KW) at the 5%, 15%, and 30% amendment levels, respectively, whereas the CK treatment produced only 4.11 g CO2-C kg−1 soil (Fig. S1 and Table 2). The cumulative CO2 emissions followed the order KW, CM > CW, SS, DCM, TSW, P > MSW compost treatments at the same amendment levels (P < 0.05; Fig. 2). Increasing the 
organic waste compost amended level from 5%to 30%resulted in linear increases of cumulative CO 2emissions in soils (P <0.01;Fig.3).According to the slopes of the regression lines,the KW compost amended soil evolved higher amount of CO 2-C than other treatments from all amended levels by the end of the incubation period.3.3.Mineralization dynamicsThe M1E model was then fitted on this estimate and used to describe mineralization kinetics.The C mineralization kinetics was well described by the M1E model which assumes that compost was partitioned into two pools;a defined number of C pools min-eralized (C 0)and a pool of easily decomposable substrate,produc-ing a mineralization flush during the first incubation interval (C 1)(Pascual et al.,1998).The parameter estimates,the standard error of the estimates,and the coefficient of determination (R 2)are pre-sented in Table 2.During the 50d of incubation,the added organic carbon was released as CO 2with 46.2%in KW compost amended,and the mean value was 40.32%in CM and CW compost amended,while the mean value was 39.64%and 35.75%in SS,DCM,TSW and P,MSW compost-amended,respectively,but only 19.64%of the soil TOC evolved as CO 2in CK (Table 2).The C 0was significantly differentTable 1Chemical composition of compost-amended soils (%dry wt.)TOC (g kg À1)TN (g kg À1)C:N ratio 5%KW +soil 64.4a 2.30b 27.99a CM +soil 59.0b 2.79a 21.15e CW +soil 52.2bc 1.91e 27.32b SS +soil 48.9d 2.02c 24.21d DCM +soil 46.5de 1.75de 26.57c TSW +soil 45.5e 1.81d 25.15d P +soil 43.1ef 2.02c 21.32e MSW +soil 40.6f 2.15bc 18.78f 15%KW +soil 79.4a 3.78b 21cCM +soil 74.9b 4.13a 18.15e CW +soil 69.1c 2.64f 26.19a SS +soil 65.8d 3.48c 18.92d DCM +soil 64.0d 2.95e 21.76b TSW +soil 60.1d 3.11de 19.3d P +soil 58.1e 3.22d 18.06e MSW +soil 54.2f 3.53c 15.36f 30%KW +soil 99.4a 6.31b 15.6b CM +soil 93.7b 7.15a 13.11de CW +soil 86.8c 5.06f 17.15a SS +soil 82.7d 5.88c 14.06cd DCM +soil 80.8de 5.31e 15.22bc TSW +soil 80.2de 5.56d 14.43c P +soil 78.7e 5.71cd 13.78d MSW +soil74.1f 6.10bc 12.14e CK20.40.9421.67Values in a column sharing the same letter are not significantly different (P <0.05).Sewage sludge (SS),tomato stem waste (TSW),municipal solid waste (MSW),kitchen waste (KW),cabbage waste (CW),peat (P),chicken manure (CM),dairy cattle manure (DCM))and the black soil (CK).X.Zhang et al./Waste Management xxx (2017)xxx–xxx3for samples with the same amount of eight different types of amended composts.The mean value of C 0across amended levels was in the order of KW >CM,CW >SS,DCM,TSW >P,MSW >CK.The C 0ranged between 14.55g kg À1and 68.53g kg À1(F =286.17,P <0.001),and the range of the C mineralization constant rate(k 1)ranged between 0.0589g kg À1day À1and 0.0495mg kg À1day À1(F =58.23,p <0.001)(Table 2).The k 1of composts decreased with an increase in compost amended levels,expect CM and CW composts treatments.The KW had lowest k 1values in all the com-post materials amendments (Table 2).More than 50%oftheFig.1.Daily C released as CO 2from soil amended with eight types of typical organic waste composts:sewage sludge (SS),tomato stem waste (TSW),municipal solid waste (MSW),kitchen waste (KW),cabbage waste (CW),peat (PE),chicken manure (CM),dairy cattle manure (DCM)and unamended soil (CK)during 50days incubation.Table 2C mineralization kinetics in compost amended soils from the three amended levels after 50days.Cumulative CO 2-C(g C kg À1soil)(after 50days)Percentage of C evolved after 50days (%)C 0k 1C 1R 25%SS +soil 15.64d 31.9918.47±0.280.088±0.0035À1.83±0.280.9881TSW +soil 13.6d e 
29.8816.93±0.360.0801±0.0046À2.05±0.350.9775MSW +soil 11.65f 28.7314.55±0.340.0848±0.0053À1.88±0.340.9731KW +soil 23.01a 35.7328.04±0.50.0808±0.0039À3.11±0.490.9842CW +soil 17.05bc 32.6521.48±0.50.0705±0.0046À2.37±0.450.9751P +soil 12.45e 28.9215.1±0.290.0817±0.0043À1.57±0.290.981CM +soil 20.26b 34.3424.23±0.420.0934±0.0041À2.66±0.420.9853DCM +soil 13.92de 29.9516.04±0.230.0889±0.0034À1.28±0.230.989215%SS +soil 26.67d 40.5232.66±0.570.0754±0.0037À3.4±0.540.985TSW +soil 22.73e 37.8828.2±0.530.0743±0.0039À2.96±0.490.9832MSW +soil 18.21f 33.5722.02±0.350.0714±0.0032À2.07±0.310.9881KW +soil 36.2a 45.6043.52±0.630.074±0.003À4.16±0.590.9897CW +soil 27.89bc 40.3636.56±1.020.0589±0.0046À3.88±0.770.971P +soil 21.18ef 36.4325.9±0.470.0697±0.0036À2.29±0.420.9844CM +soil 32.17b 42.9338.4±0.550.0866±0.0033À3.98±0.560.9895DCM +soil 25.07de 39.2029.78±0.460.0718±0.0031À2.03±0.420.988430%SS +soil 41.7d 50.4352.47±1.110.0695±0.0041À5.56±0.980.9797TSW +soil 38.03de 47.4047.21±0.890.0738±0.0038À5.24±0.820.9832MSW +soil 30.84f 41.6438.88±0.830.06±0.004À3.99±0.70.9803KW +soil 56.93a 57.2768.53±1.010.0737±0.0031À6.93±0.950.9894CW +soil 44.74bc 51.5656.76±1.390.0627±0.0043À5.9±1.120.9756P +soil 35.61e 45.2542.15±0.710.0693±0.0033À2.79±0.620.9871CM +soil 52.87b 56.4161.87±0.860.0957±0.0033À6.35±0.880.9905DCM +soil 40.07de 49.5945.69±0.590.0713±0.0026À2.37±0.540.9921SoilCK4.1119.646.91±0.10.0733±0.0029À0.6±0.090.99Values in a column sharing the same letter are not significantly different (P <0.05).4X.Zhang et al./Waste Management xxx (2017)xxx–xxxorganic carbon from the compost treatments was mineralized after 50days.Hsieh et al.(1981)evaluated an activated sludge in soil,where approximately 26%of the organic carbon was mineralized.In this study,the C 1values were negative in the fitting of all treat-ments (C 1:KW >CM,CW >SS,TSW >P,MSW >DCM compost).These compost quality differences between compost types within and among amended levels would further be expected to impact on their subsequent soil organic carbon mineralization.3.4.Relationship between chemical composition of compost amended soil and carbon mineralization kineticsThe results on the relationships among chemical composition of compost amended and C mineralization kinetic parameters are presented in Fig.3.Across all compost types from all amended levels,k 1was not significantly correlated with any of the compost chemical compositional constituents that were measured.In addi-tion,highly significant positive correlations were also observed between C 0and TOC (r =0.986,p =0.000),C m (r =0.996,p =0.000);C 1and C:N ratio (r =0.497,p =0.013);TOC and C m (r =0.987,p =0.000).Highly significant negative correlations between C 0and C 1(r =À0.925,p =0.000),C:N ratio (r =À0.603,p =0.002);C 1and TOC (r =À0.888,p =0.000),C m (r =À0.902,p =0.000);C m and C:N ratio (r =À0.612,p =0.001).4.Discussion4.1.The effects of compost material on carbon mineralization in soil The differences in initial biochemical composition of compost products indicate significant quality differences between compost types among the amended soil levels.The biochemical differences in composting products would have impacts on subsequent C min-eralization of soil amended with composts (Masunga et al.,2016).In this study,KW treatments had significantly higher initial TOC concentration than other composts,while the lowest TOC concen-tration was in MSW treatments in same amended level (F =132.45,P <0.001).Noirot-Cosson et al.(2016)found that the MSW com-post generated the lowest C storage over the 
4. Discussion

4.1. The effects of compost material on carbon mineralization in soil

The differences in the initial biochemical composition of the compost products indicate significant quality differences between compost types among the amended soils. These biochemical differences would be expected to affect the subsequent C mineralization of soil amended with the composts (Masunga et al., 2016). In this study, the KW treatments had a significantly higher initial TOC concentration than the other composts, while the lowest TOC concentration occurred in the MSW treatments at the same amendment level (F = 132.45, P < 0.001). Noirot-Cosson et al. (2016) found that MSW compost generated the lowest C storage over a 13-year period, with 36% of the applied exogenous organic carbon (EOC) incorporated into the SOC. Moreover, after 50 days the cumulative CO₂ emissions followed the order KW, CM > CW, SS, DCM, TSW > P, MSW at the same amendment level. Therefore, the initial TOC concentration of the compost products exerted an important control on decomposition: composts with higher organic carbon contents supplied more readily degradable substrate for SOC mineralization.

When composts are added to the soil, microorganisms can use the available organic compounds to synthesize enzymes, stimulating the degradation of native soil organic matter; this phenomenon is the priming effect (Kuzyakov and Bol, 2000; Blagodatskaya and Kuzyakov, 2008). The KW, CM and CW compost materials include labile forms of organic carbon, such as crude protein, crude cellulose, amino acids, organic acids and lipids, and soil microbes can respond first and rapidly consume these labile constituents of the compost. The relatively more recalcitrant P and MSW composts contain a large amount of lignin. Lignin contributes to the recalcitrance of these materials by physically and chemically protecting the more easily degradable structural polysaccharides (cellulose) from microbial degradation (Lorenz et al., 2005; Creamer et al., 2013). In addition, increasing the amendment level of the eight composts from 5% to 30% resulted in linear increases in cumulative CO₂ emissions (P < 0.01), which is consistent with studies of other litter materials (Hu et al., 2016; Xiao et al., 2015). These findings are relevant for understanding the balance between C input and output in soils.

This study showed a negative relationship between Cm and the C:N ratio (r = −0.612). Copiotrophic microorganisms, which have high N requirements and preferentially utilize labile organic carbon, are thought to be more sensitive to N limitation than the oligotrophic decomposers of recalcitrant carbon (Zechmeister-Boltenstern et al., 2015). This suggests that C:N stoichiometry is a limiting factor for copiotrophic microbial activity; therefore, compost with a low C:N ratio will be more favorable to carbon decomposition than compost with a higher C:N ratio.

Fig. 2. Cumulative CO₂ emissions as a function of fine compost input levels in soils.

Fig. 3. Relationships among cumulative CO₂-C emissions (Cm), original total organic carbon (TOC), mineralization rate constants (k₁), potentially mineralizable organic carbon (C₀), easily decomposable substrate (C₁) and the C:N ratio. Circle areas represent 95% confidence intervals.

Unlike previous studies that reported CO₂ peaks from the incipient stage in soils amended with composts (Hu et al., 2016), in this study the CO₂ emission showed a short lag phase (1–2 d) before the emergence of CO₂ peaks in all compost treatments, followed by a rapid increase. The lag phase is relevant to CO₂ emissions from compost-amended soil, reflecting the growth of microorganisms in response to the carbon substrate, nutrient availability and favorable conditions of humidity and temperature (Tester and Parr, 1983; Case et al., 2012). In the M1E model, C₁ describes a pool of easily decomposable substrate producing a mineralization flush during the first incubation interval; here the fitted C₁ values were negative for all samples, revealing the existence of a lag phase in the mineralization process (Pascual et al., 1998). This lag phase is probably due to the presence of some substances in these types of compost material that cause an initial but short-lived inhibition of microbial activity (Tester and Parr, 1983; Saha et al., 2016). The results also showed that C₁ was significantly and negatively correlated with TOC and positively correlated with the C:N ratio. Therefore, the N supply is a limiting factor for copiotrophic microbial activity, and compost products with a low C:N ratio are more favorable with respect to C₁. In addition, the magnitude of C₁ increased with the amendment level, since the amount of inhibitory substances added was correspondingly higher. Moreover, the largest (most negative) C₁ values occurred in the KW treatment (−3.11, −4.16 and −6.93 g CO₂-C kg⁻¹ sample at the 5%, 15% and 30% amendment levels, respectively), probably due to the presence of salts in this type of material, which would cause an initial but short-lived inhibition of microbial activity (Pascual et al., 1998).
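A quick consistency check on the 1–2 d lag phase can be made from the fitted parameters, again assuming the M1E form Cm(t) = C₀(1 − exp(−k₁t)) + C₁ used in the sketch above: the fitted curve crosses zero net CO₂-C at t = −ln(1 + C₁/C₀)/k₁.

    import numpy as np

    # Parameter estimates for the 5% KW treatment from Table 2 (assumed M1E form).
    c0, k1, c1 = 28.04, 0.0808, -3.11

    t_lag = -np.log(1.0 + c1 / c0) / k1   # time at which C0*(1 - exp(-k1*t)) + C1 = 0
    print(f"implied lag ~ {t_lag:.1f} d")  # ~1.5 d, consistent with the observed 1-2 d lag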
4.2. Roles of C₀ and k₁ in compost-amended soil carbon mineralization

In this study, the M1E model gave the best description of the cumulative CO₂-C data for all of the organic compost treatments. The fastest C mineralization rate was observed in CM (k₁ = 0.0934, 0.0866 and 0.0957 day⁻¹ in the 5%, 15% and 30% treatments, respectively), while the C mineralization rate in CW (k₁ = 0.0705, 0.0589 and 0.0627 day⁻¹ in the 5%, 15% and 30% treatments, respectively) was the slowest. In addition, the k₁ of the composts decreased with increasing amendment level, except for the CM and CW treatments. No significant correlations were found between k₁ and either C₀ or C₁, indicating that differences in k₁ values among the soils cannot be attributed to differences in the relative sizes of the carbon pools. The same finding was reported by Riffaldi et al. (1996) in a study on C mineralization in some representative agricultural soils.

The C mineralization dynamics of the soils amended with the eight different sources of organic waste compost were affected by a combination of parameters, including TOC, C:N ratio, C₀, k₁ and C₁, and not by any single parameter alone. Therefore, an integrated tool utilizing multiple parameters is necessary. In this study, hierarchical cluster analysis (HCA) was performed on the data (TOC, C:N ratio, C₀, k₁ and C₁) presented in the results section. The dendrogram in Fig. 4 provides a comparison of the experimental composts in terms of their mineralization dynamics in the soils; in general, the lower the rescaled distance between samples, the more similar their mineralization (Zbytniewski and Buszewski, 2005). On this basis, the mineralization dynamics of the different composts were clustered into four groups: the first group consisting of SS, TSW and DCM, the second of P and MSW, the third of KW and CW, and the fourth of CM.
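A minimal sketch of such a clustering is given below. The text does not state the distance metric or linkage used for Fig. 4, so Ward linkage on z-scored features is assumed here, and only the 5% amendment level values from Tables 1 and 2 are used as input; the resulting grouping is therefore illustrative rather than a reproduction of Fig. 4.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    labels = ["SS", "TSW", "MSW", "KW", "CW", "P", "CM", "DCM"]
    # Feature matrix (5% amendment level): TOC, C:N from Table 1 and C0, k1, C1 from Table 2.
    X = np.array([
        [48.9, 24.21, 18.47, 0.0880, -1.83],
        [45.5, 25.15, 16.93, 0.0801, -2.05],
        [40.6, 18.78, 14.55, 0.0848, -1.88],
        [64.4, 27.99, 28.04, 0.0808, -3.11],
        [52.2, 27.32, 21.48, 0.0705, -2.37],
        [43.1, 21.32, 15.10, 0.0817, -1.57],
        [59.0, 21.15, 24.23, 0.0934, -2.66],
        [46.5, 26.57, 16.04, 0.0889, -1.28],
    ])
    Z = linkage((X - X.mean(0)) / X.std(0), method="ward")   # assumed Ward linkage
    groups = fcluster(Z, t=4, criterion="maxclust")          # cut into four groups
    for name, g in zip(labels, groups):
        print(name, g)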
Carbon sequestration helps the soil retain nutrients, but it may not achieve the effect of supplying fertilizer; its main role is therefore soil remediation, promoting the formation of soil aggregate structure and improving soil productivity. The functions of organic compost products include nutrient retention and fertilizer supply. Nutrient retention refers to the soil's capacity to take up and store nutrients; fertilizer supply refers to the soil's capacity to release nutrients and supply them to crops. In general, a soil with strong nutrient retention also has a strong fertilizer supply ability, but a strong fertilizer supply ability does not imply strong nutrient retention. The combined nutrient retention and fertilizer supply abilities are important criteria for evaluating a soil, and compost applications should be differentiated according to them.

In this study, we make the following recommendations. The first group, the TSW, DCM and SS composts, has moderate C₀ but high k₁; as a result, these composts are rapidly decomposed by active microorganisms in the early stage of mineralization, which is beneficial for nutrient release in the soil and for plant nutrient absorption. Riffaldi et al. (1993) reported that fatty substances from sludge added to soil disappeared within 19 days of incubation. These composts are therefore more beneficial for plant growth and have short-term nutrient-releasing properties.

The second group, the P and MSW composts, shows the lowest C₀ and k₁. These two composts can be regarded as nutrient-retention materials; when they are applied to the soil, the application rate may need to be increased to maintain soil fertility. Noirot-Cosson et al. (2016) found that MSW compost generated the lowest C storage over a 13-year period, with 36% of the exogenous organic carbon (EOC) incorporated into the soil organic carbon.

The third group, the CW and KW composts, has a high C₀ but a very low k₁. These two composts can be used for long-term nutrient retention, because they can maintain soil fertility while decreasing the rate of carbon dioxide release during soil mineralization. Furthermore, when these composts are applied to the soil, such high-organic-matter composting products can be regarded as soil remediation agents that improve soil aggregate structure, increase permeability, relieve soil compaction, and improve poor soil fertility.

Last, the CM compost has the highest C₀ and k₁, so it is suitable for supplying fertilizer, but it needs to be used with caution. The application rates need to be well controlled by using the maximum

Fig. 4. HCA of the mineralization rate constants (k₁), potentially mineralizable organic carbon (C₀) and mineralization flush (C₁) from soil amended with eight types of organic waste composts.
ASSESSING THE STARBURST CONTRIBUTION TO THE γ-RAY & NEUTRINO BACKGROUNDS

Todd A. Thompson,1,2 Eliot Quataert,3 Eli Waxman,4 & Abraham Loeb5

Draft version February 5, 2008 (arXiv:astro-ph/0608699v1, 31 Aug 2006)

ABSTRACT
If cosmic ray protons interact with gas at roughly the mean density of the interstellar medium in starburst galaxies, then pion decay in starbursts is likely to contribute significantly to the diffuse extra-galactic background in both γ-rays and high energy neutrinos. We describe the assumptions that lead to this conclusion and clarify the difference between our estimates and those of Stecker (2006). Detection of a single starburst by GLAST would confirm the significant contribution of starburst galaxies to the extra-galactic neutrino and γ-ray backgrounds.

Subject headings: galaxies: starburst — gamma rays: theory, observations — cosmology: diffuse radiation — ISM: cosmic rays — radiation mechanisms: non-thermal

1 Department of Astrophysical Sciences, Peyton Hall-Ivy Lane, Princeton University, Princeton, NJ 08544
2 Lyman Spitzer Jr. Fellow
3 Astronomy Department & Theoretical Astrophysics Center, 601 Campbell Hall, The University of California, Berkeley, CA 94720
4 Physics Faculty, Weizmann Institute of Science, Rehovot 76100, Israel; waxman@wicc.weizmann.ac.il
5 Astronomy Department, Harvard University, 60 Garden Street, Cambridge, MA 02138

1. INTRODUCTION & PROTON CALORIMETRY

Inelastic proton-proton collisions between cosmic rays and ambient nuclei produce charged and neutral pions, which subsequently decay into electrons, positrons, neutrinos, and γ-rays. Neutral pion decay accounts for most of the γ-ray emission from the Milky Way at GeV energies (e.g., Strong et al. 2004). The contribution of pion decay to high energy emission from other galaxies — and thus also to the extra-galactic neutrino and γ-ray backgrounds — remains uncertain, however, because of the current lack of strong observational constraints. For example, aside from the LMC (Sreekumar et al. 1992), EGRET did not detect γ-ray emission from star formation in any other galaxy. In two recent papers (Loeb & Waxman 2006; Thompson, Quataert, & Waxman 2006; hereafter LW and TQW, respectively), we have estimated that starburst galaxies contribute significantly to the extra-galactic neutrino and γ-ray backgrounds, with most of the contribution arising from starburst galaxies at z ∼ 1. TQW also provide detailed predictions for the γ-ray fluxes from nearby starbursts, several of which should be detectable with GLAST.

The timescale for cosmic ray protons to lose energy via p-p collisions with gas of number density n cm⁻³ is τpp ≈ 7×10⁷/n yr. The importance of pion losses depends on the ratio of τpp to the cosmic-ray proton escape timescale, τesc. The escape timescale for cosmic rays is quite uncertain, but in starbursts it is probably set by advection in a galactic superwind, rather than diffusion, as in the Galaxy. If correct, the escape timescale from a starburst with a wind of velocity V ≈ 300 km s⁻¹ and a gas scale height of h ≈ 100 pc is of order τesc ≈ h/V ≈ 3×10⁵ yr. This implies that if cosmic rays interact with gas of mean density n ≳ 100 cm⁻³, then τpp ≲ τesc and we expect that essentially all cosmic ray protons are converted into charged and neutral pions and their secondaries before escaping. In this limit, the galaxy is said to be a "cosmic ray proton calorimeter." If escape is instead due to diffusion rather than advection, then the density threshold required for a galaxy to be a proton calorimeter is likely to be significantly lower.

In the Milky Way, cosmic-ray protons of a few GeV lose only ≈ 10% of their energy to pion production before escaping the galaxy. For this reason, LW and TQW focused on the high energy emission from luminous starbursts. These galaxies have much higher gas densities and likely dominate the contribution of star-forming galaxies to the total γ-ray and neutrino backgrounds. For reference, we note that the local starbursts NGC 253 and M 82 have n ≳ 400 cm⁻³, and the nuclei of Arp 220 have gas densities of order 10⁴ cm⁻³, well in excess of that required by the above arguments for τpp < τesc.
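A minimal sketch of this timescale comparison, using only the scalings quoted above (τpp ≈ 7×10⁷/n yr and τesc ≈ h/V with h ≈ 100 pc and V ≈ 300 km s⁻¹); the densities looped over are the reference values mentioned in the text.

    import numpy as np

    YR_S = 3.156e7          # seconds per year
    PC_CM = 3.086e18        # centimetres per parsec

    def tau_pp_yr(n_cm3):
        # p-p (pion production) energy-loss time quoted in the text: ~7e7 / n yr
        return 7e7 / n_cm3

    def tau_esc_yr(h_pc=100.0, v_kms=300.0):
        # advective escape time in a starburst superwind, tau_esc ~ h / V
        return (h_pc * PC_CM) / (v_kms * 1e5) / YR_S

    for n in (1.0, 100.0, 400.0, 1e4):   # Milky Way-like ISM up to Arp 220 nuclei
        print(f"n = {n:8.0f} cm^-3: tau_pp = {tau_pp_yr(n):.2e} yr, "
              f"tau_esc = {tau_esc_yr():.2e} yr")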
Since the fraction of the proton energy supplied to secondary electrons, positrons, neutrinos and γ-rays is determined by the microphysics of the Standard Model, the γ-ray and neutrino fluxes from starbursts depend primarily on the astrophysical properties of star-forming galaxies via

1. the fraction of a SN's canonical 10⁵¹ ergs of energy supplied to cosmic ray protons, η,
2. the spectral index p of the injected cosmic ray proton distribution,
3. the evolution of the comoving star formation rate density of the universe, ρ̇⋆(z) (M⊙ yr⁻¹ Mpc⁻³),
4. and the fraction of all star formation that occurs in the proton calorimeter limit as a function of cosmic time, f(z).

For any η and p, the γ-ray and neutrino emission from a given proton calorimeter can be readily calculated, while the cumulative extra-galactic backgrounds also depend on the star formation history of the universe via ρ̇⋆(z) and f(z).

As TQW and LW show, in the proton calorimeter limit the total γ-ray and neutrino luminosities for an individual starburst can be related to L_TIR, the galaxy's total IR luminosity, by

Lγ ≈ (2/3) Lν ≈ 1.5×10⁻⁴ η0.05 L_TIR,   (1)

where η0.05 = η/0.05 (i.e., 5×10⁴⁹ η0.05 ergs per supernova is supplied to cosmic ray protons). η is constrained by the requirement that the radio emission from secondary electrons/positrons in starbursts be consistent with the observed FIR-radio correlation (see, e.g., Yun et al. 2001). For proton calorimeters, η ≳ 0.05 is only possible if ionization, bremsstrahlung, and/or inverse-Compton losses dominate synchrotron losses for secondary electrons and positrons in starbursts (see TQW; Thompson et al. 2006).

2. THE STARBURST BACKGROUND

Because of the predicted one-to-one correspondence between the neutrino, γ-ray, and radio emission of starbursts and their IR emission, the diffuse background in γ-rays or neutrinos can be normalized to the TIR background, F_TIR. Figure 1 shows a number of models for ρ̇⋆(z) drawn from the literature, based on observations of star-forming galaxies at different redshifts. The integrated background from star formation is then simply

F_TIR = (c/4π) ∫₀^∞ ε ρ̇⋆(z) / [(1+z)² E(z)] dz/H₀,   (2)

where ε is the IR energy radiated per unit mass of star formation and E(z) = [Ωm(1+z)³ + ΩΛ]^(1/2). The cumulative fraction of F_TIR contributed above redshift z is

ζ(z) = [∫_z^∞ dz′ ρ̇⋆(z′)/((1+z′)² E(z′))] / [∫₀^∞ dz ρ̇⋆(z)/((1+z)² E(z))].   (3)

FIG. 1.— The comoving star formation rate density ρ̇⋆(z) for a number of models and fits to existing data: RSF1 (dashed black), RSF2 (solid black), and RSF3 (dotted black) are from Porciani & Madau (2001); the "Fossil" model (solid red line) is a simple broken power-law fit to Nagamine et al. (2006) (their Fig. 5); the "GALEX" model (solid blue line) is the dust-corrected ρ̇⋆(z), adjusted to the same assumed IMF as the other models presented here, from Figure 5 of Schiminovich et al. (2005) (ρ̇⋆(z) assumed constant for z > 3); the "H&S" model (solid green line) is an analytic function from Hernquist & Springel (2003). The total TIR background for these models as computed using eq. (2) is F_TIR = 19.7, 20.4, 23.0, 15.9, 19.5, and 17.3 nW m⁻² sr⁻¹ for the RSF1, RSF2, RSF3, Fossil, GALEX, and H&S models, respectively. These numbers for F_TIR can be compared with observational constraints that indicate F_TIR ∼ 24−27.5 nW m⁻² sr⁻¹ (Dole et al. 2006). Taking f(z) = min(0.9z + 0.1, 1), f_cal = 0.79, 0.81, 0.81, 0.76, 0.75, and 0.73 for these same models. Using these values of F_TIR and f_cal, the total γ-ray and neutrino backgrounds can be calculated from eq. (1) as in eq. (4). Figure 2 shows the cumulative fraction of F_TIR as a function of redshift, ζ(z), for each of these models (eq. 3).

Given the strong constraint on η implied by the FIR-radio correlation and the reasonable convergence of models and data on the TIR background (F_TIR,20 ≡ F_TIR/20 nW m⁻² sr⁻¹ ≈ 1; Fig. 1), the estimated γ-ray and neutrino backgrounds depend primarily on the fraction of star formation that occurs in the proton calorimeter limit: f(z) and f_cal. Locally, we estimate f(z ≈ 0) ∼ 0.1 from the fraction of the local FIR and radio luminosity density produced by starbursts (e.g., Yun et al. 2001). However, at high redshift, a much larger fraction of star formation occurs in high surface density systems that are likely to be proton calorimeters. For example, the strong evolution of the IR luminosity function with redshift implies that f(z) increases dramatically from z ≈ 0 to z ≈ 1 (e.g., Dole et al. 2006). If we take f(z ≳ 1) ∼ 1 and ρ̇⋆(z) as in any of the models of Figure 1, we find that f_cal ∼ 1, with a significant contribution to the γ-ray and neutrino backgrounds coming from z ≈ 1−2. In fact, we find that

Fγ ∝ νIν(GeV) ∝ f(z ≈ 1−2).   (7)

More specifically, taking Ωm = 0.3, ΩΛ = 0.7, and assuming a function f(z) = min(0.9z + 0.1, 1) that smoothly interpolates from a small local starburst fraction (f(z ≈ 0) ≈ 0.1) to an order unity starburst fraction at high redshift, we find that f_cal ≈ 0.8 for the models shown in Figure 1 (see caption). As an example of another alternative, assuming f(z) = min[0.1(1+z)³, 0.8], in which the starburst fraction increases roughly in proportion to ρ̇⋆(z) and saturates at f(z = 1) = 0.8, f_cal ≈ 0.6 for the same models. For numerical calculations of the γ-ray background specific to a given f(z), ρ̇⋆(z), and proton injection spectrum, p, see Figure 1 of TQW.
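The redshift bookkeeping in eqs. (2) and (3) and the f_cal values quoted in the Fig. 1 caption can be reproduced numerically once a star formation history is specified. The sketch below uses an illustrative broken power-law ρ̇⋆(z) (an assumption standing in for the tabulated models, whose functional forms are not given here) and computes only shape-dependent quantities, namely the fraction of F_TIR arising above z = 1.2 and the F_TIR-weighted mean of f(z) = min(0.9z + 0.1, 1), which is how f_cal is interpreted in this sketch, so that the normalization ε and H₀ cancel.

    import numpy as np
    from scipy.integrate import quad

    Om, OL = 0.3, 0.7
    E = lambda z: np.sqrt(Om * (1 + z)**3 + OL)

    # Illustrative comoving SFR density (shape only; the normalization cancels).
    # Rises to z ~ 1, stays flat to z ~ 3, declines beyond -- an assumption, not
    # one of the paper's tabulated fits.
    def sfr(z):
        if z < 1.0:
            return (1 + z)**3.4
        if z < 3.0:
            return 2.0**3.4
        return 2.0**3.4 * ((1 + z) / 4.0)**-3.5

    w = lambda z: sfr(z) / ((1 + z)**2 * E(z))        # integrand of eqs. (2)/(3)
    f = lambda z: min(0.9 * z + 0.1, 1.0)             # starburst (calorimeter) fraction

    norm, _ = quad(w, 0.0, 10.0)
    above, _ = quad(w, 1.2, 10.0)
    fcal, _ = quad(lambda z: f(z) * w(z), 0.0, 10.0)

    print(f"fraction of F_TIR from z > 1.2 : {above / norm:.2f}")
    print(f"f_cal (F_TIR-weighted mean f)  : {fcal / norm:.2f}")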
3. COMPARISON WITH STECKER (2006)

In a recent paper, Stecker (2006) (S06v1, v2, v3) estimates a significantly smaller γ-ray and neutrino background from starburst galaxies than found in TQW and LW, respectively. In his original version of the paper (S06v1), Stecker normalized the total neutrino background to the local FIR luminosity density from starburst galaxies in Yun et al. (2001) (based on IRAS), thereby neglecting the contribution to the background from redshifts above z ≈ 0. This error led to a factor of ≈ 10 underestimate of the total neutrino background relative to LW. S06v2 corrects this error and addresses both the γ-ray and neutrino backgrounds expected from starburst galaxies. Stecker finds a value for the neutrino and γ-ray backgrounds approximately equal to the value quoted in S06v1, ∼ 5 times lower than LW and TQW. There are two primary differences that lead to this discrepancy:

1. Stecker advocates that 22% of the extra-galactic background is produced by star formation above z > 1.2 (see his Table 1, S06v3). Figures 1 and 2 show, however, that essentially all models for the star formation history of the universe that are consistent with the observed evolution of the star formation rate density with redshift (ρ̇⋆(z)) yield roughly equal contributions to F_TIR below and above z ≈ 1.2.

2. We argue that order unity of all star formation at z ≈ 1 occurs in starbursts likely to be proton calorimeters, whereas Stecker quotes 13% in the range 0.2 ≤ z ≤ 1.2 and 60% for z > 1.2 (see his Table 1, S06v3). This step function model for f(z) likely significantly underestimates the contribution of z ≈ 1 galaxies to the γ-ray and neutrino backgrounds.

Thus, the inconsistency between our predicted γ-ray and neutrino backgrounds and Stecker's upper limit is a simple consequence of his model for the star formation history of the universe and the redshift evolution of the starburst population. For the reasons explained above, we believe that our assumptions are in better agreement with observations of star-forming galaxies at z ≳ 1.

4. CONCLUSION

The magnitude of the observed diffuse γ-ray background is quite uncertain, primarily as a result of foreground subtraction. Whereas Sreekumar et al. (1998) (compare with Strong et al. 2004) find a specific intensity of 1.4×10⁻⁶ GeV s⁻¹ cm⁻² sr⁻¹ at GeV energies, Keshet et al. (2004) derive an upper limit of νIν,γ < 5×10⁻⁷ GeV s⁻¹ cm⁻² sr⁻¹. Future observations by GLAST should alleviate some of the uncertainty in the background determination. A comparison of the upper limit by Keshet et al. (2004) and equation (6) indicates that starburst galaxies may indeed contribute significantly to the diffuse γ-ray background.

Even in stacking searches, EGRET did not detect a single starburst galaxy (Cillis et al. 2005). In TQW (their Figure 1) we show that the EGRET non-detections are consistent with predictions that are correctly calibrated to the FIR or radio emission from galaxies. Hence, the EGRET non-detections do not provide a significant constraint on the contribution of starburst galaxies to the γ-ray or neutrino backgrounds. GLAST will, however, be much more sensitive. Using equation (1) and numerical calculations for different p, TQW show that GLAST should be able to detect several nearby starbursts (NGC 253, M 82, IC 342; see TQW Table 1) if they are indeed proton calorimeters.
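To see the order of magnitude that eq. (1) implies for such nearby sources, the sketch below converts an assumed total IR luminosity and distance into a calorimeter-limit γ-ray flux. The L_TIR and distance values are rough, illustrative numbers (not the ones tabulated in TQW), and the result is the total pionic γ-ray energy flux, so it should be read as an order-of-magnitude estimate only.

    import numpy as np

    L_SUN = 3.9e33          # erg/s
    MPC_CM = 3.086e24       # cm per Mpc

    def gamma_flux(l_tir_lsun, d_mpc, eta05=1.0):
        # eq. (1): L_gamma ~ 1.5e-4 * eta_0.05 * L_TIR in the proton calorimeter limit
        l_gam = 1.5e-4 * eta05 * l_tir_lsun * L_SUN
        return l_gam / (4 * np.pi * (d_mpc * MPC_CM)**2)   # erg s^-1 cm^-2

    # Assumed, approximate L_TIR (in L_sun) and distances (Mpc) for two nearby starbursts.
    for name, ltir, d in [("NGC 253", 2e10, 3.5), ("M 82", 6e10, 3.6)]:
        print(f"{name}: F_gamma ~ {gamma_flux(ltir, d):.1e} erg s^-1 cm^-2")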
The dominant uncertainty in assessing the starburst contribution to the γ-ray and neutrino backgrounds lies in whether a significant fraction of cosmic-ray protons in fact interact with gas of average density or whether the cosmic rays escape in a galactic wind without coupling to the high density ISM (the star formation history of the universe is comparatively well understood). Detection of a single starburst galaxy by GLAST would strongly support the estimates of LW and TQW and thus the importance of starburst galaxies for the extra-galactic neutrino and γ-ray backgrounds. Such a detection would also provide an important constraint on the physics of the ISM in starburst galaxies.

REFERENCES

Cillis, A. N., Torres, D. F., & Reimer, O. 2005, ApJ, 621, 139
Dole, H., et al. 2006, A&A, 451, 417
Hernquist, L., & Springel, V. 2003, MNRAS, 341, 1253
Keshet, U., Waxman, E., & Loeb, A. 2004, JCAP, 4, 6
Loeb, A., & Waxman, E. 2006, JCAP, 5, 3 (LW)
Nagamine, K., Ostriker, J. P., Fukugita, M., & Cen, R. 2006, arXiv:astro-ph/0603257
Porciani, C., & Madau, P. 2001, ApJ, 548, 522
Sreekumar, P., et al. 1992, ApJ, 400, L67
Sreekumar, P., et al. 1998, ApJ, 494, 523
Stecker, F. W. 2006, arXiv:astro-ph/0607197 (S06v1, v2, v3)
Strong, A. W., Moskalenko, I. V., & Reimer, O. 2004, ApJ, 613, 956
Thompson, T. A., Quataert, E., Waxman, E., Murray, N., & Martin, C. L. 2006, ApJ, 645, 186
Thompson, T. A., Quataert, E., & Waxman, E. 2006, submitted to ApJ, arXiv:astro-ph/0606665 (TQW)
Yun, M. S., Reddy, N. A., & Condon, J. J. 2001, ApJ, 554, 803