Large-Scale Circulation Features Typical of Wintertime Extensive and Persistent Low Temperature
Gaokao English: The Global Impact of and Response to Climate Change (30 multiple-choice questions)

1. Climate change refers to long-term shifts in temperature and weather patterns. What is one major cause of climate change?
A. Natural disasters
B. Human activities
C. Astronomical events
D. Animal behaviors
Answer: B.
This question tests understanding of the causes of climate change. Option A, natural disasters, is not a major cause of climate change; option C, astronomical events, has only a small influence; option D, animal behaviors, has a negligible effect. Human activities, such as burning fossil fuels and cutting down forests, are among the main drivers of climate change.

2. Which of the following is a sign of climate change?
A. Frequent rainstorms
B. Clear skies
C. Calm winds
D. Warm winters only
Answer: A.
Signs of climate change include an increase in extreme weather, and frequent rainstorms are one example. Option B, clear skies, is not a sign of climate change, nor is option C, calm winds; option D, warm winters alone, cannot represent climate change fully, since climate change shows itself in many ways.

3. What does the term "greenhouse gases" refer to?
A. Gases that cool the planet
B. Gases that are only produced by nature
C. Gases that trap heat in the atmosphere
D. Gases that are harmless to the environment
Answer: C.
"Greenhouse gases" are gases that trap heat in the atmosphere. Option A is wrong because greenhouse gases warm rather than cool the planet; option B is wrong because greenhouse gases are also produced by human activity; option D is wrong because greenhouse gases are not harmless to the environment.
Geophys.J.Int.(2003)155,289–307Surface wave higher-mode phase velocity measurements usinga roller-coaster-type algorithm´Eric Beucler,∗´El´e onore Stutzmann and Jean-Paul MontagnerLaboratoire de sismologie globale,IPGP,4place Jussieu,75252Paris Cedex05,France.E-mail:beucler@ipgp.jussieu.frAccepted2003May20.Received2003January6;in original form2002March14S U M M A R YIn order to solve a highly non-linear problem by introducing the smallest a priori information,we present a new inverse technique called the‘roller coaster’technique and apply it to measuresurface wave mode-branch phase velocities.The fundamental mode and thefirst six overtoneparameter vectors,defined over their own significant frequency ranges,are smoothed averagephase velocity perturbations along the great circle epicentre–station path.These measurementsexplain well both Rayleigh and Love waveforms,within a maximum period range includedbetween40and500s.The main idea of this technique is tofirst determine all possibleconfigurations of the parameter vector,imposing large-scale correlations over the model space,and secondly to explore each of them locally in order to match the short-wavelength variations.Thefinal solution which achieves the minimum misfit of all local optimizations,in the least-squares sense,is then hardly influenced by the reference model.Each mode-branch a posteriorireliability estimate turns out to be a very powerful instrument in assessing the phase velocitymeasurements.Our Rayleigh results for the Vanuatu–California path seem to agree correctlywith previous ones.Key words:inverse problem,seismic tomography,surface waves,waveform analysis.1I N T R O D U C T I O NOver the last two decades,the resolution of global tomographic models has been greatly improved,because of the increase in the amount and the quality of data,and due to more and more sophisticated data processing and inversion schemes(Woodhouse&Dziewonski1984, 1986;Montagner1986;Nataf et al.1986;Giardini et al.1987;Montagner&Tanimoto1990;Tanimoto1990;Zhang&Tanimoto1991; Su et al.1994;Li&Romanowicz1995;Romanowicz1995;Trampert&Woodhouse1995;Laske&Masters1996;Ekstr¨o m et al.1997; Grand et al.1997;van der Hilst et al.1997;Liu&Dziewonski1998;Ekstr¨o m&Dziewonski1998;Laske&Masters1998;M´e gnin& Romanowicz2000;Ritsema&van Heijst2000,among others).These models are derived from surface wave phase velocities and/or body wave traveltimes(or waveforms)and/or free-oscillation splitting measurements.Body wave studies provide high-resolution models but suffer from the inhomogeneous distribution of earthquakes and recording stations,even when considering reflected or diffracted phases.On the other hand,the surface wave fundamental mode is mainly sensitive to the physical properties of the upper mantle.So,the investigation of the transition zone on a global scale,which plays a key role in mantle convection,can only be achieved by using higher-mode surface waves.Afirst attempt at providing a global tomographic model using these waves has been proposed by Stutzmann&Montagner(1994),but with a limited amount of data.More recently,van Heijst&Woodhouse(1999)computed degree-12phase velocity maps of the fundamental mode and the fourfirst overtones for both Love and Rayleigh waves.These data have been combined with body wave traveltimes measurements and free-oscillation splitting measurements,to provide a global tomographic model with a high and uniform resolution over the whole mantle (Ritsema et al.1999;van Heijst et al.1999).The most recent S H model for the whole mantle 
was proposed by M´e gnin&Romanowicz (2000).This degree-24model results from waveform inversion of body and surface Love waves,including fundamental and higher modes and introducing cross-branch coupling.Extracting information from higher-mode surface waves is a difficult task.The simultaneous arrivals(Fig.3in Section3)and the interference between the different mode-branches make the problem very underdetermined and non-linear.To remove the non-linearity,Cara &L´e vˆe que(1987)and L´e vˆe que et al.(1991)compute the cross-correlogram between the data and monomode synthetic seismograms and ∗Now at:´Ecole Normale Sup´e rieure,24rue Lhomond,75231Paris Cedex05,France.C 2003RAS289290´E.Beucler,´E.Stutzmann and J.-P.Montagnerinvert the amplitude and the phase of thefiltered cross-correlogram.On the other hand,Nolet et al.(1986)and Nolet(1990)use an iterative inverse algorithm tofit the waveform in the time domain and increase the model complexity within the iterations.These two methods provide directly a1-D model corresponding to an average epicentre–station path.They werefirst used‘manually’,which limited the amount of data that could be processed.The exponential increase in the amount of good-quality broad-band data has made necessary the automation of most parts of the data processing and an automatic version of these methods has been proposed by Debayle(1999)for the waveform inversion technique of Cara&L´e vˆe que(1987)and by Lebedev(2000)and Lebedev&Nolet(2003)for the partition waveform inversion.Stutzmann&Montagner(1993)split the inversion into two steps;at each iteration,a least-squares optimization to measure phase velocities is followed by an inversion to determine the1-D S-wave velocity model,in order to gain insight into the factors that control the depth resolution.They retrieve the phase velocity for a set of several seismograms recorded at a single station and originating from earthquakes located in the same area in order to improve the resolution.Another approach has been followed by van Heijst&Woodhouse(1997)who proposed a mode-branch stripping technique based on monomode cross-correlation functions.Phase velocity and amplitude perturbations are determined for the most energetic mode-branch,the waveform of which is then subtracted from the seismogram in order to determine the second most energetic mode-branch phase velocity and amplitude perturbations,and so on.More recently,Y oshizawa&Kennett(2002)used the neighbourhood algorithm(Sambridge1999a,b)to explore the model space in detail and to obtain directly a1-D velocity model which achieves the minimum misfit.It is difficult to compare the efficiency of these methods because they all follow different approaches to taking account of the non-linearity of the problem.Up to now,it has only been possible to compare tomographic results obtained using these different techniques.In this paper,we introduce a new semi-automatic inverse procedure,the‘roller coaster’technique(owing to the shape of the misfit curve displayed in Fig.6b in Section3.4.1),to measure fundamental and overtone phase velocities both for Rayleigh and Love waves.This method can be applied either to a single seismogram or to a set of seismograms recorded at a single station.To deal with the non-linearity of the problem,the roller coaster technique combines the detection of all possible solutions at a large scale(which means solutions of large-wavelength variations of the parameter vector over the model space),and local least-squares inversions close to each of them,in 
order to match small variations of the model.The purpose of this article is to present an inverse procedure that introduces as little a priori information as possible in a non-linear scheme.So,even using a straightforward phase perturbation theory,we show how this algorithm detects and converges towards the best global misfit model.The roller coaster technique is applied to a path average theory but can be later adapted and used with a more realistic wave propagation theory.One issue of this study is to provide a3-D global model which does not suffer from strong a priori constraints during the inversion and which then can be used in the future as a reference model.We describe hereafter the forward problem and the non-linear inverse approach developed for solving it.An essential asset of this technique is to provide quantitative a posteriori information,in order to assess the accuracy of the phase velocity measurements.Resolution tests on both synthetic and real data are presented for Love and Rayleigh waves.2F O RWA R D P R O B L E MFollowing the normal-mode summation approach,a long-period seismogram can be modelled as the sum of the fundamental mode(n=0) and thefirst higher modes(n≥1),hereafter referred to as FM and HM,respectively.Eigenfrequencies and eigenfunctions are computed for both spheroidal and toroidal modes in a1-D reference model,PREM(Dziewonski&Anderson1981)in our case.Stoneley modes are removed,then the radial order n for the spheroidal modes corresponds to Okal’s classification(Okal1978).In the following,all possible sorts of coupling between toroidal and spheroidal mode-branches(Woodhouse1980;Lognonn´e&Romanowicz1990;Deuss&Woodhouse2001) and off-great-circle propagation effects(Woodhouse&Wong1986;Laske&Masters1996)are neglected.For a given recorded long-period seismogram,the corresponding synthetic seismogram is computed using the formalism defined by Woodhouse&Girnius(1982).In the most general case,the displacement u,corresponding of thefirst surface wave train,in the time domain, can be written asu(r,t)=12π+∞−∞nj=0A j(r,ω)exp[i j(r,ω)]exp(iωt)dω,(1)where r is the source–receiver spatial position,ωis the angular frequency and where A j and j represent the amplitude and the phase of the j th mode-branch,respectively,in the frequency domain.In the following,the recorded and the corresponding synthetic seismogram spectra (computed in PREM)are denoted by(R)and(S),respectively.In the Fourier domain,following Kanamori&Given(1981),a recorded seismogram spectrum can be written asA(R)(r,ω)expi (R)(r,ω)=nj=0B j(r,ω)expij(r,ω)−ωaCj(r,ω),(2)where a is the radius of the Earth, is the epicentral distance(in radians)and C(R)j(r,ω)is the real average phase velocity along the epicentre–station path of the j th mode-branch,which we wish to measure.The term B j(r,ω)includes source amplitude and geometrical spreading, whereas j(r,ω)corresponds to the source phase.The instrumental response is included in both terms and this expression is valid for bothRayleigh and Love waves.The phase shift due to the propagation in the real medium then resides in the term exp[−iωa /C(R)j(r,ω)].C 2003RAS,GJI,155,289–307The roller coaster technique291 Figure1.Illustration of possible2πphase jumps over the whole frequency range(dashed lines)or localized around a given frequency(dotted line).Thereference phase velocity used to compute these three curves is represented as a solid line.Considering that,tofirst order,the effect of a phase perturbation dominates over that of the amplitude perturbation(Li&Tanimoto 
1993),and writing the real slowness as a perturbation of the synthetic slowness(computed in the1-D reference model),eq.(2)becomesA(R)(r,ω)expi (R)(r,ω)=nj=0A(S)j(r,ω)expij(r,ω)−ωaC(S)j(ω)−χ,(3) whereχ=ωa1C(R)j(r,ω)−1C(S)j(r,ω).(4) Let us now denote by p j(r,ω),the dimensionless parameter vector of the j th mode-branch defined byp j(r,ω)=C(R)j(r,ω)−C(S)j(ω)Cj(ω).(5)Finally,introducing the synthetic phase (S)j(r,ω),as the sum of the source phase and the phase shift due to the propagation in the reference model,the forward problem can be expressed asd=g(p),A(R)(r,ω)expi (R)(r,ω)=nj=0A(S)j(r,ω)expi(S)j(r,ω)+ωaCj(ω)p j(r,ω).(6)For practical reasons,the results presented in this paper are computed following a forward problem expression based on phase velocity perturbation expanded to third order(eq.A5).When considering an absolute perturbation range lower than10per cent,results are,however, identical to those computed following eq.(6)(see Appendix A).Formally,eq.(6)can be summarized as a linear combination of complex cosines and sines and for this reason,a2πundetermination remains for every solution.For a given parameter p j(r,ω),it is obvious that two other solutions can be found by a2πshift such asp+j(r,ω)=p j(r,ω)+2πC(S)j(ω)ωa and p−j(r,ω)=p j(r,ω)−2πC(S)j(ω)ωa.(7) As an example of this feature,all the phase velocity curves presented in Fig.1satisfy eq.(6).This means that2πphase jumps can occur over the whole frequency range but can also be localized around a given frequency.Such an underdetermination as expressed in eq.(6)and such a non-unicity,in most cases due to the2πphase jumps,are often resolved by imposing some a priori constraints in the inversion.A contrario, the roller coaster technique explores a large range of possible solutions,with the smallest a priori as possible,before choosing the model that achieves the minimum misfit.3D E S C R I P T I O N O F T H E R O L L E R C O A S T E R T E C H N I Q U EThe method presented in this paper is a hybrid approach,combining detection of all possible large-scale solutions(which means solutions of long-wavelength configurations of the parameter vector)and local least-squares optimizations starting from each of these solutions,in order to match the short-wavelength variations of the model space.The different stages of the roller coaster technique are presented in Fig.2and described hereafter.Thefirst three stages are devoted to the reduction of the problem underdetermination,while the non-linearity and the non-unicity are taken into account in the following steps.C 2003RAS,GJI,155,289–307292´E.Beucler,´E.Stutzmann and J.-P.MontagnerStage1Stage2Stage3Stage4using least-squares2phasejumps?Stage5Stage6Figure2.Schematic diagram of the roller coaster technique.See Section3for details.3.1Selection of events,mode-branches and time windowsEvents with epicentral distances larger than55◦and shorter than135◦are selected.Thus,the FM is well separated in time from the HM(Fig.3), and thefirst and the second surface wave trains do not overlap.Since the FM signal amplitude is much larger than the HM amplitude for about 95per cent of earthquakes,each seismogram(real and synthetic)is temporally divided into two different time windows,corresponding to the FM and to the HM parts of the signal.An illustration of this amplitude discrepancy in the time domain is displayed in Fig.3(b)and when focusing on Fig.4(a),the spectrum amplitude of the whole real signal(FM+HM)is largely dominated by the FM one.Eight different pickings defining the four time windows,illustrated 
in Fig.3(a),are computed using synthetic mode-branch wave trains and are checked manually.For this reason,this method is not completely automated,but this picking step is necessary to assess the data quality and the consistency between recorded and synthetic seismograms.In Appendix B,we show that the phase velocity measurements are not significantly affected by a small change in the time window dimensions.An advantage of this temporal truncation is that,whatever the amplitude of the FM,the HM part of the seismograms can always be treated.Hence,the forward problem is now split into two equations,corresponding to the FM and to the HM parts,respectively.A(R) FM (r,ω)expi (R)FM(r,ω)=A(S)0(r,ω)expi(S)0(r,ω)+ωaC(ω)p0(r,ω)(8)andA(R) HM (r,ω)expi (R)HM(r,ω)=6j=1A(S)j(r,ω)expi(S)j(r,ω)+ωaC(S)j(ω)p j(r,ω).(9)Seismograms(real and synthetic)are bandpassfiltered between40and500s.In this frequency range,only thefirst six overtone phase velocities can be efficiently retrieved.Tests on synthetic seismograms(up to n=15)with various depths and source parameters have shown that the HM for n≥7have negligible amplitudes in the selected time and frequency windows.C 2003RAS,GJI,155,289–307The roller coaster technique293Figure3.(a)Real vertical seismogram(solid line)and its corresponding synthetic computed in PREM(dotted line).The earthquake underlying this waveform occurred on1993September4in Afghanistan(36◦N,70◦E,depth of190km)and was recorded at the CAN GEOSCOPE station(Australia).The epicentral distance is estimated at around11340km.Both waveforms are divided into two time windows corresponding to the higher modes(T1–T2,T5–T6)and to the fundamental mode(T3–T4,T7–T8).(b)The contribution of each synthetic monomode shows the large-amplitude discrepancy and time delay between the fundamental mode and the overtones.The different symbols refer to the spectra displayed in Fig.4.3.2Clustering the eventsFollowing eq.(8),a single seismogram is sufficient to measure the FM phase velocity,whereas for the HM(eq.9)the problem is still highly underdetermined since the different HM group velocities are very close.This can be avoided by a reduction of the number of independent parameters considering mathematical relations between different mode-branch phase velocities.The consequence of such an approach is to impose a strong a priori knowledge on the model space,which may be physically unjustified.Another way to reduce this underdetermination is to increase the amount of independent data while keeping the parameter space dimension constant.Therefore,all sufficiently close events are clustered into small areas,and each individual ray path belonging to the same box is considered to give equivalent results as a common ray path.This latter approach was followed by Stutzmann&Montagner(1993),but with5×5deg2boxes independently of epicentral distance and azimuth values,due to the limited number of data.Here,in order to prevent any bias induced by the clustering of events too far away from one to another,and to be consistent with the smallest wavelength,boxes are computed with a maximum aperture angle of2◦and4◦in the transverse and longitudinal directions,respectively(Fig.5),with respect to the great circle path.The boxes are computed in order to take into account as many different depths and source mechanisms as possible.The FM phase velocity inversion is performed for each path between a station and a box,whereas the HM phase velocities are only measured for the boxes including three or more events.Since only the sixfirst 
mode-branches spectra are inverted,the maximum number of events per box is set to eight.The use of different events implies average phase velocity measurements along the common ray paths which can be unsuitable for short epicentral distances,but increases the accuracy of the results for the epicentral distances considered.C 2003RAS,GJI,155,289–307294´E.Beucler,´E.Stutzmann and J.-P.MontagnerFigure4.(a)The normalized amplitude spectra of the whole real waveform(solid line)displayed in Fig.3(a).The real FM part of the signal(truncated between T3and T4)is represented as a dotted line and the real HM part(between T1and T2)as a dashed line.(b).The solid line corresponds to the normalized spectrum amplitude of the real signal truncated between T3and T4(Fig.3a).The corresponding synthetic FM is represented as a dotted line and only the frequency range represented by the white circles is selected as being significant.(c)Selection of HM inversion frequency ranges using synthetic significant amplitudes.The solid line corresponds to the real HM signal,picked between T1and T2(Fig.3a).For each mode-branch(dotted lines),only the frequency ranges defined by the symbols(according to Fig.3b)are retained for the inversion.(d)Close up of the sixth synthetic overtone,in order to visualize the presence of lobes and the weak contribution frequency range in the spectrum amplitude.The stars delimit the selected frequency range.3.3Determination of the model space dimensionReal and synthetic amplitude spectra are normalized in order to minimize the effects due to the imprecision of source parameters and of instrumental response determination.As presented in Fig.4,a synthetic mode-branch spectrum is frequently composed by several lobes due to the source mechanism.Between each lobe and also near the frequency range edges due to the bandpassfilter,the amplitude strongly decreases down to zero,and therefore phase velocities are absolutely not constrained at these frequencies.It is around these frequencies that possible local2πphase jumps may occur(Fig.1).Then,we decide to reduce the model space dimension in order to take into account only well-constrained points.For each spectrum,the selection of significant amplitudes,with a thresholdfixed to10per cent of the mean maximum spectra amplitude,defines the inverted frequency range.In the case of several lobes in a synthetic mode-branch amplitude spectrum,only the most energetic one is selected as shown in Figs4(c)and(d).For a given mode-branch,the simultaneous use of different earthquakes implies a discrimination criterion based upon a mean amplitude spectrum of all spectra,which tends to increase the dimensions of the significant frequency range.The normalization and this selection of each mode-branch significant amplitudes is also a way to include surface wave radiation pattern information in the procedure.Changes in source parameters can result in changes in the positions of the lobes in the mode-branch amplitude spectra over the whole frequency range(40–500s).In the future,it will be essential to include these possible biases in the scheme and then to simultaneously invert moment tensor,location and depth.C 2003RAS,GJI,155,289–307The roller coaster technique295Figure5.Geographical distribution of inversion boxes for the SSB GEOSCOPE station case.The enlarged area is defined by the bold square in the inset (South America).Black stars denote epicentres and hatched grey boxes join each inversion group.Each common ray path(grey lines)starts from the barycentre (circles)of 
all events belonging to the same box.The maximum number of seismograms per box isfixed at eight.3.4Exploration of the model space at very large scaleThe main idea of this stage is to test a large number of phase velocity large-scale perturbations with the view of selecting several starting vectors for local inversions(see Section3.5).The high non-linearity of the problem is mainly due to the possible2πphase jumps.And,even though the previous stage(see Section3.3)prevents the shifts inside a given mode-branch phase velocity curve,2πphase jumps over the whole selected frequency range are still possible.For this reason a classical gradient least-squares optimization(Tarantola&Valette1982a)is inadequate.In a highly non-linear problem,a least-squares inversion only converges towards the best misfit model that is closest to the starting model and the number of iterations cannot change this feature.On the other hand,a complete exploration of all possible configurations in the parameter space is still incompatible with a short computation time procedure.Therefore,an exploration of the model space is performed at very large scale,in order to detect all possible models that globally explain the data set well.3.4.1Fundamental mode caseWhen considering a single mode-branch,the number of parameter vector components is rather small.The FM large-scale exploration can then be more detailed than in the HM case.Considering that,at low frequencies,data are correctly explained by the1-D reference model,the C 2003RAS,GJI,155,289–307296´E.Beucler,´E.Stutzmann and J.-P.MontagnerabFigure6.(a)Five examples of the FM parameter vector configurations during the exploration of the model space at large scale corresponding toαvalues equal to−5,−,0,+2.5and+5per cent.The selected points for which the phase velocity is measured(see Section3.3)are ordered into parameter vector components according to increasing frequency values.Thefirst indices then correspond to the low-frequency components(LF)and the last ones to the high-frequency(HF) components.Varying the exploration factorα,different perturbation shapes are then modelled and the misfit between data and the image of the corresponding vector is measured(represented in thefigure below).(b)The misfit in the FM case,symbolized by+,is the expression of the difference between data and the image of the tested model(referred to as pα)through the g function(eq.8).Theαvalues are expressed as a percentage with respect to the PREM.As an example,thefive stars correspond to the misfit values of thefive models represented in thefigure above.The circles represent the bestαvalues and the corresponding vectors are then considered as possible starting models for the next stage.dimensionless phase velocity perturbation(referred to as pα)can be modelled as shown in thefive examples displayed in Fig.6(a).Basically, the low-frequency component perturbations are smaller than the high-frequency ones.However,if such an assumption cannot be made,the simplest way to explore the model space is then byfixing an equalαperturbation value for all the components.The main idea is to impose strong correlations between all the components in order to estimate how high the non-linearity is.Varyingαenables one to compute different parameter vectors and solving eq.(8)to measure the distance between data and the image of a given model through the g function,integrated over the whole selected frequency range.Considering that only small perturbations can be retrieved,the exploration range is limited 
between−5and+5per cent,using an increment step of0.1per cent.The result of such an exploration is displayed in Fig.6(b)and clearly illustrates the high non-linearity and non-unicity of the problem.In a weakly non-linear problem,the misfit curve(referred to as||d−g(pα)||)should exhibit only one minimum.This would indicate that,whatever the value of the starting model,a gradient algorithm always converges towards the samefinal model,the solution is then unique.In our case,Fig.6(b)shows that,when choosing the reference model(i.e.α=0per cent)as the starting model,a gradient least-squares optimization converges to the nearest best-fitting solution(corresponding to the third circle),and could never reach the global best-fitting model(in this example representedC 2003RAS,GJI,155,289–307The roller coaster technique 297by the fourth circle).Therefore,in order not to a priori limit the inversion result around a given model,all minima of the mis fit curve (Fig.6b)are detected and the corresponding vectors are considered as possible starting models for local optimizations (see Section 3.5).3.4.2Higher-mode caseThe introduction of several mode-branches simultaneously is much more dif ficult to treat and it becomes rapidly infeasible to explore the model space as accurately as performed for the FM.However,a similar approach is followed.In order to preserve a low computation time procedure,the increment step of αis fixed at 1per cent.The different parameter vectors are computed as previously explained in Section3.4.1(the shape of each mode-branch subvector is the same as the examples displayed in Fig.6a).In order to take into account any possible in fluence of one mode-branch on another,all combinations are tested systematically.Three different explorations of the model space are performed within three different research ranges:[−4.5to +1.5per cent],[−3to +3per cent]and [−1.5to +4.5per cent].For each of them,76possibilities of the parameter vector are modelled and the mis fit between data and the image of the tested vector through the g function is computed.This approach is almost equivalent to performing a complete exploration in the range [−4.5to +4.5per cent],using a step of 0.5per cent,but less time consuming.Finally,all mis fit curve minima are detected and,according to a state of null information concerning relations between each mode-branch phase velocities,all the corresponding vectors are retained as possible starting models.Thus,any association between each starting model subvectors is allowed.3.5Matching the short-wavelength variations of the modelIn this section,algorithms,notation and comments are identical for both FM and HM.Only the main ideas of the least-squares criterion are outlined.A complete description of this approach is given by Tarantola &Valette (1982a,b)and by Tarantola (1987).Some typical features related to the frequency/period duality are also detailed.3.5.1The gradient least-squares algorithmThe main assumption which leads us to use such an optimization is to consider that starting from the large-scale parameter vector (see Section 3.4),the non-linearity of the problem is largely reduced.Hence,to infer the model space from the data space,a gradient least-squares algorithm is performed (Tarantola &Valette 1982a).The expression of the model (or parameter)at the k th iteration is given by p k =p 0+C p ·G T k −1· C d +G k −1·C p ·G T k −1−1· d −g (p k −1)+G k −1·(p k −1−p 0) ,(10)where C p and C d are the a priori covariance operators on parameters and data,respectively,p 0the 
starting model,and where G k −1=∂g (p k −1)/∂p k −1is the matrix of partial derivatives of the g function established in eqs (8)and (9).The indices related to p are now expressing the iteration rank and no longer the mode-branch radial order.De fining the k th image of the mis fit function byS (p k )=12[g (p k )−d ]T ·C −1d ·[g (p k )−d ]+(p k −p 0)T ·C −1p ·(p k −p 0) ,(11)the maximum-likelihood point is de fined by the minimum of S (p ).Minimizing the mis fit function is then equivalent to finding the best compromise between decreasing the distance between the data vector and the image of the parameter vector through the g function,in the data space on one hand (first part of eq.11),and not increasing the distance between the starting and the k th model on the other hand (second part of eq.11),following the covariances de fined in the a priori operators on the data and the parameters.3.5.2A priori data covariance operatorThe a priori covariance operator on data,referred to as C d ,includes data errors and also all effects that cannot be modelled by the g function de fined in eq.(8)and (9).The only way to really measure each data error and then to compute realistic covariances in the data space,would be to obtain exactly the corresponding seismogram in which the signal due to the seismic event is removed.Hence,errors over the data space are impossible to determine correctly.In order to introduce as little a priori information as possible,the C d matrix is computed with a constant value of 0.04(including data and theory uncertainties)for the diagonal elements and zero for the off-diagonal elements.In other words,this choice means that the phase velocity perturbations are expected to explain at least 80per cent of the recorded signal.3.5.3A priori parameter covariance operatorIn the model space,the a priori covariance operator on parameters,referred to as C p ,controls possible variations between the model vector components for a given iteration k (eq.10),and also between the starting and the k th model (eq.11).Considering that the phase velocity perturbation between two adjoining components (which are ordered according to increasing frequency values)of a given mode-branch do not vary too rapidly,C p is a non-diagonal matrix.This a priori information reduces the number of independent components and then induces smoothed phase velocity perturbation curves.A typical behaviour of our problem resides in the way the parameter space is discretized.In the matrix domain,the distance between two adjoining components is always the same,whereas,as the model space is not evenly spaced C 2003RAS,GJI ,155,289–307。
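To make the two-stage logic of Sections 3.4 and 3.5 concrete, the sketch below is a minimal, hypothetical Python illustration: a coarse scan over a uniform perturbation factor alpha, retention of every local minimum of the misfit curve as a candidate starting model, and a local refinement of each candidate. The forward operator `g`, the data vector `d` and the use of `scipy.optimize.least_squares` are placeholders; in particular the scipy call stands in for, and does not reproduce, the covariance-weighted gradient algorithm of eq. (10).

```python
import numpy as np
from scipy.optimize import least_squares

def roller_coaster(d, g, n_params, alpha_range=(-0.05, 0.05), step=0.001):
    """Coarse exploration over a uniform perturbation factor alpha, followed
    by a local least-squares refinement started from every local minimum of
    the misfit curve.  `g(p)` is a placeholder forward operator mapping a
    phase-velocity perturbation vector to synthetic spectra, and `d` is the
    corresponding (possibly complex) data vector."""
    def resid(p):
        r = d - g(p)
        return np.concatenate([np.real(r), np.imag(r)])  # real-valued residuals

    alphas = np.arange(alpha_range[0], alpha_range[1] + step, step)
    misfits = np.array([np.linalg.norm(resid(np.full(n_params, a))) for a in alphas])

    # Keep every local minimum of the "roller coaster" misfit curve as a
    # candidate starting model, not just the one nearest the reference model.
    interior = (misfits[1:-1] < misfits[:-2]) & (misfits[1:-1] < misfits[2:])
    candidates = alphas[1:-1][interior]

    # Local optimization around each candidate; retain the best overall fit.
    best = None
    for a0 in candidates:
        sol = least_squares(resid, np.full(n_params, a0))
        if best is None or sol.cost < best.cost:
            best = sol
    return None if best is None else best.x
```

The point of the structure, as in the text, is that no single starting model is privileged: each detected large-scale minimum seeds its own local search, and only afterwards is the global best-fitting model selected.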
The Big Crunch Hypothesis

The Big Crunch hypothesis is a theory about the ultimate fate of the universe. According to this theory, the universe will eventually stop expanding and start contracting until it collapses in on itself in a massive implosion. This event is known as the 'Big Crunch.'

The hypothesis is based on the idea that the universe is not only expanding but also has a finite amount of matter and energy. As the universe expands, the gravitational pull between galaxies slows down the expansion. Eventually, the pull becomes strong enough to overcome the expansion, and the universe starts to contract.

The contraction will continue until all the matter and energy in the universe is compressed into a single point of infinite density, known as a singularity. This is similar to the Big Bang theory, which states that the universe began as a singularity that exploded and expanded rapidly.

If the Big Crunch hypothesis is correct, it means that the universe is cyclical, with a series of expansions and contractions. Each expansion and contraction is known as a 'cosmic cycle.' However, there is still much debate among scientists about whether or not this theory is accurate.

One of the challenges to the Big Crunch hypothesis is the discovery that the expansion of the universe is actually accelerating. This suggests that the gravitational pull between galaxies is not strong enough to halt the expansion, which means that the universe may continue to expand indefinitely.

Regardless of whether or not the Big Crunch theory is proven correct, it remains one of the most intriguing ideas about the destiny of our universe.
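To make the expand-then-contract mechanism concrete, the hedged sketch below integrates the Friedmann equation for a toy closed, matter-only universe whose density exceeds the critical value, so that the scale factor grows, turns around and recollapses. The parameter values (omega_m, h0) and the units are illustrative choices, not measured cosmological quantities.

```python
import numpy as np

def scale_factor_history(omega_m=1.5, h0=0.07, dt=0.01, t_max=400.0):
    """Toy closed, matter-only universe (omega_m > 1): the scale factor a(t)
    expands, turns around and recollapses towards a 'Big Crunch'.  All values
    and units are illustrative, not measured cosmological parameters."""
    a, t = 1.0, 0.0
    # Present-day expansion rate from the Friedmann equation
    # (da/dt)^2 = H0^2 * (omega_m / a + 1 - omega_m).
    v = h0 * np.sqrt(omega_m / a + 1.0 - omega_m)
    history = [(t, a)]
    while t < t_max and a > 1e-3:
        acc = -0.5 * h0**2 * omega_m / a**2   # gravitational deceleration
        v += acc * dt                          # expansion slows, then reverses
        a = max(a + v * dt, 0.0)               # scale factor heads back to zero
        t += dt
        history.append((t, a))
    return history

times, scales = zip(*scale_factor_history())
print(f"maximum size reached: {max(scales):.2f} times today's")
```

With these toy numbers the model universe roughly triples in size before gravity wins and drives it back towards zero, the behaviour the hypothesis describes.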
Analysis of the Regional Rainstorm Process in Linfen on 3–4 August 2019
Wang Tong, Yang Binbin, Jia Xiangyu, Zhang Shuqin, Yang Haitao
Linfen Meteorological Bureau of Shanxi Province, Linfen, Shanxi 041000, China

Abstract: A rainstorm results from the interaction of weather systems on multiple scales: the large-scale circulation clearly constrains whether a rainstorm can occur, while mesoscale systems are the direct cause of the heavy rain. Using conventional synoptic charts, township rain-gauge data and other observations, and applying synoptic diagnostic methods, this paper examines why the rainstorm of 3–4 August 2019 in Linfen was bound to occur, from the perspectives of the large-scale circulation background, the influencing systems, physical-quantity fields and radar products, with the aim of accumulating forecasting experience for rainstorms in Linfen.
Keywords: rainstorm; southwest vortex; low-level jet; shear line
CLC number: P458; Document code: A; Article ID: 2095-3305(2020)04-065-04; DOI: 10.19383/ki.nyzhyj.2020.04.027

As a severe type of disastrous weather, rainstorms relieve drought and supply water for industry and agriculture, but they also have major impacts on personal safety and on social development, and they have therefore long drawn the attention of meteorologists and operational forecasters. Because the mechanisms by which rainstorms form and develop are highly complex, and because rainstorms are sudden and localized, rainstorm forecasting remains one of the difficult problems in operational work. Careful forecasting, analysis and summary of rainstorm processes are therefore important for improving the accuracy of future rainstorm forecasts [1]. Using MICAPS upper-air and surface data, densified surface observations and Doppler radar data, the rainstorm of 3–4 August 2019 in Linfen is analysed here in terms of the synoptic situation, physical-quantity fields and radar echoes.

1 Weather overview
On 3–4 August 2019 Linfen experienced a pronounced precipitation event with an uneven rainfall distribution: 28 stations recorded a rainstorm and 8 stations a heavy rainstorm. The rainstorm area was concentrated where the basin adjoins the western mountains (Hongtong, Fenxi, Yaodu District, Puxian, Jixian, Xiangning, Xiangfen, Guxian, Huozhou and Daning), and the heavy-rainstorm area lay mainly in the north-central part of the city (Hongtong, Fenxi, Puxian, Guxian and Yaodu District). The maximum rainfall, 158.5 mm, was recorded at Caojiagou in Hongtong. The process was dominated by systematic precipitation and was characterized by wide coverage, long duration and high rain intensity. Precipitation began in the early hours of 3 August and ended during the night of the 3rd, lasting a long time and accompanied by short-duration heavy rainfall: hourly rainfall of at least 10 mm was recorded 133 station-times and at least 20 mm 20 station-times, with a maximum hourly intensity of 40.6 mm at Qiaojiawan in Puxian at 03:00 on 4 August.
Pore Shape Factor

Alright, let's talk about the pore shape factor in a casual yet informative way.

You know, when we're studying materials like rocks or soil, we often come across this thing called the pore shape factor. It's not just a fancy name; it actually tells us a lot about how those tiny spaces inside the material are shaped. Think of it like a fingerprint for the material's porosity.

And why is that so important? Well, pore shape affects things like how fluids flow through the material, or how easily air can pass through. So, knowing the pore shape factor can help us predict things like permeability or even how strong a material might be.

But here's the catch: pore shapes can be all sorts of weird and wonderful. They're not always perfect circles or squares. They can be long and skinny, or wide and flat. And that's where the shape factor comes in. It gives us a way to quantify just how weird or normal those pore shapes are.

So in a nutshell, the pore shape factor is a bit like a secret code that unlocks the mysteries of a material's internal structure. It's a handy tool for anyone who wants to understand how fluids and gases behave within porous media.
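As one concrete, hedged example: in 2-D image analysis a common choice of pore shape factor is the circularity 4*pi*A/P^2, which equals 1 for a perfect circle and falls toward 0 for long, skinny or ragged pores. The sketch below assumes the pore outline has already been segmented into a polygon; this circularity definition is only one of several shape factors used in practice.

```python
import math

def pore_shape_factor(vertices):
    """Circularity-type shape factor 4*pi*A / P**2 for a polygonal pore
    outline given as a list of (x, y) vertices.  1.0 means a circle-like
    pore; values near 0 mean long, skinny or highly irregular pores."""
    n = len(vertices)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace formula for signed area
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / perimeter ** 2

# A square pore gives pi/4 ~ 0.785; a thin sliver would score much lower.
print(pore_shape_factor([(0, 0), (1, 0), (1, 1), (0, 1)]))
```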
Large Language Models Evaluation

Language models have become increasingly advanced and sophisticated in recent years, with larger models such as GPT-3 gaining attention for their ability to generate coherent and contextually relevant text. These large language models have the potential to revolutionize many aspects of natural language processing, but evaluating their performance is not a straightforward task.

One challenge in evaluating large language models is the lack of standardized benchmarks that can effectively measure their capabilities. Traditional evaluation metrics used for smaller models may not be sufficient or appropriate for assessing the performance of these larger models. As a result, researchers and practitioners need to develop new evaluation frameworks and metrics that are specifically tailored for these massive language models.
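As a minimal, hypothetical illustration of what even the simplest benchmark loop looks like, the sketch below computes exact-match accuracy over a small prompt-reference set. The `generate` callable and the toy data are placeholders for whatever model and dataset are being evaluated; real evaluation of large models would layer many further metrics on top, which is precisely the gap described above.

```python
from typing import Callable, List, Tuple

def exact_match_accuracy(generate: Callable[[str], str],
                         dataset: List[Tuple[str, str]]) -> float:
    """Fraction of prompts whose generated answer matches the reference after
    trivial normalization.  `generate` is a placeholder for any language-model
    call; `dataset` is a list of (prompt, reference) pairs."""
    def norm(s: str) -> str:
        return " ".join(s.lower().strip().split())
    hits = sum(norm(generate(prompt)) == norm(reference)
               for prompt, reference in dataset)
    return hits / len(dataset) if dataset else 0.0

# Toy usage with a fake "model" that always returns a canned answer.
toy_model = lambda prompt: "Paris"
print(exact_match_accuracy(toy_model, [("Capital of France?", "paris")]))  # 1.0
```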
a r X i v :a s t r o -p h /0604561v 1 27 A p r 2006The large-scale structure of the UniverseV .Springel 1,C.S.Frenk 2,S.D.M.White 11Max-Planck-Institute for Astrophysics,Karl-Schwarzschild-Str.1,85740Garching,Germany2Institute for Computational Cosmology,Dep.of Physics,Univ.of Durham,South Road,Durham DH13LE,UK Research over the past 25years has led to the view that the rich tapestry of present-day cosmic structure arose during the first instants of creation,where weak ripples were im-posed on the otherwise uniform and rapidly expanding primordial soup.Over 14billion years of evolution,these ripples have been amplified to enormous proportions by gravi-tational forces,producing ever-growing concentrations of dark matter in which ordinary gases cool,condense and fragment to make galaxies.This process can be faithfully mim-icked in large computer simulations,and tested by observations that probe the history of the Universe starting from just 400,000years after the Big Bang.The past two and a half decades have seen enormous advances in the study of cosmic structure,both in our knowledge of how it is manifest in the large-scale matter distribution,and in our understanding of its origin.A new generation of galaxy surveys –the 2-degree Field Galaxy Redshift Survey,or 2dFGRS 1,and the Sloan Digital Sky Survey,or SDSS22–have quantified the distribution of galaxies in the local Universe with a level of detail and onlength scales that were unthinkable just a few years ago.Surveys of quasar absorption and of gravitational lensing have produced qualitatively new data on the distributions of diffuse intergalactic gas and of dark matter.At the same time,observations of the cosmic microwave background radiation,by showing us the Universe when it was only about 400,000years old,have vindicated bold theoretical ideas put forward in the 1980s regarding the contents of the Universe and the mechanism that initially generated structure shortly after the Big Bang.The critical link between the early,near-uniform Universe and the rich structure seen at more recent times has been provided by direct numerical simulation.This has made use of the unremittingincrease in the power of modern computers to create ever more realistic virtual universes: simulations of the growth of cosmic structure that show how astrophysical processes have produced galaxies and larger structures from the primordial soup.Together,these advances have led to the emergence of a“standard model of cosmology”which,although seemingly implausible,has nevertheless been singularly successful.Figure1strikingly illustrates how well this standard model canfit nearby structure.The observational wedge plots at the top and at the left show subregions of the SDSS and2dF-GRS,illustrating the large volume they cover in comparison to the ground-breaking Center for Astrophysics(CfA)galaxy redshift survey3carried out during the1980s(the central small wedge).These slices through the local three-dimensional galaxy distribution reveal a tremen-dous richness of structure.Galaxies,groups and clusters are linked together in a pattern of sheets andfilaments that is commonly known as the“cosmic web”4.A handful of particularly prominent aggregations clearly stand out in these images,the largest containing of the order of10,000galaxies and extending for several hundred million light years.The corresponding wedge plots at the right and at the bottom show similarly constructed surveys of a virtual uni-verse,the result of a simulation of the growth of structure and of the 
formation of galaxies in the current standard model of cosmology.The examples shown were chosen among a set of random“mock surveys”to have large structures in similar positions to the real surveys.The similarity of structure between simulation and observation is striking,and is supported by a quantitative comparison of clustering5.Here we review what we can learn from this excellent match.The early1980s produced two audacious ideas that transformed a speculative and notori-ously uncertain subject into one of the most rapidly developing branches of physics.Thefirst was the proposal that the ubiquitous dark matter that dominates large-scale gravitational forces consists of a new(and still unidentified)weakly interacting elementary particle.Because theseparticles are required to have small random velocities at early times,they were dubbed“cold dark matter”or CDM.(Hot dark matter is also possible,for example a neutrino with a mass of a few tens of electron volts.Early cosmological simulations showed,however,that the galaxy distribution in a universe dominated by such particles would not resemble that observed6.) The second idea is“cosmic inflation”7,the proposal that the Universe grew exponentially for many doubling times perhaps∼1035seconds after the Big Bang,driven by the vacuum energy density of an effective scalarfield that rolls slowly from a false to the true vacuum.Quantum fluctuations in this“inflaton”field are blown up to macroscopic scales and converted into genuine ripples in the cosmic energy density.These weak seedfluctuations grow under the influence of gravity and eventually produce galaxies and the cosmic web.Simple models of inflation predict the statistical properties of these primordial densityfluctuations:their Fourier components should have random and independent phases and a near-scale-invariant power spectrum8.Inflation also predicts that the present Universe should have aflat geometry.With concrete proposals for the nature of the dark matter and for the initialfluctuation distribution, the growth of cosmic structure became,for thefirst time,a well-posed problem that could be tackled with the standard tools of physics.The backbone of the cosmic web is the clumpy yetfilamentary distribution of dark mat-ter.The presence of dark matter wasfirst inferred from the dynamics of galaxy clusters by Zwicky9.But it took over half a century for dark matter to become an integral part of our view of galaxies and of the Universe as a whole,and for its average density to be estimated reliably.Today,the evidence for the pervasive presence of dark matter is overwhelming and includes galactic rotation curves,the structure of galaxy groups and clusters,large-scale cos-micflows and,perhaps most directly,gravitational lensing,a phenomenonfirst proposed as an astronomical tool by Zwicky himself10.The distorted images of background galaxies as their light travels near mass concentrations reveal the presence of dark matter in the outer haloes of galaxies11,12,in galaxy clusters13and in the general massfield14.When expressed in units of the critical density required for aflat cosmic geometry,the mean density of dark matter is usually denoted byΩdm.Although a variety of dynamical tests have been used to constrainΩdm,in general such tests give ambiguous results because velocities are induced by the unseen dark matter and the relation of its distribution to that of the visible tracers of structure is uncertain.The notion of a substantial bias in the galaxy distribution relative to that of dark 
matter was introduced in the1980s to account for the fact that different samples of galaxies or clusters are not directly tracing the underlying matter distribution15–17.Defined simply as the ratio of the clustering strengths,the“bias function”was also invoked to reconcile low dynamical estimates for the mass-to-light ratio of clusters with the high global value required in the theoretically preferredflat,Ωdm=1universe.But because massive clusters must contain approximately the universal mix of dark matter and baryons(ordinary matter),bias uncertainties are neatly bypassed by comparing the measured baryon fraction in clusters with the universal fraction under the assumption that the mean baryon density,Ωb,is the value inferred from Big Bang nucleosynthesis18.Applied to the Coma cluster,this simple argument gaveΩdm≤0.3where the inequality arises because some or all of the dark matter could be baryonic18.This was thefirst determination ofΩdm<1 that could not be explained away by invoking bias.Subsequent measurements have confirmed the result19which also agrees with recent independent estimates based,for example,on the relatively slow evolution of the abundance of galaxy clusters20,21or on the detailed structure offluctuations in the microwave background radiation22.The mean baryon density implied by matching Big Bang nucleosynthesis to the observed abundances of the light elements is onlyΩb h2≃0.02,where h denotes the Hubble constant in units of100km s−1Mpc−1.Dynamical estimates,although subject to bias uncertainties, have long suggested thatΩm=Ωdm+Ωb≃0.3,implying that the dark matter cannot be baryonic.Plausibly it is made up of the hypothetical elementary particles postulated in the 1980s,for example axions or the lowest mass supersymmetric partner of the known particles.Such low estimates of the mean matter densityΩm are incompatible with theflat geometry predicted by inflation unless the Universe contains an additional unclustered and dominant contribution to its energy density,for example a cosmological constantΛsuch thatΩm+ΩΛ≃1.Two large-scale structure surveys carried out in the late1980s,the APM(automated photographic measuring)photographic survey23and the QDOT redshift survey of infrared galaxies24,showed that the power spectrum of the galaxy distribution,if it traces that of the mass on large scales,can befitted by a simple CDM model only if the matter density is low,Ωm≃0.3.This independent confirmation of the dynamical arguments led many to adopt the now standard model of cosmology,ΛCDM.It was therefore with a mixture of amazement and d´e j`a vu that cosmologists greeted the discovery in1998of an accelerated cosmic expansion25,26.Two independent teams used dis-tant type Ia supernovae to perform a classical observational test.These“standard candles”can be observed out to redshifts beyond1.Those at z≥0.5are fainter than expected,ap-parently indicating that the cosmic expansion is currently speeding up.Within the standard Friedmann cosmology,there is only one agent that can produce an accelerating expansion:the cosmological constantfirst introduced by Einstein,or its possibly time-or space-dependent generalization,“dark energy”.The supernova evidence is consistent withΩΛ≃0.7,just the value required for theflat universe predicted by inflation.The other key prediction of inflation,a densityfluctuationfield consistent with amplified quantum noise,received empirical support from the discovery by the COsmic Background Explorer(COBE)satellite in1992of smallfluctuations in the temperature 
of the cosmic mi-crowave background(CMB)radiation27.These reflect primordial densityfluctuations,mod-ified by damping processes in the early Universe which depend on the matter and radiation content of the Universe.More recent measurements of the CMB28–32culminating with those by the WMAP(Wilkinson Microwave Anisotropy Probe)satellite22have provided a strikingconfirmation of the inflationary CDM model:the measured temperaturefluctuation spectrum is nearly scale-invariant on large scales and has a series of“acoustic”peaks that reflect the coherent oscillations experienced by the photon-baryonfluid before the moment when the pri-mordial plasma recombined and the radiation escaped.Thefluctuation spectrum depends on the parameters that define the geometry and content of the Universe and the initialfluctuation distribution,so their values are constrained by the data.In practice,there are degeneracies among the parameters,and the strongest constraints come from combining the CMB data with other large-scale structure datasets.Present estimates22,33–36give aflat universe with Ωdm=0.20±0.020,Ωb=0.042±0.002,ΩΛ=0.76±0.020,h=0.74±0.02.The consis-tency of these values with other independent determinations and the close agreement of the CMB data with theoretical predictions formulated over20years earlier37belong amongst the most remarkable successes of modern cosmology.The growth of large-scale structureThe microwave background radiation provides a clear picture of the young Universe,where weak ripples on an otherwise uniform sea display a pattern that convincingly supports our standard model for the cosmic mass/energy budget and for the process that initially imprinted cosmic structure.At that time there were no planets,no stars,no galaxies,none of the striking large-scale structures seen in Fig.1.The richness of the observed astronomical world grew later in a complex and highly nonlinear process driven primarily by gravity.This evolution can be followed in detail only by direct numerical simulation.Early simulations were able to reproduce qualitatively the structure observed both in large galaxy surveys and in the inter-galactic medium16,38.They motivated the widespread adoption of the CDM model well before it gained support from microwave background observations.Many physical processes affect galaxy formation,however,and many aspects must be treated schematically within even the largest simulations.The resulting uncertainties are best estimated by exploring a wide rangeof plausible descriptions and checking results against observations of many different types. The main contribution of early CDM galaxy formation modelling was perhaps the dethron-ing of the“island universe”or“monolithic collapse”paradigm and the realization that galaxy formation is a process extending from early times to the present day,rather than an event that occurred in the distant past39.In aΛCDM universe,quasi-equilibrium dark matter clumps or“haloes”grow by the collapse and hierarchical aggregation of ever more massive systems,a process described sur-prisingly well by the phenomenological model of Press and Schechter and its extensions40,41. 
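For readers unfamiliar with the Press-Schechter model mentioned above, the hedged sketch below evaluates its standard mass function, in which the comoving number density of haloes of mass M follows from the rms linear fluctuation sigma(M) and a critical overdensity of about 1.686. The power-law sigma(M) and the numerical values used here are illustrative stand-ins rather than a fit to any particular cosmology.

```python
import math

DELTA_C = 1.686                      # spherical-collapse critical overdensity

def sigma(mass, sigma8=0.9, m8=6e14, slope=-0.25):
    """Toy power-law model for the rms linear fluctuation on mass scale M
    (solar masses).  sigma8, m8 and slope are illustrative placeholders."""
    return sigma8 * (mass / m8) ** slope

def press_schechter_dndm(mass, rho_mean=4e10, d_ln_m=0.01):
    """Press-Schechter comoving mass function dn/dM (haloes per unit mass per
    unit volume) for an approximate mean matter density rho_mean in Msun/Mpc^3."""
    s = sigma(mass)
    # numerical estimate of d ln sigma / d ln M
    dlns_dlnm = (math.log(sigma(mass * (1 + d_ln_m))) - math.log(s)) / math.log1p(d_ln_m)
    nu = DELTA_C / s
    return (math.sqrt(2.0 / math.pi) * rho_mean / mass**2
            * nu * abs(dlns_dlnm) * math.exp(-0.5 * nu * nu))

# Rough number density of ~1e13 Msun haloes per dex in mass (illustrative only):
m = 1e13
print(press_schechter_dndm(m) * m * math.log(10.0))
```

The exponential cut-off in nu = DELTA_C / sigma(M) is what makes massive clusters rare and sensitive to the growth of structure, which is why their abundance can be used as the cosmological test cited in the text.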
Galaxies form at the centres of these dark haloes by the cooling and condensation of gas which fragments into stars once it becomes sufficiently dense42.Groups and clusters of galax-ies form as haloes aggregate into larger systems.They are arranged in the“cosmic web”,the larger-scale pattern offilaments and sheets which is a nonlinear gravitational“sharpening”of the pattern already present in the gaussian randomfield of initialfluctuations4.Thefirst observable objects were probably massive stars collapsing in small haloes and switching on at redshifts of50and higher43.By a redshift of15these may have been sufficiently numerous for their radiation to re-ionize all the gas in the Universe44.So far they have not been observed directly,but it is one of the main goals of the next generation of low-frequency radio tele-scopes to observe their effects directly in the strongly redshifted21-cm transition of neutral hydrogen.Detailed simulations fromΛCDM initial conditions have been used to study the formation of thefirst luminous objects and the re-ionization of the Universe,but these still await testing against observation44,45.In contrast,predictions for the structure,the ionization state and the heavy element content of intergalactic gas at redshifts below6can be checked in detail against absorption features observed in the spectra of distant quasars.These provide,in effect,a one-dimensional tomographic image of the intervening large-scale structure.As an example,Fig.2shows a typical high-resolution spectrum of a distant quasar at redshift z=3.26.At shorter wavelengths than the Lyman-αemission line of the quasar,there is a‘forest’of absorption lines of differing strength.The modern interpretation is that these features arise from Lyman-αabsorption by the smoothly varying distribution of foreground intergalactic hydrogen,in effect from thefilaments,sheets and haloes of cosmic structure.It was a conceptual breakthrough,and an important success for the CDM paradigm,when hy-drodynamical simulations showed that this interpretation could explain in detail the observed statistics of the absorption lines38,46.Considerable recent advances both in the quality and in the quantity of data available have made it possible to measure a variety of statistics for the Lyman-αforest as a function of redshift to high precision47–paring with appropriately designed numerical simulations has provided strong confirmation of the underlying paradigm at a level that is remarkable,given the evidence that intergalactic gas is contaminated with galaxy ejecta in a way that the simulations do not yet adequately reproduce36,50–52.This ap-proach has also helped to strengthen constraints on the paradigm’s parameters,in particular on the spectrum offluctuations produced by inflation and on the masses of neutrinos.At lower redshift direct and quantitative measures of large-scale structure can be obtained from the weak,coherent distortions of the images of faint galaxies induced by gravitational lensing as their light travels through the intervening cosmic web53.The distortions depend only on the gravitationalfield in intergalactic space and so lensing data test predictions for the mass distribution in a way that is almost independent of the complex astrophysics that determines the observable properties of galaxies.The lensing effect is very weak,but can be measured statistically to high precision with large enough galaxy samples.As an example,Fig.3shows a measure of the mean square coherent distortion of distant galaxy images 
within randomly placed circles on the sky as a function of the radius of those circles54.Clearly,the distortion is detected with very high significance.The two curves showthe predicted signal in the standardΛCDM model based on(i)detailed simulations of the growth of structure in the dark matter distribution,and(ii)a simple linear extrapolation from the structure present at early times.Nonlinear effects are strong because the distortions are dominated by the gravity of individual dark matter haloes.Meaningful comparison between theory and observation thus requires high-precision large-scale structure simulations,and gen-erating these constitutes a great numerical challenge.Similar lensing measurements,but now within circles centred on observed galaxies(rather than random points),can be used to deter-mine the average total mass surrounding galaxies as a function of radius,redshift and galaxy properties55.This wealth of information can only be interpreted by simulations that follow both the dark matter distribution and the formation and evolution of the galaxy population.The Lyman-αforest and gravitational lensing thus provide windows onto the large-scale structure of the Universe that complement those obtained from galaxy surveys by extending the accessible redshift range and,more importantly,by measuring the structure in the diffuse gas and in the total mass distribution rather than in the distribution of galaxies.In principle, these measures should have different(and perhaps weaker)sensitivity to the many uncer-tain aspects of how galaxies form.Remarkably,all three measures are consistent both with each other and with the standard model at the level that quantitative comparison is currently possible36,54,56.Galaxy surveys such as those illustrated in Fig.1contain an enormous amount of infor-mation about large-scale structure.The strength of clustering is known to depend not only on galaxy luminosity,colour,morphology,gas content,star-formation activity,type and strength of nuclear activity and halo mass,but also on the spatial scale considered and on redshift.Such dependences reflect relations between the formation histories of galaxies and their larger-scale environment.Some(for example,the dependence on halo or galaxy mass)are best thought of as deriving from the statistics of the initial conditions.Others(for example the dependenceon nuclear or star-formation activity)seem more naturally associated with late-time environ-mental influences.Early studies attempted to describe the relation between the galaxy and mass distributions by a bias function.Recent data suggest that this concept is of limited value. 
Except,perhaps,on the largest scales;bias estimates depend not only on scale,redshift and galaxy properties,but also on the particular measure of clustering studied.Understanding the link between the mass and galaxy distributions requires realistic simulations of the galaxy formation process throughout large and representative regions of the Universe.Given the com-plexity of galaxy formation,such simulations must be tuned“by hand”to match as many of the observed properties of galaxies as possible.Only if clustering turns out to be insensitive to such tuning can we consider the portrayal of large-scale structure to be robust and realistic.In Fig.4,we show the time evolution of the mass and galaxy distributions in a small subregion of the largest simulation of this type yet5.The emergence of the cosmic web can be followed in stunning detail,producing a tight network offilaments and walls surrounding a foam of voids.This characteristic morphology was seen in thefirst generation of cold dark matter simulations carried out over20years ago16,but the match was not perfect;the recipe adopted to relate the galaxy and mass distributions was too crude to reproduce in detail the clustering of galaxies.It has taken models like those of Fig.4to explain why the observed galaxy autocorrelation function is close to a power law whereas the simulated dark matter autocorrelation function shows significant features5,57.Simulated autocorrelation functions for dark matter and for galaxies are shown in Fig.5 for the same times imaged in Fig.4.The shape difference between the two is very evident, and it is remarkable that at z=0the power-law behaviour of the galaxy correlations extends all the way down to10kpc,the observed size of galaxies.Similar behaviour has recently been found for luminous red galaxies in the Sloan Digital Sky Survey58.The galaxy distri-bution in this simulation also reproduces the observed dependence of present-day clusteringon luminosity and colour5as well as the observed galaxy luminosity functions,the observa-tionally inferred formation histories of elliptical galaxies,and the bimodal colour-magnitude distribution observed for galaxies59,60.A striking feature of Fig.4is the fact that while the growth of large-scale structure is very clear in the mass distribution,the galaxy distributions appear strongly clustered at all times.This difference shows up dramatically in the autocorrelation functions plotted in Fig.5 and has been a prediction of CDM theories since thefirst simulations including crude bias recipes16.A decade later when direct measurements of galaxy clustering at redshifts as high as z∼3−4found“surprisingly”large amplitudes,comparable to those found in the present-day Universe61,62,the results turned out to be in good agreement with estimates based on more detailed modelling of galaxy formation in a CDM universe63,64.In effect,the galaxies already outline the pattern of the cosmic web at early times,and this pattern changes relatively little with the growth of structure in the underlying dark matter distribution.Could the standard model be wrong?Given the broad success of theΛCDM model,is it conceivable that it might be wrong in a significant way requiring a fundamental revision?The concordance of experimental results relying on a variety of physical effects and observed over a wide range of cosmic epochs suggests that this is unlikely.Nevertheless,it is clear that some of the most fundamental questions of cosmology(what is the dark matter?the dark energy?)remain unanswered. 
In addition, some of the key observational underpinnings of the model still carry worrying uncertainties. Can we use our ever-improving measurements of large-scale structure to carry out critical tests? Perhaps the deepest reason to be suspicious of the paradigm is the apparent presence of a dark energy field that contributes ∼70% of the Universe's content and has, for the past 5 billion years or so, driven an accelerated cosmic expansion. Dark energy is problematic from a field theoretical point of view65. The simplest scenario would ascribe a vacuum energy to quantum loop corrections at the Planck scale, (ħc⁵/G)^(1/2), which is of the order of 10¹⁹ GeV, where gravity should unify with the other fundamental forces. This is more than 120 orders of magnitude larger than the value required by cosmology. Postulating instead a connection to the energy scale of quantum chromodynamics would still leave a discrepancy of some 40 orders of magnitude. A cosmological dark energy field that is so unnaturally small compared with these particle physics scales is a profound mystery.
The evidence for an accelerating universe provided by type Ia supernovae relies on a purely phenomenological calibration of the relation between the peak luminosity and the shape of the light curve. It is this that lets these supernovae be used as an accurate standard candle. Yet this relation is not at all understood theoretically. Modern simulations of thermonuclear explosions of white dwarfs suggest that the peak luminosity should depend on the metallicity of the progenitor star66,67. This could, in principle, introduce redshift-dependent systematic effects, which are not well constrained at present. Perhaps of equal concern is the observation that the decline rate of type Ia supernovae correlates with host galaxy type68,69, in the sense that the more luminous supernovae (which decline more slowly) are preferentially found in spiral galaxies. Interestingly, it has been pointed out that without the evidence for accelerated expansion from type Ia supernovae, a critical density Einstein-de Sitter universe can give a good account of observations of large-scale structure provided the assumption of a single power law for the initial inflationary fluctuation spectrum is dropped, a small amount of hot dark matter is added, and the Hubble parameter is dropped to the perhaps implausibly low value h ≃ 0.45 (ref. 70).
The CMB temperature measurements provide particularly compelling support for the paradigm. The WMAP temperature maps do, however, show puzzling anomalies that are not expected from Gaussian fluctuations71-73, as well as large-scale asymmetries that are equally unexpected in an isotropic and homogeneous space74,75. Although these signals could perhaps originate from foregrounds or residual systematics, it is curious that the anomalies seem well matched by anisotropic Bianchi cosmological models, although the models examined so far require unacceptable cosmological parameter values76. Further data releases from WMAP and future CMB missions such as PLANCK will shed light on these peculiarities of the current datasets. Perhaps the anomalous effects will go away; or they could be the first signs that the standard model needs substantial revision.
The unknown nature of the dark matter is another source of concern. Is the dark matter really "cold" and non-interacting, and is it really dark? Does it exist at all? Until the posited elementary particles are discovered, we will not have definitive answers to these questions.
Already there are hints of more complicated possibilities.It has been suggested,for instance, that theγ-ray excessflux recently detected in the direction of the Galactic Centre77might be due to self-annihilating dark matter particles78,an idea that is,in principle,plausible for a range of dark matter candidates in supersymmetricfield theories.Alternative theories of gravity,most notably modified newtonian dynamics(MOND)79have been proposed to do away with the need for dark matter altogether.Although MOND can explain the rotation curves of galaxies,on other scales the theory does not seem to fare so well.For example, although it can account for the total mass in galaxy clusters,MOND requires the presence of large amounts of unseen material within the central few kiloparsecs of the cluster cores80. It has yet to be demonstrated convincingly that MOND can reproduce observed large-scale structure starting from the initial conditions imaged in the CMB and so pass the test illustrated in Fig.1.At present the strongest challenge toΛCDM arises not from large-scale structure,but from the small-scale structure within individual galaxies.It is a real possibility that the model。
LETTERS Large-scale pattern growth of graphene films for stretchable transparent electrodesKeun Soo Kim1,3,4,Yue Zhao7,Houk Jang2,Sang Yoon Lee5,Jong Min Kim5,Kwang S.Kim6,Jong-Hyun Ahn2,3, Philip Kim3,7,Jae-Young Choi5&Byung Hee Hong1,3,4Problems associated with large-scale pattern growth of graphene constitute one of the main obstacles to using this material in device applications1.Recently,macroscopic-scale graphene films were prepared by two-dimensional assembly of graphene sheets chem-ically derived from graphite crystals and graphene oxides2,3. However,the sheet resistance of these films was found to be much larger than theoretically expected values.Here we report the direct synthesis of large-scale graphene films using chemical vapour deposition on thin nickel layers,and present two different methods of patterning the films and transferring them to arbitrary sub-strates.The transferred graphene films show very low sheet resis-tance of280V per square,with80per cent optical transparency. At low temperatures,the monolayers transferred to silicon dioxide substrates show electron mobility greater than3,700cm2V21s21 and exhibit the half-integer quantum Hall effect4,5,implying that the quality of graphene grown by chemical vapour deposition is as high as mechanically cleaved graphene6.Employing the outstanding mechanical properties of graphene7,we also demonstrate the mac-roscopic use of these highly conducting and transparent electrodes in flexible,stretchable,foldable electronics8,9.Graphene has been attracting much attention owing to its fasci-nating physical properties such as quantum electronic transport4,5,a tunable band gap10,extremely high mobility11,high elasticity7and electromechanical modulation12.Since the discovery of the first iso-lated graphene prepared by mechanical exfoliation of graphite crys-tals6,many chemical approaches to synthesize large-scale graphene have been developed,including epitaxial growth on silicon carbide (refs13,14)and ruthenium(ref.15)as well as two-dimensional assembly of reduced graphene oxides3,16–18and exfoliated graphene sheets2.Epitaxial growth provides high-quality multilayer graphene samples interacting strongly with their substrates,but electrically isolated mono-or bilayer graphene for device applications has not been made.On the other hand,the self-assembly of soluble graphene sheets demonstrates the possibility of low-cost synthesis and the fabrication of large-scale transparent films.However,these assembled graphene films show relatively poor electrical conductivity owing to the poor interlayer junction contact resistance and the structural defects formed during the vigorous exfoliation and reduc-tion processes.In this work,we develop a technique for growing few-layer graphene films using chemical vapour deposition(CVD)and successfully transferring the films to arbitrary substrates without intense mechanical and chemical treatments,to preserve the high crystalline quality of the graphene samples.Therefore,we expect to observe enhanced electrical and mechanical properties.The growth, etching and transferring processes of the CVD-grown large-scale graphene films are summarized in Fig.1.It has been known for over40years that CVD of hydrocarbons on reactive nickel or transition-metal-carbide surfaces can produce thin graphitic layers19–21.However,the large amount of carbon sources absorbed on nickel foils usually form thick graphite crystals rather than graphene films(Fig.2a).To solve this problem,thin layers of nickel of thickness less 
than300nm were deposited on SiO2/Si sub-strates using an electron-beam evaporator,and the samples were then heated to1,000u C inside a quartz tube under an argon atmosphere. After flowing reaction gas mixtures(CH4:H2:Ar550:65:200standard cubic centimetres per minute),we rapidly cooled the samples to room temperature(,25u C)at the rate of,10u C s21using flowing argon. We found that this fast cooling rate is critical in suppressing formation of multiple layers and for separating graphene layers efficiently from the substrate in the later process20.A scanning electron microscope(SEM;JSM6490,Jeol)image of graphene films on a thin nickel substrate shows clear contrast between areas with different numbers of graphene layers(Fig.2a).Transmission electron microscope(TEM;JEM3010,Jeol)images(Fig.2b)show that the film mostly consists of less than a few layers of graphene.After transfer of the film to a silicon substrate with a300-nm-thick SiO2 layer,optical and confocal scanning Raman microscope(CRM200, Witech)images were made of the same area(Fig.2c,d)22.The brightest area in Fig.2d corresponds to monolayers,and the darkest area is composed of more than ten layers of graphene.Bilayer structures appear to predominate in both TEM and Raman images for this particular sample,which was prepared from7min of growth on a 300-nm-thick nickel layer.We found that the average number of gra-phene layers,the domain size and the substrate coverage can be con-trolled by changing the nickel thickness and growth time during the growth process(Supplementary Figs1and2),thus providing a way of controlling the growth of graphene for different applications. Atomic force microscope(AFM;Nanoscopes IIIa and E,Digital Instruments)images often show the ripple structures caused by the difference between the thermal expansion coefficients of nickel and graphene(Fig.2c,inset;see also Supplementary Fig.3)19.We believe that these ripples make the graphene films more stable against mech-anical stretching23,making the films more expandable,as we will discuss later.Multilayer graphene samples are preferable in terms of mechanical strength for supporting large-area film structures,whereas thinner graphene films have higher optical transparency.We find that a,300-nm-thick nickel layer on a silicon wafer is the optimal sub-strate for the large-scale CVD growth that yields mechanically stable, transparent graphene films to be transferred and stretched after they are formed,and that thinner nickel layers with a shorter growth time yield predominantly mono-and bilayer graphene film for microelec-tronic device applications(Supplementary Fig.1c).1Department of Chemistry,2School of Advanced Materials Science and Engineering,3SKKU Advanced Institute of Nanotechnology,4Center for Nanotubes and Nanostructured Composites,Sungkyunkwan University,Suwon440-746,Korea.5Samsung Advanced Institute of Technology,PO Box111,Suwon440-600,Korea.6Department of Chemistry,Pohang University of Science and Technology,Pohang790-784,Korea.7Department of Physics,Columbia University,New York,New York10027,USA.doi:10.1038/nature077191SiO Ni/C layerCH /H /Ar Ar Cooling ~RTPatterned Ni layer (300 nm)FeCl (aq)or acids Ni-layer etchingHF/BOE -layer etching (short)Ni-layer etching PDMS/graphene Downside contact (scooping up)HF/BOE StampingFloating graphene/NiGraphene/Ni/SiO 2/Sia cNi SiFigure 1|Synthesis,etching and transfer processes for the large-scale and patterned graphene films.a ,Synthesis of patterned graphene films on thin nickel layers.b ,Etching using FeCl 3(or 
acids) and transfer of graphene films using a PDMS stamp. c, Etching using BOE or hydrogen fluoride (HF) solution and transfer of graphene films. RT, room temperature (~25 °C).
Figure 2 | Various spectroscopic analyses of the large-scale graphene films grown by CVD. a, SEM images of as-grown graphene films on thin (300-nm) nickel layers and thick (1-mm) Ni foils (inset). b, TEM images of graphene films of different thicknesses. c, An optical microscope image of the graphene film transferred to a 300-nm-thick silicon dioxide layer. The inset AFM image shows typical rippled structures. d, A confocal scanning Raman image corresponding to c. The number of layers is estimated from the intensities, shapes and positions of the G-band and 2D-band peaks. e, Raman spectra (532-nm laser wavelength) obtained from the corresponding coloured spots in c and d. a.u., arbitrary units.
Figure 3 | Transfer processes for large-scale graphene films. a, A centimetre-scale graphene film grown on a Ni(300 nm)/SiO2(300 nm)/Si substrate. b, A floating graphene film after etching the nickel layers in 1 M FeCl3 aqueous solution. After the removal of the nickel layers, the floating graphene film can be transferred by direct contact with substrates. c, Various shapes of graphene films can be synthesized on top of patterned nickel layers. d, e, The dry-transfer method based on a PDMS stamp is useful in transferring the patterned graphene films. After attaching the PDMS substrate to the graphene (d), the underlying nickel layer is etched and removed using FeCl3 solution (e). f, Graphene films on the PDMS substrates are transparent and flexible. g, h, The PDMS stamp makes conformal contact with a silicon dioxide substrate. Peeling back the stamp (g) leaves the film on a SiO2 substrate (h).
Etching nickel substrate layers and transferring isolated graphene films to other substrates is important for device applications. Usually, nickel can be etched by strong acid such as HNO3, which often produces hydrogen bubbles and damages the graphene. In our work, an aqueous iron(III) chloride (FeCl3) solution (1 M) was used as an oxidizing etchant to remove the nickel layers. The net ionic equation of the etching reaction can be represented as follows:
2Fe3+(aq) + Ni(s) → 2Fe2+(aq) + Ni2+(aq)
This redox process slowly etches the nickel layers effectively within a mild pH range without forming gaseous products or precipitates. In a few minutes, the graphene film separated from the substrate floats on the surface of the solution (Fig. 3a, b), and the film is then ready to be transferred to any kind of substrate. Use of buffered oxide etchant (BOE) or hydrogen fluoride solution removes silicon dioxide layers, so the patterned graphene and the nickel layer float together on the solution surface. After transfer to a substrate, further reaction with BOE or hydrogen fluoride solution completely removes the remaining nickel layers (Supplementary Fig. 5). We also develop a dry-transfer process for the graphene film using a soft substrate such as a polydimethylsiloxane (PDMS) stamp24. Here we first attach the PDMS stamp to the CVD-grown graphene film on the nickel substrate (Fig. 3d). The nickel substrate can be etched away using FeCl3 as described above, leaving the adhered graphene film on the PDMS substrate (Fig. 3e). By using the pre-patterned nickel substrate (Fig. 3c), we can transfer various sizes and shapes of graphene film to an arbitrary substrate. This
dry-transfer process turns out to be very useful in making large-scale graphene electrodes and devices without additional lithography processes (Fig.3f–h).Microscopically,these few-layer transferred graphene films often show linear crack patterns with an angle of 60u or 120u ,indicating a particular crystallographic edge with large crystalline domains (Supplementary Fig.1b)25.In addition,the Raman spectra measured for graphene films on nickel substrates show a strongly suppressed defect-related D-band peak (Supplementary Fig.3).This D peak grows only slightly after the transfer process (Fig.2e),indicating overall good quality of the resulting graphene film.Further optimi-zation of the transfer process with substrate control makes possible transfer yields approaching 99%(Supplementary Table 1).at 550 nm1010101010101010Stretching (%)123456789R e s i s t a n c e (k Ω)Bending radius (mm)BendingyxyxR R xy1011021031042nd 1st R e s i s t a n c e (Ω)Stretching (%)R y R xStable Stretching cycles84TrR s T r a n s m i t t a n c e (%)Wavelength (nm)0–40–60V g (V)M a g n e t o r e s i s t a n c e (k Ω)010–10–15–55420–60060R e s i s t a n c e (k Ω)V g (V)–20604020RecoveryR e s i s t a n c e (Ω)302510530630abc d Figure 4|Optical and electrical properties of the graphene films.a ,Transmittance of the graphene films on a quartz plate.The discontinuities in the absorption curves arise from the different sensitivities of the switching detectors.The upper inset shows the ultraviolet (UV)-induced thinning and the consequent enhancement of transparency.The lower inset shows the changes in transmittance,Tr,and sheet resistance,R s ,as functions ofultraviolet illumination time.b ,Electrical properties of monolayer graphene devices showing the half-integer quantum Hall effect and high electron mobility.The upper inset shows a four-probe electrical resistancemeasurement on a monolayer graphene Hall bar device (lower inset)at 1.6K.We apply a gate voltage,V g ,to the silicon substrate to control the charge density in the graphene sample.The main panel shows longitudinal (R xx )and transverse (R xy )magnetoresistances measured in this device for a magnetic field B 58.8T.The monolayer graphene quantum Hall effect isclearly observed,showing the plateaux with filling factor n 52at R xy 5(2e 2/h )21and zeros in R xx .(Here e is the elementary charge and h is Planck’s constant.)Quantum Hall plateaux (horizontal dashed lines)are developing for higher filling factors.c ,Variation in resistance of a graphene filmtransferred to a ,0.3-mm-thick PDMS/PET substrate for different distances between holding stages (that is,for different bending radii).The left inset shows the anisotropy in four-probe resistance,measured as the ratio,R y /R x ,of the resistances parallel and perpendicular to the bending direction,y .The right inset shows the bending process.d ,Resistance of a graphene film transferred to a PDMS substrate isotropically stretched by ,12%.The left inset shows the case in which the graphene film is transferred to an unstretched PDMS substrate.The right inset shows the movement of holding stages and the consequent change in shape of the graphene film.NATURELETTERS3For the macroscopic transport electrode application,the optical and electrical properties of131cm2graphene films were respectively measured by ultraviolet–visible spectrometer and four-probe Van der Pauw methods(Fig.4a,b).We measured the transmittance using an ultraviolet–visible spectrometer(UV-3600,Shimazdu)after transfer-ring the floating graphene film to a 
quartz plate(Fig.4a).In the visible range,the transmittance of the film grown on a300-nm-thick nickel layer for7min is,80%,a value similar to those found for previously studied assembled films2,3.Because the transmittance of an individual graphene layer is,2.3%(ref.26),this transmittance value indicates that the average number of graphene layers is six to ten.The transmit-tance can be increased to,93%by further reducing the growth time and nickel thickness,resulting in a thinner graphene film(Supple-mentary Fig.1).Ultraviolet/ozone etching(ultraviolet/ozone cleaner, 60W,BioForce)is also useful in controlling the transmittance in an ambient condition(Fig.4a,upper inset).Indium electrodes were deposited on each corner of the square(Fig.4a,lower inset)to mini-mize contact resistance.The minimum sheet resistance is,280V per square,which is,30times smaller than the lowest sheet resistance measured on assembled films2,3.The values of sheet resistance increase with the ultraviolet/ozone treatment time,in accordance with the decreasing number of graphene layers(Fig.4a).For microelectronic application,the mobility of the graphene film is critical.To measure the intrinsic mobility of a single-domain gra-phene sample,we transferred the graphene samples from a PDMS stamp to a degenerate doped silicon wafer with a300-nm-deep ther-mally grown oxide layer.Monolayer graphene samples were readily located on the substrate from the optical contrast26and their iden-tification was subsequently confirmed by Raman spectroscopy22. Electron-beam lithography was used to make multi-terminal devices (Fig.4b,lower inset).Notably,the multi-terminal electrical measure-ments showed that the electron mobility is,3,750cm2V21s21at a carrier density of,531012cm22(Fig.4b).For a high magnetic field of8.8T,we observe the half-integer quantum Hall effect(Fig.4b) corresponding to monolayer graphene4,5,indicating that the quality of CVD-grown graphene is comparable to that of mechanically cleaved graphene(Supplementary Fig.6)6.In addition to the good optical and electrical properties,the gra-phene film has excellent mechanical properties when used to make flexible and stretchable electrodes(Fig.4c,d)7.We evaluated the fold-ability of the graphene films transferred to a polyethylene terephthalate (PET)substrate(thickness,,100m m)coated with a thin PDMS layer (thickness,,200m m;Fig.4c)by measuring resistances with respect to bending radii.The resistances show little variation up to the bending radius of2.3mm(approximate tensile strain of6.5%)and are perfectly recovered after unbending.Notably,the original resistance can be restored even for the bending radius of0.8mm(approximate tensile strain of18.7%),exhibiting extreme mechanical stability in compari-son with conventional materials used in flexible electronics27.The resistances of graphene films transferred to pre-strained and unstrained PDMS substrates were measured with respect to uniaxial tensile strain ranging from0to30%(Fig.4d).Similar to the results in the folding experiment,the transferred film on an unstrained sub-strate recovers its original resistance after stretching by,6%(Fig.4d, left inset).However,further stretching often results in mechanical failure.Thus,we tried to transfer the film to pre-strained substrates28 to enhance the electromechanical stabilities by creating ripples similar to those observed in the growth process(Fig.2c,inset;Supplementary Fig.4).The graphene transferred to a longitudinally strained PDMS substrate does not show much enhancement,owing to 
the transverse strain induced by Poisson’s effect29.To prevent this problem,the PDMS substrate was isotropically stretched by,12%before transfer-ring the film to it(Fig.4d).Surprisingly,both longitudinal and trans-verse resistances(R y and R x)appear stable up to,11%stretching and show only one order of magnitude change at,25%stretching.We suppose that further uniaxial stretching can change the electronic band structures of graphene,leading to the modulation of the sheet resistance.These electromechanical properties thus show our graphene films to be not only the strongest7but also the most flexible and stretchable conducting transparent materials so far measured26. In conclusion,we have developed a simple method to grow and transfer high-quality stretchable graphene films on a large scale using CVD on nickel layers.The patterned films can easily be transferred to stretchable substrates by simple contact methods,and the number of graphene layers can be controlled by varying the thickness of the catalytic metals,the growth time and/or the ultraviolet treatment time.Because the dimensions of the graphene films are limited sim-ply by the size of the CVD growth chamber,scaling up can be readily achieved,and the outstanding optical,electrical and mechanical properties of the graphene films enable numerous applications including use in large-scale flexible,stretchable,foldable transparent electronics8,9,30.Received5October;accepted8December2008.Published online14January2009.1.Geim,A.K.&Novoselov,K.S.The rise of graphene.Nature Mater.6,183–191(2007).2.Li,X.et al.Highly conducting graphene sheets and Langmuir–Blodgett films.Nature Nanotechnol.3,538–542(2008).3.Eda,G.,Fanchini,G.&Chhowalla,rge-area ultrathin films of reducedgraphene oxide as a transparent and flexible electronic material.NatureNanotechnol.3,270–274(2008).4.Novoselov,K.S.et al.Two-dimensional gas of massless Dirac fermions ingraphene.Nature438,197–200(2005).5.Zhang,Y.,Tan,J.W.,Stormer,H.L.&Kim,P.Experimental observation of thequantum Hall effect and Berry’s phase in graphene.Nature438,201–204(2005).6.Novoselov,K.S.et al.Electric field effect in atomically thin carbon films.Science306,666–669(2004).7.Lee,C.,Wei,X.,Kysar,J.W.&Hone,J.Measurement of the elastic properties andintrinsic strength of monolayer graphene.Science321,385–388(2008).8.Kim,D.-H.et al.Stretchable and foldable silicon integrated circuits.Science320,507–511(2008).9.Sekitani,T.et al.A rubberlike stretchable active matrix using elastic conductors.Science321,1468–1472(2008).10.Han,M.Y.,Oezyilmaz,B.,Zhang,Y.&Kim,P.Energy band gap engineering ofgraphene nanoribbons.Phys.Rev.Lett.98,206805(2007).11.Bolotin,K.I.et al.Ultrahigh electron mobility in suspended graphene.Solid StateCommun.146,351–355(2008).12.Bunch,J.S.et al.Electromechanical resonators from graphene sheets.Science315,490–493(2008).13.Ohta,T.,Bostwick,A.,Seyller,T.,Horn,K.&Rotenberg,E.Controlling theelectronic structure of bilayer graphene.Science313,951–954(2006).14.Berger,C.et al.Electronic confinement and coherence in patterned epitaxialgraphene.Science312,1191–1196(2006).15.Sutter,P.W.,Flege,J.-I.&Sutter,E.A.Epitaxial graphene on ruthenium.NatureMater.7,406–411(2008).16.Dikin,D.A.et al.Preparation and characterization of graphene oxide paper.Nature448,457–460(2007).17.Stankovich,S,et al.Graphene-based composite materials.Nature442,282–286(2006).18.Li,D.,Muller,M.B.,Gilje,S.,Kaner,R.B.&Wallace,G.G.Processable aqueousdispersions of graphene nanosheets.Nature Nanotechnol.3,101–105(2008). 
19.Obraztsov,A.N.,Obraztsova,E.A.,Tyurnina,A.V.&Zolotukhin,A.A.Chemicalvapor deposition of thin graphite films of nanometer thickness.Carbon45,2017–2021(2007).20.Yu,Q,et al.Graphene segregated on Ni surfaces and transferred to insulators.Appl.Phys.Lett.93,113103(2008).21.Reina,A.et rge area,few-layer graphene films on arbitrary substrates bychemical vapor deposition.Nano Lett.article ASAP atÆ/doi/ abs/10.1021/nl801827væ(2008).22.Ferrari,A.C.et al.Raman spectrum of graphene and graphene layers.Phys.Rev.Lett.97,187401(2006).23.Khang,D.-Y.et al.Individual aligned single-wall carbon nanotubes on elastomericsubstrates.Nano Lett.8,124–130(2008).24.Yang,P.et al.Mirrorless lasing from mesostructured waveguides patterned bysoft lithography.Science287,465–467(2000).25.Li,X.,Wang,X.,Zhang,L.,Lee,S.&Dai,H.Chemically derived,ultrasmoothgraphene nanoribbon semiconductors.Science319,1229–1232(2008).26.Nair,R.R.et al.Fine structure constant defines visual transparency of graphene.Science320,1308(2008).27.Lewis,J.Material challenge for flexible organic devices.Mater.Today9,38–45(2006).28.Sun,Y.,Choi,W.M.,Jiang,H.,Huang,Y.Y.&Rogers,J.A.Controlled buckling ofsemiconductor nanoribbons for stretchable electronics.Nature Nanotechnol.1, 201–207(2006).LETTERS NATURE 429.Khang,D.-Y.,Jiang,H.,Huang,Y.&Rogers,J.A.A stretchable form of single-crystal silicon for high-performance electronics on rubber substrates.Science311, 208–212(2006).30.Ko,H.C.et al.A hemispherical electronic eye camera based on compressiblesilicon optoelectronics.Nature454,748–753(2008).Supplementary Information is linked to the online version of the paper at /nature.Acknowledgements We thank J.H.Han,J.H.Kim,H.Lim,S.K.Bae and H.-J.Shin for assisting in graphene synthesis and analysis.This work was supported by the Korea Science and Engineering Foundation grant funded by the Korea Ministry for Education,Science and Technology(Center for Nanotubes and Nanostructured Composites R11-2001-091-00000-0),the Global Research Lab programme (Korea Foundation for International Cooperation of Science and Technology),the Brain Korea21project(Korea Research Foundation)and the information technology research and development programme of the Korea Ministry of Knowledge Economy(2008-F024-01).Author Contributions B.H.H.planned and supervised the project;J.-Y.C.supported and assisted in supervision on the project;S.Y.L,J.M.K.and K.S.K.advised on the project;K.S.K.and B.H.H.designed and performed the experiments;B.H.H.,P.K., J.-H.A and K.S.K.analysed data and wrote the manuscript;Y.Z.and P.K.made the quantum Hall devices and the measurements;and H.J.and J.-H.A.helped with the transfer process and the electromechanical analyses.Author Information Reprints and permissions information is available at /reprints.Correspondence and requests for materials should be addressed to B.H.H.(byunghee@)or J.-Y.C.(jaeyoung88.choi@).NATURE LETTERS5。
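The transmittance figures reported in the letter above lend themselves to a quick back-of-the-envelope check. The short Python sketch below is not part of the authors' analysis: the function names and the simple assumption that each layer independently absorbs ~2.3% of visible light (the per-layer absorbance the letter quotes from ref. 26) are mine, and it is offered only as an illustration of how the layer count is inferred from the measured transmittance.

```python
import math

# Rough check of the layer-count estimate quoted in the letter. The model
# below (each layer independently absorbs ~2.3% of visible light) and the
# function names are an illustration, not the authors' analysis.
ABSORBANCE_PER_LAYER = 0.023   # per-layer absorbance quoted from ref. 26

def layers_from_transmittance(transmittance: float) -> float:
    """Average number of stacked layers implied by an optical transmittance."""
    return math.log(transmittance) / math.log(1.0 - ABSORBANCE_PER_LAYER)

def transmittance_from_layers(n_layers: float) -> float:
    """Expected transmittance of a film of n_layers under the same model."""
    return (1.0 - ABSORBANCE_PER_LAYER) ** n_layers

if __name__ == "__main__":
    for tr in (0.80, 0.93):
        print(f"Tr = {tr:.0%} -> ~{layers_from_transmittance(tr):.1f} layers")
    print(f"6 layers -> Tr = {transmittance_from_layers(6):.1%}")
```

Under this simple model the ~80% transmittance of the film grown for 7 min on 300-nm nickel corresponds to roughly nine to ten layers, within the six-to-ten-layer range inferred in the letter (which also allows for thickness variations across the film), and the thinned ~93% films come out at around three layers.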
•Article •Advances in Polar Sciencedoi: 10.13679/j.advps.2018.3.00165September 2018 Vol. 29 No. 3: 165-180A glacial control on the eruption rate of Mt Erebus, AntarcticaMaximillian VAN WYK de VRIES *Department of Earth Sciences, University of Minnesota, Twin Cities campus, Minneapolis 55455, MN, United StatesReceived 17 May 2018; accepted 31 July 2018Abstract Mt Erebus is the most active Antarctic volcano, on the flanks of the world’s largest ice sheet. Despite this, the interactions between its eruptions and the ice cover have not been studied in detail. Focusing on the most recent deglaciation, we build a glacial retreat model and compare this to recent lava geochemistry measurements to investigate the processes involved. This analysis exposes a previously unknown link between Antarctic glaciation and eruptions, of vital importance to the understanding of volcanism in this context. We find that deglaciation led to rapid emptying of the shallow magma plumbing system and a resulting peak in eruption rates synchronous with ice retreat. We also find that the present day lavas do not represent steady state conditions, but originate from a source with up to 30% more partial melting than older >4 ka eruptions. This finding that deglaciation affects volcanism both on short and longer timescales may prompt a re-evaluation of eruptions in glaciated and previously glaciated terrains both in Antarctica and beyond.Keywords glaciovolcanism, Mt Erebus, Antarctica, Earth SystemsCitation : Van Wyk de Vries M. A glacial control on the eruption rate of Mt Erebus, Antarctica. Adv Polar Sci, 2018, 29(3):165-180, doi: 10.13679/j.advps.2018.3.001651 IntroductionContinental scale geological changes are typically slow processes; mountain belts form and plates collide over tens to hundreds of millions of years. Ice sheets are one important exception to this rule, bodies of ice covering millions of square kilometres can grow and recede over mere 100s to 1000s of years. In glaciated zones this rapid change exhibits a strong control on both the surface and subsurface. V olcanism is sensitive to pressure variations, thus joint glacial and volcanic regions are particularly dynamic (Smellie and Edwards, 2016).The relationship between glaciation and crustal loading is fairly well understood: when glaciation is initiated the subsurface is temporarily compressed as the ice grows faster than it is isostatically compensated. Conversely when deglaciation occurs the crust and mantle are*Corresponding author, E-mail: vanwy048@temporarily decompressed as the crust rebounds and the denser mantle slowly flows back (Smellie and Edwards, 2016). One often overlooked consequence of this is that volcanism can be locally inhibited by the growth of ice sheets and magnified as ice retreats. Wherever the mantle is near its melting point, changes in ice coverage affect mantle melt fraction and thus eruption rate (Jull and McKenzie, 1996).I ncreased eruption rates have been identified in many previously glaciated landscapes (Huybers and Langmuir, 2009). For instance, volcanism in the rocky mountains was higher in the early Holocene corresponding to the removal of the Cordilleran and Laurentide ice sheets (Watt et al., 2013). In Iceland, volcanism rates in the thousand years immediately following deglaciation (10–9 ka) were almost 30 times background glacial and present day rates (Schmidt et al., 2013). 
As well as being apparent in the abundance and thickness of flows erupted at that period, increased melt production in Iceland has been confirmed166Van Wyk de Vries M . Adv Polar Sci September(2018) Vol. 29 No. 3via numerical modelling (Jull and McKenzie, 1996) and identified in the geochemistry of lavas (Hardarson and Fitton, 1991).Mt Erebus is the youngest and most active volcano on the heavily glaciated Antarctic plate (LeMasurier and Thomson, 1990). Mt Erebus is a 3792 m high polygenetic stratovolcano, composed of a stratocone built upon a mafic shield base, similar to Mount Etna (Esser and McIntosh, 2004). The eruptive centre itself is 1.3 Ma old, and has been erupting at an increased rate for the last 250000 years (Parmelee et al., 2015). To this day Mt Erebus hosts a convecting phonolite lava lake and a number of active ice fumaroles (Kelly et al., 2008; Wardell et al., 2004). Figure 1 shows the location and overall geological context of Mt Erebus - note the abundance of volcanism throughout WestAntarctica.Figure 1 Overall geological, volcanic and glaciological context of Mt Erebus. Note the abundance of volcanism on the Antarctic plate, especially in the deep subglacial basins of West Antarctica. Mt Erebus is located on one branch (Terror Rift) of the West Antarctic Rift System that bisects the continent. V olcano locations from Van Wyk de Vries et al. (2017); local Mt Erebus geology from Rilling et al. (2007).V olcanism in Antarctica is by no means unusual, particularly in the western regions. A large continental rift, the West Antarctic Rift System, extends from the Ross Sea to the Antarctic Peninsula and is associated with extensive volcanism (Van Wyk de Vries et al., 2017). The region is still strongly data-limited, however around 40 exposed and nearly 100 subglacial volcanoes have been identified (Figure 1). Recent studies suggest that runaway deglaciation of sections of the West Antarctic ice sheet is possible, or even likely (DeConto and Pollard, 2016), thus many of these volcanoes may soon experience reductions in pressure similar to around Mt Erebus 10–5 ka (Hall and Denton, 1999). Understanding the effects of deglaciation on Mt Erebus can thus play an important role in forecasting the future of volcanic activity in Antarctica, and may be useful for accurately measuring future ice-melt rates.M t Erebus has been active for much of the last 250 ka,A glacial control on the eruption rate of Mt Erebus, Antarctica167so the underlying mantle necessarily oversteps its solidus to some degree. Figure 2 illustrates how the growth and retreat of ice sheets can affect the mantle geotherm and generate additional melt. Thickening and thinning of ice sheets affects the pressure in the underlying mantle and can raise (deglaciation) or depress (glaciation) the geotherm from its average position. In Figure 2a, a typically non-volcanic region becomes active during deglaciation periods. Note that this does nevertheless require a mantle close to its melting point to be effective; most non-volcanic regions will not melt even during deglaciation. In Figure 2b, a moderately volcanic region usually generates small amounts of melt, and experiences peaks in melt production during deglaciation. This same region may also become volcanically inactive during the glacial advances due to depression of the geotherm. In Figure 2c, a strongly volcanic region naturally generates large amounts of melt. 
These conditions make it particularly sensitive to changes in glacial loading-large peaks in volcanic activity would be expected during deglaciation. These peaks may however be rapidly overprinted as volcanism rates remain high in steady state conditions and moderate even during glacialadvances.Figure 2 Schematic illustration of how both glaciation and deglaciation can affect the generation of melt in the subsurface. See Huybers and Langmuir (2009) and Jull and McKenzie (1996) for more details.Another potential source of lava is the emptying of already established shallow magma chambers due to rapid loss of overburden pressure. Recent advances in both the dating of Erebus’s <10 ka lava flows (Parmelee et al., 2015) and the timing of glacial retreat in the region (Anderson et al., 2014; Bentley et al., 2014; Ingólfsson, 2004) makes a study of these processes more reliable. In this study we aim to: (i) Determine whether ice loss during the last interglacial period stimulated higher eruption rates at Mt Erebus.(ii) If so, constrain what igneous, glacial or tectonic processes generated this increase in activity.(iii) Establish whether present day Mt Erebus activity is steady state, or whether it is still adapting to past glacial changes.168Van Wyk de Vries M . Adv Polar Sci September(2018) Vol. 29 No. 3First we build up a model to evaluate the magnitude of changes around Mt Erebus and what effect this could have on the rate of volcanism. We will then follow that up with an analysis of the geochemistry of Mt Erebus’s eruptive products. Finally, these results will be considered within the context of West Antarctica as a whole to assess their relevance to other glaciovolcanic systems.2 Coupled deglaciation modelThe timing of glacial retreat and maximum ice thickness in the Ross Sea has been the focus of many studies (Anderson et al., 2014; Bentley et al., 2014; Ingólfsson, 2004; Hall and Denton, 1999). Bathymetric studies show glacial erosion marks, glacial lineations and moraines up to the edge of the continental shelf, confirming the extent of ice at the glacial maximum (Anderson et al., 2014; Bentley et al., 2014). The last glacial maximum grounding line is in places over 1000 km beyond its current position, which is a measure of the volume of ice lost. The inland reaches of the Antarctic ice sheet are not thought to have changed much in the last 30 ka, and greatest ice loss occurred in the Ross and Weddell seas (Bentley et al., 2014; Ingólfsson, 2004; Hall and Denton, 1999).The Ross Sea’s current bathymetry is made up of deep glacial troughs and wide plateaus due to differential glacial erosion and graben-horst style rifting (Paulsen et al., 2014).There is some lateral variation in depth, but the average is close to 500 m below sea level (Fretwell et al., 2013). Marine based ice sheets are often prone to rapid retreat, especially if strong grounding lines are breached. Recent studies in Greenland have shown that bases of glaciers exposed to warm sea waters can be rapidly eroded at the contact (Rignot et al., 2010). The large ice shelves present in Antarctica’s Ross and Weddell seas are remnants of former ice sheets covering the area that have been undercut by warm sea waters with only floating ice remaining. Ice shelves are composed of floating ice, so do not exert any extra load on the crust.Sea based ice is thinned mainly by rapid grounding line retreat and basal melting by seawater, whereas land based ice in cold regions thins mostly through excess ice outflow (dynamic thinning). 
The two components are linked as land based ice retreat rate is related to marine grounding line position. Figure 3 shows the two main hypotheses for ice retreat in the Ross Sea, one with a gradual retreat perpendicular to the Victoria Land coast, and the other with rapid growth of a central embayment (Lee et al., 2017). The discrepancy between the models stems from an absence of data from the central part of the Ross Ice Shelf; however, the retreat is much better constrained around Mt Erebus. Note that both models involve the grounding line retreating past Mt Erebus between 8 ka and 7 ka, and involve similar timings of inland ice retreat on Victoria Land.
Figure 3 Depiction of the two main theories for glacial retreat in the Ross Sea, modified from Lee et al. (2017) and Anderson et al. (2014). The differences in timing are relatively minor in the Mt Erebus area where retreat has been well constrained (e.g. Hall and Denton, 1999).
a) Marine deglaciation
Three main factors influence the retreat of marine grounded ice sheets: calving rate, basal melting and surface melting. In this region, surface melting is negligible (cold polar climate) and calving can only occur on the ice sheet boundary exposed to basal melting. By assuming calving ice was exposed to basal melting long enough to be floating (ice shelf conditions), calving is also discounted. This leaves only one term: basal melting (b) due to warm seawater. This melting only affects ice on the marine side of the grounding line, so the rate of grounding line retreat must be defined. A coordinate system is set up with x parallel to the Victoria Land coast, y perpendicular to this coast and z vertical. The ice sheet is considered uniform in the y direction, so this is a relation between x and t: x = θ(t), with θ(t) a function of time measuring the retreat of the grounding line. To measure θ(t), grounding line position data (Lee et al., 2017; Anderson et al., 2014) were used, the 10 ka position being taken as the starting point (x = 0). A polynomial function was fitted to these data (R² = 0.998), giving x = θ(t) = 45t² − 7.5t, which accurately depicts the ice retreat rate until around 4 ka. Basal melting b is thus set as b = 0 for x > θ(t) and b = k, with k a constant describing the melt rate at x < θ(t). Data suggest the initial ice sheet thickness was 650 m, and the sea bed has an average depth of 500 m (Anderson et al., 2014; Fretwell et al., 2013). A value of 0.1 m·a⁻¹ was used here for k; this is an intermediate melt rate for ice interacting with sea water (Depoorter et al., 2013; Jenkins and Doake, 1991). This gives a thickness z: z = 650 m for x > θ(t), and z = 650 − b(t − tg) for x < θ(t), with t time and tg the date at which the grounding line retreated past this point.
Since water has a higher density than ice, only a small portion of the marine ice melted will actually result in decompression before the ice sheet reaches ice shelf status equivalent to open sea. The densities of water and ice are respectively 1000 kg·m⁻³ and 900 kg·m⁻³. Using this we can calculate that the mass of the 500 m thick water layer Tw present today (Anderson et al., 2014; Fretwell et al., 2013) is equivalent to a layer of ice with a thickness Ti given by:
Ti = (ρw/ρi)·Tw = (1000/900) × 500 = 555 m,
with ρw and ρi the respective densities of water and ice. The ice overload is 650 − 555 = 95 m; only 95 m of the total ice column affect pressure in the underlying mantle when removed (ignoring eustatic changes).
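A minimal numerical sketch of this marine-ice bookkeeping is given below. It is not the author's code: the units (x in kilometres, t in thousands of years measured forward from 10 ka) are inferred from the quoted retreat distances and timings, and the variable names are arbitrary. The sketch simply encodes θ(t) = 45t² − 7.5t, the basal melt rate k = 0.1 m·a⁻¹, and the ice-equivalent thickness of the 500 m water column.

```python
# Sketch of the marine-ice model described above (a transcription, not the
# author's code). Units are inferred: x in km, t in kyr after 10 ka.

RHO_W, RHO_I = 1000.0, 900.0    # densities of water and ice, kg m^-3
H0 = 650.0                      # initial marine ice thickness, m
SEA_DEPTH = 500.0               # average Ross Sea depth, m
K_BASAL = 0.1                   # basal melt rate k seaward of the grounding line, m/yr
G = 9.8                         # gravitational acceleration, m s^-2

def grounding_line_km(t_kyr: float) -> float:
    """Grounding-line retreat x (km) at time t (kyr after 10 ka): 45t^2 - 7.5t."""
    return 45.0 * t_kyr**2 - 7.5 * t_kyr

def marine_thickness_m(t_kyr: float, t_g_kyr: float) -> float:
    """Ice thickness at a point that the grounding line passed at time t_g (kyr)."""
    if t_kyr <= t_g_kyr:
        return H0
    return max(H0 - K_BASAL * (t_kyr - t_g_kyr) * 1000.0, 0.0)

# Only ice standing above flotation matters for crustal loading once it melts:
ice_equiv = RHO_W / RHO_I * SEA_DEPTH   # 555.6 m, rounded to 555 m in the text
overload = H0 - 555.0                   # ~95 m of marine ice that loads the crust
pressure_drop_pa = RHO_I * G * overload # ~0.84 MPa, the value used in the text

if __name__ == "__main__":
    print(f"ice equivalent of the water column: {ice_equiv:.0f} m")
    print(f"effective overload: {overload:.0f} m -> {pressure_drop_pa/1e6:.2f} MPa")
    print(f"grounding line at t = 2.5 kyr: {grounding_line_km(2.5):.0f} km")
    t_g = 2.5   # grounding line passes the Mt Erebus longitude (~260 km) near 7.5 ka
    for t in (3.0, 3.5):
        print(f"marine ice thickness there at t = {t} kyr: "
              f"{marine_thickness_m(t, t_g):.0f} m")
```

With these assumptions the grounding line reaches the 260 km mark at roughly t = 2.5 kyr (about 7.5 ka), and the column thins to the 555 m flotation thickness about a thousand years later, consistent with the timings quoted above.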
After this point the ice sheet transitions into an ice shelf and subsequent melt has no effect. Figure 4 shows the evolution of the marine sections from an ice sheet to a retreating ice shelf from 9 ka to present day (the thickness is simplified to a uniform 650 m at 10 ka). The model was run with a temporal resolution of 500 years. Mt Erebus is a distance of 260 km from the 10 ka grounding line, and at this distance the ice sheet–shelf transition occurs between 7 ka and 6 ka. This model also reproduces the present day thickness of the Ross Ice Shelf with an error of only 50 m (within ±20%).
Figure 4 Modelled marine based ice thickness through time, representing the rapid undercutting of the Ross Sea ice by warm seawaters. Despite simplifications, the model does a reasonable job at reproducing the present day ice shelf thickness around Mt Erebus.
The reduction in pressure generated by this glacial unloading can be calculated (assuming an ice density of 900 kg·m⁻³). Pressure P is given by P = ρgh, with g the acceleration due to gravity and h the thickness of the layer. Here all 95 m of relevant marine ice are lost, so P = 9.8 × 900 × 95 = 8.4 × 10⁵ Pa = 0.84 MPa.
b) Grounded ice loss
Land based (grounded) ice reacts differently to marine ice as the grounding line will not be undercut by warm seawater. As a result basal and surface melting are both negligible terms in the overall mass balance. However, as the marine ice sheet retreats, the land ice is no longer in equilibrium and outflow will dominate inflow. This results in a steady thinning of the ice without significant melt. The rate at which thinning occurs in a given area should increase once ice is no longer grounded. A thinning term τ (Depoorter et al., 2013; Rignot et al., 2010) can be defined as: τ = 0.01 m·a⁻¹ for x > θ(t) and τ = 0.02 m·a⁻¹ for x < θ(t). As the land and marine based ice sheets are defined in the same coordinate system, this θ(t) term is the same as that defined in the previous section. At 10 ka the average thickness of ice on the coast inland of Mt Erebus was 550 m (Anderson et al., 2014) and the current ice is on average 100 m thick (Fretwell et al., 2013), which means 450 m of ice were lost. In contrast to marine ice, all of this ice column will contribute to depressurisation when removed.
Figure 5 presents the coastal ice sheet's thickness against distance inland throughout time. A sharp inflexion in the thickness profiles shows the retreat of the grounding line and its thinning effect on land based ice. The reduction in pressure ΔP is given by ΔP = ρgΔh, with Δh the total ice lost between 10 ka and a given time t. However, given that Mt Erebus is 60 km offshore (Fretwell et al., 2013), it will experience only a portion of this pressure drop. The flexural equation (Turcotte and Schubert, 2002) can be used to calculate what percentage of the maximum loading is experienced at Mt Erebus. This equation relates an increase
or decrease in pressure w to the distance γ from the load:
D·d⁴w/dγ⁴ + P·d²w/dγ² + (ρm − ρw)·g·w = q(γ),
with P the horizontal pressure acting on the plate (positive if compressional, negative if extensional), g the acceleration due to gravity, ρm the density of the mantle (3300 kg·m⁻³), ρw the density of the load (here the load is ice, so 900 kg·m⁻³), D the flexural rigidity, a measure of how well these loads are distributed over distance (Capra and Dietrich, 2008; Kaufmann et al., 2005), and q(γ) the vertical loading causing the flexure.
Figure 5 Modelled retreat of coastal ice on the Victoria coast/Transantarctic Mountains margin. A minimum ice thickness was set at 100 m to reproduce present day thicknesses in the region (Fretwell et al., 2013). Note a rapid increase in ice loss after the grounding line retreats south of this point between 8 and 7 ka.
We can assume that the horizontal pressure gradients are negligible (P = 0) and that the loading q(γ) is 0 when γ is not 0. This will model the flexure beyond the edge of a load (here beyond the edge of the land based ice sheet) in an area in tectonic equilibrium (over the timescales considered). This simplifies the flexural equation to:
D·d⁴w/dγ⁴ + (ρm − ρw)·g·w = 0.
Using the boundary conditions dw/dγ = 0 at γ = 0 and w → 0 as γ → ∞, this equation can be solved to give:
w(γ) = w0·e^(−γ/α)·(cos(γ/α) + sin(γ/α)),
with w0 the depression or pressure overload at γ = 0 and α the flexural parameter (Turcotte and Schubert, 2002). Finally, to simplify the comparison with values from the glacial model, we define a normalised value β(γ) such that:
β(γ) = w(γ)/w0 × 100.
β(γ) is a dimensionless quantity that describes what percentage of the central loading w0 is induced at distance γ from this load. Figure 6 shows a plot of β(γ) against γ, showing the fraction of the land based ice load pressurisation that is transmitted to Mt Erebus (60 km offshore from the coastline). The bulge and decay rate predicted in the model closely fit observations of flexure near oceanic islands and seamounts, and post-glacial isostatic adjustment uplift values from Scandinavia and North America (Watts, 2002).
Figure 6 Decay in compression away from the edge of a central load ('broken plate' model) given by the isostatic flexure equation (Turcotte and Schubert, 2002), with Mt Erebus labelled.
The coastal pressure reduction values can be multiplied by the flexural parameter for 60 km to give the pressure reduction at Mt Erebus. β(60) = 69.3%, so the pressure reduction at Mt Erebus is 69.3% of that at the coast. Mt Erebus is assumed to be in contact with the ocean, so flexure is not considered for the marine based ice retreat. Using this result, the combined pressure loss due to both the grounded and marine ice can be calculated. A total pressure drop of 3.66 MPa occurred between 9 and 3.5 ka, with peak decompression at 6 ka. Figure 7 shows a plot of this pressure drop through time along with the ages of Mt Erebus surface lavas calculated by Parmelee et al. (2015). The timing of the pressure reduction coincides well with the peak in lava age (also 9–3.5 ka), although eruptions are more discontinuous. There are two main ways in which depressurisation could affect eruption rate: an increase in the degree of partial melting at depth and the rapid emptying of pre-existing shallow magma chambers.
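For reference, the normalised flexural decay β(γ) defined above can be evaluated with the short sketch below. The flexural parameter α is not quoted in the text, so the value used here (α ≈ 84.5 km) is back-calculated so that β(60 km) matches the stated 69.3%; it should be treated as an illustration rather than the value actually used by the author.

```python
import math

# Normalised flexural decay beta(gamma) from the solution above. The flexural
# parameter alpha is not given in the paper; alpha = 84.5 km is back-calculated
# here so that beta(60 km) reproduces the stated 69.3% (illustrative only).
ALPHA_KM = 84.5   # assumed flexural parameter

def beta(gamma_km: float, alpha_km: float = ALPHA_KM) -> float:
    """Percentage of the central load change felt a distance gamma from the
    edge of the load: 100 * exp(-u) * (cos u + sin u), with u = gamma/alpha."""
    u = gamma_km / alpha_km
    return 100.0 * math.exp(-u) * (math.cos(u) + math.sin(u))

if __name__ == "__main__":
    for d_km in (0, 30, 60, 120, 240):
        print(f"beta({d_km:3d} km) = {beta(d_km):6.1f} %")
```

The small negative values a few hundred kilometres from the load edge correspond to the flexural bulge referred to above.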
Figure 7 Comparison between the postglacial decompression pulse predicted from this deglaciation model and the age distribution of summit lavas from Parmelee et al. (2015). There is a close association between the two that suggests they may be linked.
c) Shallow magma chamber release
Whilst deglaciation does affect the pressure state in the mantle, the relative change in overburden pressure (ΔP/P, the pressure drop divided by the total post-deglaciation pressure) will be low at depth. At shallow levels the crust's temperature is far too low for any partial melt to occur; however, ΔP/P can be very high (and can even exceed 1 at depths of less than 150 m for the pressure drop calculated above). As a result of this, the local environment of any previously stable magma chambers, sills or dykes will drastically change. Similarly to how major volcanic edifice avalanches can induce eruptions (Siebert, 1984), rapid deglaciation can relieve pressure on previously stagnant bodies of magma and permit them to escape to the surface. Deglaciation is thought to involve not only reduced vertical stresses but also lesser horizontal stresses and increased regional strain rates (Watt et al., 2013). This joint action serves to promote opening of cracks, dyke formation and fault motion, all of which can induce eruptions. Figure 8 describes schematically the effect deglaciation can have on shallow magma chambers.
Figure 8 Schematic block diagram of Mt Erebus and the underlying crust-mantle, including the shallow magma plumbing system. This figure shows how the removal of superficial ice (and the associated drop in overburden pressure) can cause rapid drainage of shallow magma chambers and increased eruption rates.
The exact additional volumes of lava that this process would provide are uncertain, as they depend on the volume of magma stored in sensitive shallow magma chambers during deglaciation. Present day volcanological studies of Mt Erebus suggest that it is underlain by a fairly extensive magma plumbing system (e.g. Oppenheimer et al. 2011; Esser et al. 2004) that could have been tapped during deglaciation. As the stable fractionation of magma in these shallow chambers was interrupted by deglaciation, magmas erupted as a result would be more primitive (less fractionated) than non-affected lavas. In particular, compatible element concentrations (Co, Ni, V, possibly Eu, Ca, Mg, etc.; all largely insensitive to differences in degree of partial melting) would be higher in affected magmas as the relevant mineral phases would not have fully crystallised out of the melt. Incompatible elements (and even more so incompatible element ratios) would be much less affected by this shorter fractionation.
d) Mantle decompression melting
We used the MELTS software to quantify the effect of this decrease in pressure on mantle melting under Mt Erebus (pMELTS version 5.6.1: Ghiorsio and Sack (1995), Asimow and Ghiorsio (1998)). Defining a series of pressure and temperature steps allows MELTS to calculate a change in composition over this range. In this case we do not need to know the absolute melt fraction at a specific depth, only the sensitivity to changes in pressure, ΔF/ΔP, with ΔF the change in melt fraction and ΔP the change in pressure. Questions remain about the exact composition and temperature of the lithosphere below Mt Erebus, so appropriate lithospheric averages were used (Xia and Hao, 2013; Green et al., 2010; McKenzie and O'Nions, 1991). Bulk composition and temperature are considered constant at a given depth.
For each depth, ΔF/ΔP was calculated for three temperatures (within ±10% of the average geotherm) and averaged. ΔF/ΔP measurements were taken at 4 pressure ranges corresponding to depths of 25, 50, 75 and 100 km and are respectively 0, 0.0002%, 0.0016% and 0.0043% melt per MPa of decompression. Using these data points, a function relating ΔF/ΔP to P (equivalent to depth) can be found:
ΔF/ΔP = 1 × 10⁻⁹P² − 2 × 10⁻⁶P + 0.001 (P in MPa).
This function goes to 0 at 35 km depth (1 GPa), meaning no melt is produced in the upper lithosphere, in line with observations (Esser et al., 2004). At 150 km depth (4.5 GPa) ΔF/ΔP reaches around 0.012% per MPa, consistent with deep melting at a hotspot/continental rift. Huybers and Langmuir (2009) also suggest a comparable ΔF/ΔP value of 0.01% per MPa. The pressure reductions can be multiplied by the volume of a melting cone beneath Mt Erebus, weighted according to the depth versus ΔF/ΔP relation, to obtain the total melt production (as in Jull and McKenzie, 1996). Accordingly, the 3.66 MPa pressure drop linked to deglaciation generates a total of 280 km³ of magma in the conical melt region. The largest part of this melt occurs at great depths (>100 km), so only a small fraction may reach the surface. Studies have shown that in Iceland 5%–10% of the total melt may reach the surface, erupting on average 1 ka later (Slater et al., 1998; Jull and McKenzie, 1996). Melt at Mt Erebus is deeper and the melt fraction is over an order of magnitude lower than in Iceland, so we could hypothesize that 0.5%–1% of the magma reaches the surface on average 10 ka after deglaciation. Figure 9 shows the processes involved in this deglaciation-partial melt increase cycle.
Overall this model predicts that up to 280 km³ of additional melt may form at depth, with 1–3 km³ of this being erupted. The exact volume erupted and timing of eruption are poorly constrained; however, it would likely reach the surface spread out over several thousand years after deglaciation. This process would be expected to result in a distinctive trace element pattern, with the higher than usual melt fraction resulting in different incompatible element ratios. Ratios of highly incompatible to moderately incompatible elements (Ce/Y, Nb/Zr, La/Ho, etc.; insensitive to fractional crystallisation) would likely be lower during the eruption of these lavas (Hardarson and Fitton, 1991). Compatible element concentrations would be largely unaffected by these processes.
Figure 9 Schematic block diagram of Mt Erebus and the underlying crust-mantle. This figure shows how the removal of superficial ice can generate increased degrees of partial melting at depth and induce higher eruption rates.
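Before moving on to the geochemistry, the sketch below checks the internal consistency of the numbers in sections 2c and 2d. The quadratic is the ΔF/ΔP relation as written above (P in MPa), the clamp to zero above ~35 km follows the statement that no melt is produced in the upper lithosphere, and the crustal density of 2700 kg·m⁻³ used for the overburden check is an assumption rather than a value given in the paper.

```python
# Consistency check on the numbers of sections 2c and 2d (illustrative only).
# The quadratic is the dF/dP relation given above (P in MPa, result in % melt
# per MPa); the 2700 kg m^-3 crustal density is an assumed value.

def melt_sensitivity(p_mpa: float) -> float:
    """dF/dP in % melt per MPa; zero above ~35 km (P < 1 GPa), where the text
    states that no melt is produced."""
    if p_mpa < 1000.0:
        return 0.0
    return 1e-9 * p_mpa**2 - 2e-6 * p_mpa + 0.001

if __name__ == "__main__":
    for depth_km, p_gpa in ((35, 1.0), (150, 4.5)):
        print(f"{depth_km:3d} km ({p_gpa} GPa): "
              f"dF/dP = {melt_sensitivity(p_gpa * 1000.0):.4f} % per MPa")

    # Section 2c: depth at which the 3.66 MPa unloading equals the lithostatic
    # pressure, i.e. where dP/P reaches 1.
    RHO_CRUST, G, DP = 2700.0, 9.8, 3.66e6   # kg m^-3, m s^-2, Pa
    print(f"dP/P = 1 at ~{DP / (RHO_CRUST * G):.0f} m depth")
```

Both stated checkpoints (zero at 1 GPa and ~0.012% per MPa at 4.5 GPa) are reproduced, and ΔP/P reaches unity at roughly 140 m depth, consistent with the "less than 150 m" quoted in section 2c.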
3 Geochemical analysis
Basic conceptual modelling suggests that a link between ice coverage and volcanism should exist; postglacial depressurization should indeed induce eruptions. However, while the model is useful for understanding the processes occurring at Mt Erebus, it cannot alone be used as a guide for understanding what is occurring in reality. As the previous section explores, there are two main mechanisms by which glacial retreat could generate eruptions: increased deep partial melting and draining of shallow magma chambers (Jull and McKenzie, 1996). In each case the erupted magma has a very different origin which allows us to make specific predictions about its chemistry. As the geochemistry of both modern lavas (Kelly et al., 2008), lava flows from the late glacial period (8.5–4 ka, Parmelee et al., 2015) and older lavas from the glacial period (60–20 ka; Iverson et al., 2014) has been collected, we can test these predictions. In this section this geochemical data is analysed in order to better understand the postglacial evolution of Mt Erebus.
Geochemical (major, minor and trace element concentrations) data for Mt Erebus lavas is available from several studies of the summit plateau lavas and volcanic tephra (Parmelee et al., 2015; Kelly et al., 2008), recent eruptive episodes are well sampled. Older flows are mostly buried under recent flows and ice so are less well studied, however englacial tephras from older eruptions have also been analysed (Iverson et al., 2014; samples from EIT008 and 034). For modern (<4 ka) and late glacial lavas (8.5–4 ka), analyses of over 10 individual samples are available (Parmelee et al., 2015). Modern values come mostly from analyses of glass and crystals from tephra scatter which has been the main mode of eruption over the last 4 ka (Kelly et al., 2008). The mean values and standard deviations of each element concentration were calculated for each time period: modern (<4 ka), late glacial (8.5–4 ka) and glacial (60–20 ka). This data is presented in Table 1. Incompatible elements, such as K, Zr, Ce, Nb, Rb and Y (McKenzie and O'Nions, 1991) are very sensitive to changes in degree of melting, but less so to changes in fractional crystallisation (particularly if the ratios of these elements are taken, Hardarson and Fitton, 1991). Compatible elements such as Ni, V (and in most basic magmas Ca, Mg, Eu, etc.) are on the contrary largely insensitive to changes in melt fraction, but strongly affected by differences in fractionation (Ghiorsio and Sack, 1995). Figures 10 and 11 present two plots of compatible elements versus SiO2. Silica concentration is only lightly affected by changes in fractionation and almost unaffected by differences in melt fraction. Figure 10 shows that concentrations of CaO and Eu, both highly compatible in the common basic mineral plagioclase (and to a lesser extent clinopyroxene), were considerably higher in late glacial times than both modern and glacial equivalents. Figure 11 shows a similar trend for MgO (particularly compatible in forsteritic olivine and pyroxenes) and the trace element V (a small, low charge cation compatible in most mineral phases). Overall these differences reveal that late glacial lavas fractionated less mafic minerals than their modern and glacial counterparts, most likely because they spent less time in the magma chambers. This is consistent with the predictions of the shallow magma chamber release model discussed in the previous section.
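A compact sketch of the averaging described in this section is given below, purely as an illustration of the workflow: the DataFrame layout and column names ('period', 'Ce', 'Y' and so on) are hypothetical placeholders, since the underlying measurements live in Table 1 of the paper and its cited sources rather than in this text.

```python
import pandas as pd

# Workflow illustration only: the column names ('period', 'SiO2', 'CaO', 'Ce',
# 'Y', ...) are hypothetical placeholders for the measurements in Table 1 of
# the paper and its cited sources.

def summarise_by_period(df: pd.DataFrame) -> pd.DataFrame:
    """Mean and standard deviation of each element for the three eruptive
    periods (modern <4 ka, late glacial 8.5-4 ka, glacial 60-20 ka)."""
    return df.groupby("period").agg(["mean", "std"])

def melt_fraction_proxies(df: pd.DataFrame) -> pd.DataFrame:
    """Ratios of highly to moderately incompatible elements (e.g. Ce/Y), which
    are insensitive to fractional crystallisation but fall as melting increases."""
    out = df[["period"]].copy()
    out["Ce/Y"] = df["Ce"] / df["Y"]
    return out
```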
Number of blows for the large-scale Marshall compaction test

Journal of Chang'an University (Natural Science Edition), Vol. 24, No. 1, Jan. 2004
Article ID: 1671-8879(2004)01-0021-04

LU Chang-bing, HUANG Xiao-ming (School of Transportation, Southeast University, Nanjing 210096, China)

Abstract: Large stone asphalt mixtures can reduce or even eliminate rutting. By comparing the parameters of large-scale Marshall specimens compacted with different numbers of blows, the number of blows for the large-scale Marshall compaction test is determined, both theoretically and experimentally, to be 112. A comparative study of the standard and large-scale Marshall compaction tests verifies that the stability and flow value of specimens compacted with 112 blows of the large-scale hammer are respectively 2.25 and 1.50 times those of specimens compacted with 75 blows of the standard hammer, from which the technical criteria for the large-scale Marshall test are established.

Keywords: road engineering; asphalt mixture; large-scale Marshall compaction test; number of blows; stability; flow value

CLC number: U416.217    Document code: A
Received: 2003-01-21. Author: LU Chang-bing (born 1979), from Hai'an, Jiangsu; MSc candidate, Southeast University.

Striking times of large scale Marshall compaction tests
LU Chang-bing, HUANG Xiao-ming (School of Transportation, Southeast University, Nanjing 210096, China)
Abstract: The asphalt mixture with large stones can reduce premature rutting of asphalt pavement, but its compaction degree is a key index for pavement operation. The striking times (number of blows) of the large-scale Marshall compaction test was analysed and computed, and determined to be 112. At this compaction level the stability and flow value of the asphalt mixture are 2.25 and 1.50 times the ones measured under standard Marshall compaction tests, whose striking times are 75.
Keywords: road engineering; asphalt mixture; large-scale Marshall compaction test; striking times; stability; flow value

0 Introduction
Experience in the construction and operation of high-grade highways in China and abroad shows that, with growing traffic volumes, heavier vehicles, higher tyre pressures and increasingly channelised traffic, the rutting resistance and durability of asphalt pavements have deteriorated [1]. In recent years many researchers abroad have studied large stone asphalt mixes (LSAM). Their work shows that large stone asphalt-stabilised macadam (maximum particle size usually exceeding 26.5 mm) has the following advantages:
(1) Well-graded LSAM can resist large plastic and shear deformations, carry heavy traffic and provide good rutting resistance, improving the high-temperature stability of asphalt pavement. In particular, on slow, heavily loaded sections with long load-dwell times, a well-designed LSAM shows markedly better resistance to permanent deformation than conventional asphalt concrete.
(2) The asphalt content of LSAM can be about 30% lower than that of conventional mixtures, and the coarser aggregate requires less crushing work, so production costs are reduced. The lower VMA reduces the specific surface area of the aggregate and gives a thicker asphalt film, which helps resist ageing and water damage.
At present more than 90% of high-grade highway pavements in China are asphalt pavements on semi-rigid bases; the asphalt mixtures are used mainly in the surface courses and their maximum particle size is below 26.5 mm. Mix design in China is therefore essentially based on the standard Marshall compaction apparatus. Years of construction and service have shown that the life of asphalt pavements on semi-rigid bases is unsatisfactory, especially in rainy, humid regions, where several cases of large-scale, severe damage soon after opening to traffic have required the pavement to be removed and repaved. Drawing on experience abroad, research has begun in China on using asphalt mixtures in base courses. Because the maximum particle size of such mixtures is generally greater than 26.5 mm, and sometimes as large as 50.8 mm, the particles cannot move into a stable position inside the standard mould during compaction, which limits the density that can be achieved. A large-scale Marshall compaction apparatus must therefore be used for mix design. Since the number of blows strongly affects specimen parameters such as air voids and density, the number of blows to be used in the large-scale Marshall compaction test must first be determined through theoretical analysis and experiment.

1 Theoretical analysis
The physical parameters of the large-scale Marshall compaction apparatus used for large stone asphalt mixtures are compared with those of the standard apparatus in Table 1.
Table 1  Physical parameters of the large-scale and standard Marshall compaction apparatus
  Parameter                      Large-scale   Standard
  Specimen diameter /2.54 cm     6.00          4.00
  Specimen height /2.54 cm       3.75          2.50
  Hammer weight /0.454 kg        22.5          10.0
  Drop height /2.54 cm           18.0          18.0

China's Specifications for Design of Highway Asphalt Pavement (JTJ 014-97) require 75 blows per face in the standard Marshall compaction test for high-grade highways. On the principle that the compaction work per unit volume of specimen should be equal for the two apparatus, the number of blows for the large-scale Marshall apparatus can be calculated from

  m1 g H N1 / (π D1² h1 / 4) = m2 g H N2 / (π D2² h2 / 4)    (1)

where m1, N1, D1 and h1 are the hammer weight, number of blows, specimen diameter and specimen height of the large-scale Marshall apparatus; m2, N2, D2 and h2 are the corresponding quantities for the standard apparatus; g is the acceleration due to gravity; and H is the drop height. Substituting the data of Table 1 into equation (1) gives N1 = 112. The theoretical calculation thus shows that, on the equal work-per-unit-volume principle, 112 blows per face in the large-scale Marshall test do the same work per unit volume of specimen as 75 blows per face in the standard Marshall test.

2 Optimum number of blows
Laboratory tests were carried out on different materials and gradations to verify the correctness and feasibility of the theoretical analysis.

2.1 Gradation 1
Gradation 1 is a dense-graded ATB-30 asphalt mixture; its material parameters are given in Table 2.

Table 2  Material parameters of gradation 1
  Material fraction           2-4     1-3     1-2     0.5-1   stone chips   sand    mineral filler
  Proportion /%               36      15      0       22      15            10      2       (total 100)
  Apparent density /(g/cm³)   2.776   2.818   2.818   2.823   2.691         2.608   2.757
  Bulk density /(g/cm³)       2.741   2.777   2.777   2.729
  Asphalt: AH-90; asphalt density 0.974 g/cm³; asphalt-aggregate ratio 3.3%

Using the laboratory data, the number of blows (abscissa) was plotted against the Marshall volumetric indices VMA, bulk density and air voids (ordinates), giving the curves of Figure 1. Figure 1 shows that, below 112 blows, the air voids decrease, the bulk density increases and the VMA decreases as the number of blows increases: the external compaction work steadily reduces the spacing between aggregate particles and the mixture is being densified. Beyond 112 blows the mixture has already reached its densest state, so further compaction work merely disturbs it; moreover, excessive compaction partially crushes the aggregate, so that the air voids increase, the density decreases and the VMA increases.

Figure 1  Number of blows versus Marshall volumetric indices: (a) blows versus VMA; (b) blows versus bulk density; (c) blows versus air voids.

2.2 Gradation 2
Gradation 2 is an LSM-30 porous (drainage) asphalt mixture with air voids of 15%-20%; its material parameters are given in Table 3. The relationship between the number of blows and the VMA, bulk density and air voids of the gradation 2 mixture is given in Table 4. The tests show that Table 4 leads to conclusions similar to those of Figure 1. The study of the two gradations, ATB-30 and LSM-30, at different numbers of blows therefore demonstrates experimentally that 112 is the optimum number of blows for the large-scale Marshall test.

Table 3  Material parameters of gradation 2
  Sieve size /mm        37.5   31.5   26.5   19.0   16.0   13.2   9.5    4.75   2.36   1.18   0.60   0.30   0.15   0.075
  Percent passing /%    100.0  99.7   87.6   60.8   46.7   39.2   31.3   15.1   9.1    7.3    6.7    5.6    4.7    3.7
  Bulk density /(g/cm³)       2.698  2.699  2.701  2.695  2.684  2.677  2.670
  Apparent density /(g/cm³)   2.682  2.673  2.672  2.651  2.658  2.686
  Asphalt: MARC-70; asphalt density 1.021 g/cm³; asphalt-aggregate ratio 2.9%

Table 4  Number of blows versus volumetric parameters
  Number of blows         75      97      112     127
  VMA /%                  26.7    25.9    25.7    26.1
  Bulk density /(g/cm³)   2.033   2.056   2.059   2.051
  Air voids /%            21.2    20.7    19.6    19.9

3 Optimum asphalt content
It is well known that the optimum asphalt content (OAC) determined by the Marshall compaction test is not necessarily the true optimum; performance tests must also be carried out to check its suitability. Here, gradation 2 was tested at three asphalt contents, the OAC determined from the 112-blow large-scale Marshall test and OAC ± 0.3%, in high-temperature rutting tests and immersion Marshall tests, to check the suitability of the OAC determined from the 112-blow large-scale Marshall test.

The rutting tests followed T 0719-1993 of the Standard Test Methods of Bitumen and Bituminous Mixtures for Highway Engineering (JTJ 052-2000), at a test temperature of 60 °C and a wheel pressure of 0.7 MPa [3]; rutting slabs 8 cm thick were used. The test data are given in Table 5.

Table 5  Dynamic stability of the LSM-30 mixture in the rutting test
  Asphalt content                  OAC-0.3   OAC    OAC+0.3
  Dynamic stability /(passes/mm)   4852      4630   3348

The immersion Marshall tests followed T 0709 of JTJ 052-2000. One group of specimens was conditioned in a 60 °C water bath for 45 min and the Marshall stability MS measured. The other group was held under a vacuum of at least 98.3 kPa for 15 min, after which the water inlet was opened so that the negative pressure drew in cold water until the specimens were fully immersed; after 15 min of immersion the specimens were transferred to a 60 °C constant-temperature water bath for 48 h and the Marshall stability MS1 measured. The retained stability after immersion is MS0 = MS1/MS × 100. The test data are given in Table 6.

Table 6  Immersion Marshall test data
  Asphalt content /%    MS /kN    MS1 /kN    MS0 /%
  2.6                   14.9      11.8       79.2
  2.9                   15.6      12.9       82.7
  3.2                   15.2      12.8       84.2

The rutting tests show that the dynamic stability increases as the asphalt content decreases, while the immersion Marshall tests show that the retained stability increases as the asphalt content increases. These simple performance tests indicate that, balancing high-temperature performance against water stability, the optimum asphalt content determined from the 112-blow large-scale Marshall test gives satisfactory overall pavement performance. (A short numerical check of equation (1) and of the retained stabilities of Table 6 follows.)
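A minimal check of the two calculations used so far, using only values already quoted in Tables 1 and 6; the inch and pound figures are taken straight from Table 1, and the unit factors cancel in the ratio, so no conversion to SI is needed.

```python
# Large-scale apparatus: 22.5 lb hammer, 6.00 in x 3.75 in specimen (Table 1)
# Standard apparatus:    10.0 lb hammer, 4.00 in x 2.50 in specimen
m1, D1, h1 = 22.5, 6.00, 3.75
m2, D2, h2 = 10.0, 4.00, 2.50
N2 = 75  # standard Marshall blows per face (JTJ 014-97)

# Equal compaction work per unit volume, equation (1):
#   m1*g*H*N1 / (pi*D1^2*h1/4) = m2*g*H*N2 / (pi*D2^2*h2/4)
N1 = N2 * (m2 / m1) * (D1**2 * h1) / (D2**2 * h2)
print(f"Large-scale Marshall blows per face: {N1:.1f} (rounded to 112)")  # 112.5

# Retained stability MS0 = MS1/MS x 100 for the three asphalt contents of Table 6
for ac, ms, ms1 in [(2.6, 14.9, 11.8), (2.9, 15.6, 12.9), (3.2, 15.2, 12.8)]:
    print(f"asphalt content {ac}%: MS0 = {ms1 / ms * 100:.1f}%")
```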
4 Relationship between the parameters of the compacted specimens
The theoretical and experimental analysis above fixes the number of blows for the large-scale Marshall test at 112. The volumetric and mechanical parameters of specimens compacted with 112 blows of the large-scale Marshall hammer are now compared with those of specimens compacted with 75 blows of the standard hammer. The materials are those of Table 2 and the test results are given in Table 7. Table 7 shows that, for large stone asphalt mixtures, the stability and flow values obtained from the 112-blow large-scale specimens are more repeatable than those from the 75-blow standard specimens; the bulk densities, air voids and VMA of the two methods are essentially the same, while the measured stability ratio is 2.15 and the flow ratio 1.36.

Table 7  Comparison of parameters of specimens compacted by the large-scale and standard Marshall methods
  Compaction method         Bulk density /(g/cm³)   Air voids /%   VMA /%   Stability /kN   Flow /mm
  Large-scale, 112 blows    2.504                   2.77           10.98    19.60           2.90
                            2.498                   2.99           11.18    20.00           3.30
                            2.502                   2.85           11.04    19.70           3.20
    Mean                    2.501                   2.87           11.07    19.77           3.13
  Standard, 75 blows        2.483                   3.55           11.70    8.50            2.30
                            2.504                   2.77           10.98    7.20            2.40
                            2.524                   1.98           10.26    11.90           2.40
    Mean                    2.503                   2.77           10.98    9.20            2.30

In theory the compaction work per unit volume is the same for the two methods, so the densities should also be the same; allowing for the various sources of uncertainty, the laboratory results confirm the theory. The stabilities and flow values of the two methods, however, differ considerably. Theoretically, the cross-sectional area of a large-scale Marshall specimen is 2.25 times that of a standard specimen, so for the same stress on the cross-section the load on a large-scale specimen must be 2.25 times that on a standard specimen; the stability ratio between the two methods is therefore 2.25. The flow value depends on the specimen diameter, since it is evaluated from the flow deformation per unit diameter; because the diameter of a large-scale specimen is 1.5 times that of a standard specimen, the ratio of the flow values is 1.5 [4]. The above analysis shows that, allowing for material and test variability and for size effects, the experimental results agree broadly with the theory. Technical criteria for the stability and flow value of large-scale Marshall specimens can therefore be derived directly from the standard Marshall criteria in the current test specifications. For example, the Specifications for Design of Highway Asphalt Pavement (JTJ 014-97) require, for type I asphalt concrete on high-grade highways, a standard Marshall stability greater than 7.5 kN and a flow value of 20-40 (0.1 mm); the corresponding criteria for the large-scale Marshall test are then a stability greater than 17 kN and a flow value of 30-60 (0.1 mm).

5 Conclusions
By comparing the parameters of large-scale Marshall specimens compacted from different materials and gradations with different numbers of blows, the number of blows for the large-scale Marshall compaction test has been determined both theoretically and experimentally. In addition, a comparative study of the standard and large-scale Marshall compaction tests has clarified the relationship between the two and allowed technical criteria for the large-scale Marshall test to be set.
(1) Large stone asphalt-stabilised macadam consists mainly of mixtures with particle sizes greater than 26.5 mm; research at home and abroad shows that it can effectively reduce rutting and improve pavement performance.
(2) Theoretical analysis and laboratory testing fix the number of blows for compacting large-scale Marshall specimens at 112.
(3) Theoretical analysis shows that the stability and flow value of specimens compacted with 112 blows of the large-scale Marshall hammer are respectively 2.25 and 1.50 times those of specimens compacted with 75 blows of the standard hammer; the laboratory tests confirm this.
(4) The technical criteria for the large-scale Marshall test are a stability greater than 17 kN and a flow value of 30-60 (0.1 mm); a brief numerical recap of how these follow from the standard criteria is given after the references.

References:
[1] Liu Zhonglin, Tian Wen, Shi Jianfang, et al. New Technologies for Asphalt Concrete Pavements of High-Grade Highways [M]. Beijing: China Communications Press, 2002.
[2] JTJ 014-97. Specifications for Design of Highway Asphalt Pavement [S]. 1997.
[3] JTJ 052-2000. Standard Test Methods of Bitumen and Bituminous Mixtures for Highway Engineering [S]. 2001.
[4] Large stone asphalt mixes: design and construction [R]. NCAT Report No. 90-4, 1990.
[5] Khosla N P. Use of large stone asphaltic concrete overlays of flexible pavements [R]. Research Project No. 23241-94-7, 1994.
[6] Wang Xudong. Study of the large-scale Marshall compaction test [J]. Journal of Highway and Transportation Research and Development, 2002, 19(1): 16-19.
[Executive editor: Sun Shouzeng]
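To close the Marshall discussion, a minimal recap of how the large-scale acceptance criteria and the expected ratios follow from numbers already given in Table 1, Table 7 and JTJ 014-97:

```python
D_LARGE, D_STANDARD = 6.0, 4.0               # specimen diameters, inches (Table 1)

area_ratio = (D_LARGE / D_STANDARD) ** 2     # 2.25: stability scales with cross-sectional area
diameter_ratio = D_LARGE / D_STANDARD        # 1.5:  flow scales with diameter

# Standard Marshall criteria for type I asphalt concrete (JTJ 014-97)
stability_std_kn = 7.5
flow_std_01mm = (20, 40)

print(f"large-scale stability criterion: > {stability_std_kn * area_ratio:.1f} kN")  # 16.9, rounded up to 17 kN
print(f"large-scale flow criterion: {flow_std_01mm[0] * diameter_ratio:.0f}"
      f"-{flow_std_01mm[1] * diameter_ratio:.0f} (0.1 mm)")                          # 30-60

# Measured mean ratios from Table 7, for comparison with the theoretical 2.25 and 1.5
print(f"measured stability ratio: {19.77 / 9.20:.2f}")   # 2.15
print(f"measured flow ratio: {3.13 / 2.30:.2f}")         # 1.36
```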
Shrinkage measurement threshold (English term)

English answer:

Shrinkage Measurement Threshold.

The shrinkage measurement threshold is a critical parameter in lithographic patterning. It refers to the minimum amount of shrinkage that can be reliably detected and measured with a given metrology tool. The threshold value is typically determined by the tool's resolution and accuracy, as well as by the process conditions being used.

To achieve high-quality lithographic patterning it is important to minimize shrinkage, since shrinkage can distort the pattern and degrade device performance. By understanding the shrinkage measurement threshold and taking steps to minimize shrinkage, manufacturers can improve the quality of their lithographic patterning processes.

Several factors affect the shrinkage measurement threshold:

The resolution of the metrology tool: the smallest feature size that can be reliably detected and measured. The higher the resolution of the tool, the lower the shrinkage measurement threshold.

The accuracy of the metrology tool: the ability of the tool to make precise measurements. The higher the accuracy of the tool, the lower the shrinkage measurement threshold.

The process conditions being used: the process conditions during lithography, for example the exposure dose, the development time and the type of resist, all affect the amount of shrinkage that occurs.

By understanding the factors that affect the shrinkage measurement threshold, manufacturers can optimize their lithography processes to minimize shrinkage and improve the quality of their patterned devices.

Chinese answer:

收缩测点限值 (shrinkage measurement threshold).
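As a rough illustration of how the factors above could be combined, the sketch below folds tool resolution and accuracy into a single detection threshold. The root-sum-square combination and the coverage factor k are illustrative assumptions for this sketch, not a standard formula from the answer above.

```python
import math

def shrinkage_threshold_nm(resolution_nm: float, accuracy_nm: float, k: float = 3.0) -> float:
    """Illustrative heuristic: the smallest shrinkage distinguishable from
    measurement noise, taken as k times the root-sum-square of the tool's
    resolution and accuracy (both in nm). k = 3 is an assumed coverage factor."""
    return k * math.sqrt(resolution_nm**2 + accuracy_nm**2)

# Hypothetical tools: a higher-resolution, more accurate tool gives a lower
# (better) shrinkage measurement threshold.
print(shrinkage_threshold_nm(resolution_nm=1.0, accuracy_nm=0.5))   # ~3.4 nm
print(shrinkage_threshold_nm(resolution_nm=0.2, accuracy_nm=0.1))   # ~0.7 nm
```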