arXiv:cond-mat/9908187v1 [cond-mat.mes-hall] 12 Aug 1999

Spatial structure of an incompressible Quantum Hall strip

Ivan Larkin† and L.S. Levitov‡
†Department of Physics and Astronomy, University of Sheffield, Sheffield S3 7RH, UK
‡Center for Materials Science & Engineering, Physics Department, MIT, Cambridge, MA 02139

Abstract

The incompressible Quantum Hall strip is sensitive to charging of localized states in the cyclotron gap. We study the effect of localized states by a density functional approach and find the electron density and the strip width as a function of the density of states in the gap. Another important effect is electron exchange. By using a model density functional which accounts for the negative compressibility of the QH state, we find the electron density around the strip. At large exchange, the density profile becomes nonmonotonic, indicating formation of a 1D Wigner crystal at the strip edge. Both effects, localized states and exchange, lead to a substantial increase of the strip width.

1. Introduction

Theory of the QHE predicts that near integer filling the system divides into compressible regions separated by incompressible strips [1]. The potential distribution within a QHE sample was recently imaged by using capacitance probes [2,3], an atomic force microscope [4], and a single electron transistor [5]. High-resolution images of the incompressible strip [2,3] give a strip width several times larger than the theoretical prediction [1]. To bridge between theory and experiment one has to extend the analysis [1] to include:
• the effect of disorder producing a finite density of states in the cyclotron gap;
• electron exchange correlations which affect the compressibility of the QHE state.
Below we study these effects using a density functional approach, taking special care of the effect of the large dielectric constant (ǫ_GaAs = 12.1). Because of the relatively small depth of the 2DEG beneath the semiconductor surface, the interparticle interaction within the 2DEG is affected by image
charges. This changes the electrostatics of the strip and modifies the potential induced on the exposed surface. A finite density of states in the QHE gap gives rise to a finite screening length. For an incompressible strip of width exceeding this screening length, we find a large departure from the theory [1], in agreement with [8]. The results compare well with the experiment [3]. The effect of electron exchange is important in determining the structure of the compressible regions adjacent to the strip. Exchange correlation gives rise to negative compressibility [7] of the 2DEG. We consider negative compressibility by using a model density functional, and show that it strongly alters the distribution of electric charge, even to the extent that the potential and the charge density profiles can become nonmonotonic.

2. The effect of finite density of states in the cyclotron gap

Incompressible strips are formed in regions of nonuniform 2DEG density, at nearly integer filling, created either by perturbing the exposed surface with an STM probe [2] or by gating the 2DEG [3]. The strips are aligned normal to the average 2DEG density gradient. The charge distribution around the strip is controlled by electrostatics [1,6]. The density n(r) in a 2DEG buried at a distance d beneath the semiconductor surface can be found by minimizing a density functional:

−U_ext(r) = ∫ V(r − r′) n(r′) d²r′ + µ(n),   V(r) = 2e²/[(ǫ + 1)|r|]   (1)

The localized states in the gap are described by a box-shaped density of states, F_γ(u) = γ⁻¹ for |u| < γ/2 and 0 otherwise (Eq. (5)), with δn(r) = n(r) − n_LL. Here the coordinate system is such that the x axis is normal to the strip, and the y axis is parallel to the strip. One can obtain exact results for γ → 0 and γ ≫ 1. The strip width at γ = 0 is 2w0/π, in accord with the electrostatics problem [1]. At γ ≫ 1 the deviation from a constant density gradient is small, because the electrostatic potential is well screened. In this case, the spatial variation of the chemical potential follows that of the density, increasing by ħω_c across the strip. Hence the strip width is n_gap/|∇n|.

FIG. 1. The units used are w0 =
[(ǫ + 1)ħω_c/(2e²|∇n|)]^{1/2} and n0 = [(ǫ + 1)ħω_c|∇n|/(2e²)]^{1/2}.

We solve the problem numerically for all γ (see Fig. 1). In the whole range of γ the strip width is reasonably accurately given by the formula

w ≈ (2/π + γ)w0 = (2/π)w0 + n_gap/|∇n|,   (6)

interpolating between the two exactly solvable limits. A common model for the density of states in a Landau level is a broad line (Gaussian or Lorentzian) with the states localized in the tail. In this model, the transition between the (less compressible) strip and the (more compressible) outer region will be more gradual than in the model considered. The estimate (6) for the width of the strip, however, will remain correct, assuming that n_gap measures the total number of states in the Landau level tails.

In the experiment [3], at the m = 2 plateau, the density of states in the gap is n_gap ≈ 0.03 n_total, where n_total = 1.5·10¹¹ cm⁻². The density gradient is ∇n ≈ 2·10¹⁰ cm⁻²/µm. Substituting this in (4), we get w0 = 0.3 µm and γ ≈ 1. In the fully incompressible case [1], the strip width would be 2w0/π = 0.2 µm. The observed width 0.5 µm agrees with Eq. (6) for the estimated γ.

3. The effect of negative compressibility of the compressible edge

For a fully incompressible strip (n_gap = 0), the density is constant within it and varies outside as a square root of the distance from the strip edge [1]. Here we study how this behavior is modified due to finite compressibility of the Landau level states. The Thomas–Fermi theory recipe is to use Eq. (1) with µ(n) = δn/κ, where κ is the compressibility. Such a model, however, is inconsistent, because of the negative sign of κ in the QH state [7]. The Thomas–Fermi problem with κ < 0 leads to an unphysical instability. The difficulty is circumvented by realizing that the exchange interaction in the case of negative compressibility is essentially nonlocal [9]. This motivates using in (1) an effective interaction which is simplest to write in the Fourier representation:

V_eff(k) = [4πe²/((ǫ + 1)k)] Λ(k),   (ǫ + 1)/(4πe²κ) = −a,   (7)

where a > 0 is the screening length. The interaction (7), with the listed restrictions on Λ(k), ensures stability as well as the correct Hartree
interaction and compressibility. Otherwise, one can make a reasonable choice of Λ(k) at ka > 1.

The problem (1) near the strip edge, with V_eff of the form (7) accounting for exchange effects, can be solved by the Wiener–Hopf method [9]. For that, we write δn(x) = n₊(x)θ(x) + n₋(x)θ(−x), where x > 0 is the compressible region, and Fourier transform Eq. (1): V_eff(k) n₊(k) = −(U⁺_ext(k) + U⁻_ext(k)), where n±(k) and U±_ext(k) are analytic in the upper and lower complex k half-planes. The Wiener–Hopf trick requires factoring V_eff(k) = A⁺_k/A⁻_k, where ± indicates the analyticity half-plane. Then A⁻_k U⁺_ext(k) = [A⁻_k U⁺_ext(k)]₊ + [A⁻_k U⁺_ext(k)]₋, which yields n₊(k) = −[A⁻_k U⁺_ext(k)]₊/A⁺_k. We use V_eff(k) of the form (7) with

Λ(k) = exp{−ak[1 − (2/π) tan⁻¹(k/λ)]},   (8)

and obtain a Wiener–Hopf solution in closed form. Here the parameter λ regularizes the interaction at large k (and small r): V_eff(r ≪ λ⁻¹) = e^{−2λa/π} V(r). Factoring this V_eff(k) gives the split functions A±_k in closed form (Eqs. (9)–(10)), where δ* ∼ w0⁻¹. The inverse Fourier transform of n₊(k) gives the charge distribution near the strip edge. Note the asymptotic behavior of δn(x):

δn(x ≫ λ⁻¹) = 2n0 (x/πw0)^{1/2},   δn(x ≪ λ⁻¹) = 2n0 (x/πw0)^{1/2} e^{λa/π}.

Here we expressed E and δ* in terms of w0 and n0. The solution shows that at large screening length a there is a significant departure of the density near the edge from the square root profile of [1]. The density profile becomes nonmonotonic at aλ ≫ 1. The interpretation of this result is that, as the density is lowered, electrons at the edge form a one-dimensional Wigner solid at densities such that the interior of the system is still fluid.

We studied numerically the effect of exchange on the strip width. As the exchange interaction parameter increases, the strip becomes wider (see Fig. 2). In the simulation, a model interaction

V_eff(r) = α/|r| + (1 − α)/(r² + ã²)^{1/2}   (11)

was used, with ã = a/(1 − α). Similar to the Wiener–Hopf solution for an isolated edge, at large values of the exchange parameter the density profile becomes nonmonotonic. Note that our density functional, being
quadratic in n(r), obeys an exact particle–hole symmetry. Hence the density profiles on the upper and lower sides of the plateau in Fig. 2 are identical up to a sign change.

FIG. 2. The model interaction (11) shown for 5 values of the exchange parameter α, with ã = 0.5 w0.

In conclusion, we find that a finite density of localized states and the electron exchange interaction both have a similar effect on the width of the incompressible strip. The strip width increases as a function of the localized states density, and as a function of the electron exchange parameter. However, the density profile in these two problems evolves differently. For a high density of localized states the density gradient becomes nearly uniform, whereas at large exchange the plateau in the density distribution becomes wider. At very large exchange, the density profile becomes nonmonotonic, indicating formation of a one-dimensional Wigner crystal at the edge.

L.L. is grateful to R. Ashoori, G. Finkelstein, T.A. Fulton, and A. Yacoby for useful discussions of their data. Research at MIT is supported in part by the MRSEC Program of NSF under award 6743000 IRG.

References
1. D.B. Chklovskii, B.I. Shklovskii, and L.I. Glazman, Phys. Rev. B 46, 4026 (1992).
2. S.H. Tessmer, P.I. Glicofridis, R.C. Ashoori, L.S. Levitov, and M.R. Melloch, Nature 392, 51 (1998); G. Finkelstein, P.I. Glicofridis, S.H. Tessmer, R.C. Ashoori, and M.R. Melloch, preprint.
3. A. Yacoby, H.F. Hess, T.A. Fulton, L.N. Pfeiffer, and K.W. West, Solid State Communications 111, 1 (1999).
4. K.L. McCormick, M.T. Woodside, M. Huang, M.S. Wu, P.L. McEuen, C. Duruoz, and J.S. Harris, Phys. Rev. B 59, 4654 (1999).
5. Y.Y. Wei, J. Weis, K. v. Klitzing, and K. Eberl, Phys. Rev. Lett. 81, 1674 (1999).
6. I.A. Larkin and J.H. Davies, Phys. Rev. B 52, R5535 (1995).
7. J.P. Eisenstein, L.N. Pfeiffer, and K.W. West, Phys. Rev. Lett. 68, 674 (1992); B. Tanatar and D.M. Ceperley, Phys. Rev. B 39, 5005 (1989).
8. A.L. Efros, cond-mat/9905368.
9. I.A. Larkin and L.S. Levitov, to be published.
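The numerical estimate for the strip width in Section 2 can be checked with a few lines of Python. This is a sketch using only the values quoted in the text (Eq. (6) and the experimental numbers from Ref. [3]); the variable names are ours:

```python
import math

# Experimental inputs quoted in the text (m = 2 plateau of Ref. [3]).
n_total = 1.5e11          # total density, cm^-2
n_gap = 0.03 * n_total    # states in the cyclotron gap, cm^-2
grad_n = 2e10             # density gradient, cm^-2 per micron
w0 = 0.3                  # electrostatic length scale from Eq. (4), microns

# Dimensionless gap parameter, and the interpolation formula of Eq. (6):
# w ~ (2/pi + gamma) * w0 = (2/pi) * w0 + n_gap / |grad n|
gamma = n_gap / grad_n / w0
w = (2.0 / math.pi + gamma) * w0
w_incompressible = (2.0 / math.pi) * w0   # fully incompressible limit of Ref. [1]

print(round(gamma, 2))            # 0.75 (the text rounds this to gamma ~ 1)
print(round(w_incompressible, 2)) # 0.19, i.e. the ~0.2 micron of Ref. [1]
print(round(w, 2))                # 0.42 micron; with gamma = 1, Eq. (6) gives
                                  # ~0.49, comparable to the observed 0.5 micron
```

With the rounded value γ = 1 used in the text, Eq. (6) reproduces the observed 0.5 µm width; the exact inputs give a slightly smaller γ and w, within the quoted uncertainties.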
arXiv:cond-mat/9503139v1 27 Mar 1995

Singularity of the density of states in the two-dimensional Hubbard model from finite size scaling of Yang-Lee zeros

E. Abraham¹, I.M. Barbour², P.H. Cullen¹, E.G. Klepfish³, E.R. Pike³ and Sarben Sarkar³
¹Department of Physics, Heriot-Watt University, Edinburgh EH14 4AS, UK
²Department of Physics, University of Glasgow, Glasgow G12 8QQ, UK
³Department of Physics, King's College London, London WC2R 2LS, UK
(February 6, 2008)

A finite size scaling is applied to the Yang-Lee zeros of the grand canonical partition function for the 2-D Hubbard model in the complex chemical potential plane. The logarithmic scaling of the imaginary part of the zeros with the system size indicates a singular dependence of the carrier density on the chemical potential. Our analysis points to a second-order phase transition with critical exponent 1/(2 ± 1), with the transition controlled by the chemical potential.

As in order-disorder transitions, one would expect a symmetry breaking signalled by an order parameter. In this model, the particle-hole symmetry is broken by introducing an "external field" which causes the particle density to become non-zero. Furthermore, the possibility of the free energy having a singularity at some finite value of the chemical potential is not excluded: in fact it can be a transition indicated by a divergence of the correlation length. A singularity of the free energy at finite "external field" was found in finite-temperature lattice QCD by using the Yang-Lee analysis for the chiral phase transition [14]. A possible scenario for such a transition at finite chemical potential is one in which the particle density consists of two components derived from the regular and singular parts of the free energy.

Since we are dealing with a grand canonical ensemble, the particle number can be calculated for a given chemical potential, as opposed to constraining the chemical potential by a fixed particle number. Hence the chemical potential can be thought of as an external
field for exploring the behaviour of the free energy. From the microscopic point of view, the critical values of the chemical potential are associated with singularities of the density of states. Transitions related to the singularity of the density of states are known as Lifshitz transitions [15]. In metals these transitions only take place at zero temperature, while at finite temperatures the singularities are rounded. However, for a small ratio of temperature to the deviation from the critical values of the chemical potential, the singularity can be traced even at finite temperature. Lifshitz transitions may result from topological changes of the Fermi surface, and may occur inside the Brillouin zone as well as on its boundaries [16]. In the case of strongly correlated electron systems the shape of the Fermi surface is indeed affected, which in turn may lead to an extension of the Lifshitz-type singularities into the finite-temperature regime.

In relating the macroscopic quantity of the carrier density to the density of quasiparticle states, we assumed the validity of a single particle excitation picture. Whether strong correlations completely distort this description is beyond the scope of the current study. However, the identification of the criticality using the Yang-Lee analysis remains valid even if collective excitations prevail.

The paper is organised as follows. In Section 2 we outline the essentials of the computational technique used to simulate the grand canonical partition function and present its expansion as a polynomial in the fugacity variable. In Section 3 we present the Yang-Lee zeros of the partition function calculated on 6²–10² lattices and highlight their qualitative differences from the 4² lattice. In Section 4 we analyse the finite size scaling of the Yang-Lee zeros and compare it to the real-space renormalization group prediction for a second-order phase transition. Finally, in Section 5 we present a summary of our results and an outlook for future
work.

II. SIMULATION ALGORITHM AND FUGACITY EXPANSION OF THE GRAND CANONICAL PARTITION FUNCTION

The model we are studying in this work is a two-dimensional single-band Hubbard Hamiltonian

Ĥ = −t Σ_{<i,j>,σ} c†_{i,σ} c_{j,σ} + U Σ_i (n_{i+} − 1/2)(n_{i−} − 1/2) − µ Σ_i (n_{i+} + n_{i−})   (1)

where i, j denote the nearest-neighbour spatial lattice sites, σ is the spin degree of freedom and n_{iσ} is the electron number operator c†_{iσ} c_{iσ}. The constants t and U correspond to the hopping parameter and the on-site Coulomb repulsion respectively. The chemical potential µ is introduced such that µ = 0 corresponds to half-filling, i.e. the actual chemical potential is shifted from µ to µ − U/4. The interaction term is decoupled by a discrete Hubbard–Stratonovich transformation over Ising fields s_{i,l} = ±1 (Eqs. (2)–(5)). This transformation enables one to integrate out the fermionic degrees of freedom, and the resulting partition function is written as an ensemble average of a product of two determinants

Z = Σ_{{s_{i,l}=±1}} z̃ = Σ_{{s_{i,l}=±1}} det(M⁺) det(M⁻)   (6)

such that

M± = I + P± = I + Π_{l=1}^{n_τ} B±_l   (7)

where the matrices B±_l are defined as

B±_l = e^{−(±dt V)} e^{−dt K} e^{dt µ}   (8)

with V_ij = δ_ij s_{i,l}, and K_ij = 1 if i, j are nearest neighbours and K_ij = 0 otherwise. The matrices in (7) and (8) are of size (n_x n_y) × (n_x n_y), corresponding to the spatial size of the lattice.

The expectation value of a physical observable at chemical potential µ, <O>_µ, is given by

<O>_µ = Σ_{{s_{i,l}=±1}} O z̃(µ, {s_{i,l}}) / Σ_{{s_{i,l}=±1}} z̃(µ, {s_{i,l}})   (9)

where the sum over the configurations of Ising fields is denoted by an integral. Since z̃(µ) is not positive definite for Re(µ) ≠ 0, we weight the ensemble of configurations by the absolute value of z̃(µ) at some µ = µ₀. Thus

<O>_µ = < O z̃(µ)/|z̃(µ₀)| >_{µ₀} / < z̃(µ)/|z̃(µ₀)| >_{µ₀}   (10)

The partition function Z(µ) is given by

Z(µ) ∝ < z̃(µ)/|z̃(µ₀)| >_{µ₀} = Σ_{n=0}^{N_c} c_n [ (e^{µβ} + e^{−µβ}) / (e^{µ₀β} + e^{−µ₀β}) ]^n   (16)

When the average sign is near unity, it is safe to assume that the lattice configurations reflect accurately the quantum degrees of freedom. Following Blankenbecler et al. [1], the diagonal matrix elements of the equal-time Green's operator G± = (I + P±)⁻¹ accurately describe the fermion density on a given configuration. In this regime the
adiabatic approximation, which is the basis of the finite-temperature algorithm, is valid. The situation differs strongly when the average sign becomes small. We are in this case sampling positive and negative z̃(µ₀) configurations with almost equal probability, since the acceptance criterion depends only on the absolute value of z̃(µ₀).

In the simulations of the HS fields the situation is different from the case of fermions interacting with dynamical boson fields presented in Ref. [1]. The auxiliary HS fields do not have a kinetic energy term in the bosonic action which would suppress their rapid fluctuations and hence recover the adiabaticity. From the previous simulations on a 4² lattice [3] we know that avoiding the sign problem, by updating at half-filling, results in high uncontrolled fluctuations of the expansion coefficients for the statistical weight, thus severely limiting the range of validity of the expansion. It is therefore important to obtain the partition function for the widest range of µ₀ and observe the persistence of the hierarchy of the expansion coefficients of Z. An error analysis is required to establish the Gaussian distribution of the simulated observables. We present in the following section results of the bootstrap analysis [17] performed on our data for several values of µ₀.

III. TEMPERATURE AND LATTICE-SIZE DEPENDENCE OF THE YANG-LEE ZEROS

The simulations were performed in the intermediate on-site repulsion regime U = 4t, for β = 5, 6, 7.5 on lattices 4², 6², 8², and for β = 5, 6 on a 10² lattice. The expansion coefficients given by eqn. (14) are obtained with relatively small errors and exhibit a clear Gaussian distribution over the ensemble. This behaviour was recorded for a wide range of µ₀, which makes our simulations reliable in spite of the sign problem. In Fig. 1(a-c) we present typical distributions of the first coefficients corresponding to n = 1−7 in eqn. (14) (normalized with respect to the zeroth power coefficient) for β = 5−7.5 for different µ₀. The coefficients are obtained using the
bootstrap method on over 10000 configurations for β = 5, increasing to over 30000 for β = 7.5. In spite of different values of the average sign in these simulations, the coefficients of the expansion (16) indicate good correspondence between coefficients obtained with different values of the update chemical potential µ₀: the normalized coefficients taken from different µ₀ values and equal power of the expansion variable correspond within the statistical error estimated using the bootstrap analysis. (To compare these coefficients we had to shift the expansion by 2 cosh µ₀β.) We also performed a bootstrap analysis of the zeros in the µ plane, which shows a clear Gaussian distribution of their real and imaginary parts (see Fig. 2). In addition, we observe overlapping results (i.e. same zeros) obtained with different values of µ₀.

The distribution of Yang-Lee zeros in the complex µ-plane is presented in Fig. 3(a-c) for the zeros nearest to the real axis. We observe a gradual decrease of the imaginary part as the lattice size increases. The quantitative analysis of this behaviour is discussed in the next section.

The critical domain can be identified by the behaviour of the density of Yang-Lee zeros in the positive half-plane of the fugacity. We expect to find that this density is temperature and volume dependent as the system approaches the phase transition. If the temperature is much higher than the critical temperature, the zeros stay far from the positive real axis, as happens in the high-temperature limit of the one-dimensional Ising model (T_c = 0) in which, for β = 0, the points of singularity of the free energy lie at fugacity value −1. As the temperature decreases we expect the zeros to migrate to the positive half-plane with their density, in this region, increasing with the system's volume.

Figures 4(a-c) show the number N(θ) of zeros in the sector (0, θ) as a function of the angle θ. The zeros shown in these figures are those presented in Fig. 3(a-c) in the chemical potential plane, with other zeros lying further from the
positive real half-axis added in. We included only the zeros having absolute value less than one, which we are able to do because if y_i is a zero in the fugacity plane, so is 1/y_i. The errors are shown where they were estimated using the bootstrap analysis (see Fig. 2). For β = 5, even for the largest simulated lattice 10², all the zeros are in the negative half-plane. We notice a gradual movement of the pattern of the zeros towards smaller θ values, with an increasing density of the zeros near θ = π.

IV. FINITE SIZE SCALING AND THE SINGULARITY OF THE DENSITY OF STATES

As a starting point for the finite size analysis of the Yang-Lee singularities we recall the scaling hypothesis for the partition function singularities in the critical domain [11]. Following this hypothesis, under a change of scale λ of the linear dimension L, L → λL, the singular part of the free energy scales as

F_sing(θ, µ̃) = λ^{−d} F_sing(θ λ^{αθ}, µ̃ λ^{αµ}),   θ = (T/T_c − 1),   µ̃ = (1 − µ/µ_c)   (23)

Following the real-space renormalization group treatment of Ref. [11], and assuming that the change of scale λ is a continuous parameter, the exponent αθ is related to the critical exponent ν of the correlation length as αθ = 1/ν: from ξ(θ λ^{αθ}) = ξ(θ)/λ we obtain ξ ∼ |θ|^{−ν}   (26)

where θ λ^{αθ} has been scaled to ±1 and µ̃ λ^{αµ} expressed in terms of µ̃ and θ. Differentiating this equation with respect to µ̃ yields

<n>_sing = (−θ)^{ν(d−αµ)} ∂F_sing(X, Y)/∂Y,

and substituting the scaling of θ into the argument Y = µ̃/|θ|^{ναµ} gives

<n>_sing ∼ µ̃^{(d−αµ)/αµ},   (28)

which defines the critical exponent 1/δ = (d − αµ)/αµ in terms of the scaling exponent αµ of the Yang-Lee zeros.

Fig. 5 presents the scaling of the imaginary part of the µ zeros for different values of the temperature. The linear regression slope of the logarithm of the imaginary part of the zeros, plotted against the logarithm of the inverse linear dimension of the simulation volume, increases when the temperature decreases from β = 5 to β = 6. The results at β = 7.5 correspond to αµ = 1.3 within the errors of the zeros as the simulation volume increases from 6² to 8². As is seen from Fig. 3, we can trace zeros with similar real part (Re(µ₁) ≈ 0.7, which is also consistent with the critical value of the chemical potential given in Ref. [22]) as the
lattice size increases, which allows us to examine only the scaling of the imaginary part. Table 1 presents the values of αµ and 1/δ: αµ = 0.5 ± 0.05 at β = 5, 0.5 ± 0.2 at β = 6, and 1.3 ± 0.3 at β = 7.5.

Fig. 6 shows the electronic susceptibility ∂<n>/∂µ as a function of the chemical potential on an 8² lattice. The location of the peaks of the susceptibility, rounded by the finite size effects, is in good agreement with the distribution of the real part of the Yang-Lee zeros in the complex µ-plane (see Fig. 3), which is particularly evident in the β = 7.5 simulations (Fig. 4(c)). The contribution of each zero to the susceptibility can be singled out by expressing the free energy as

F = Σ_{i=1}^{2 n_x n_y} ln(y − y_i)   (29)

where y is the fugacity variable and y_i is the corresponding zero of the partition function. The dotted lines on these plots correspond to the contribution of the nearby zeros while the full polynomial contribution is given by the solid lines. We see that the developing singularities are indeed governed by the zeros closest to the real axis. The sharpening of the singularity as the temperature decreases is also in accordance with the dependence of the distribution of the zeros on the temperature.

The singularities of the free energy and its derivative with respect to the chemical potential can be related to the quasiparticle density of states. To do this we assume that single particle excitations accurately represent the spectrum of the system. The relationship between the average particle density and the density of states ρ(ω) is given by

<n> = ∫_{−∞}^{∞} dω ρ(ω)/(e^{β(ω−µ)} + 1),

so that at low temperature the singular part of the susceptibility follows the singular part of the density of states,

d<n>_sing/dµ = ρ_sing(µ) ∝ µ̃^{1/δ − 1},   (32)

and hence the rate of divergence of the density of states. As in the case of Lifshitz transitions, the singularity of the particle number is rounded at finite temperature. However, for sufficiently low temperatures, the singularity of the density of states remains manifest in the free energy, the average particle density, and the particle susceptibility [15]. The regular part of the density of states does not contribute to the criticality, so we can concentrate on the singular part only. Consider a behaviour of the
type of a density of states diverging as

ρ_sing(ω) ∝ (ω − µ_c)^{1/δ − 1},   (33)

with the value δ for the particle number governed by the divergence of the density of states (at low temperatures) in spite of the finite-temperature rounding of the singularity itself. This rounding of the singularity is indeed reflected in the difference between the values of αµ at β = 5 and β = 6.

V. DISCUSSION AND OUTLOOK

We note that in our finite size scaling analysis we do not include logarithmic corrections. In particular, these corrections may prove significant when taking into account the fact that we are dealing with a two-dimensional system in which the pattern of the phase transition is likely to be of Kosterlitz-Thouless type [23]. The logarithmic corrections to the scaling laws have been proven essential in a recent work of Kenna and Irving [24]. Inclusion of these corrections would allow us to obtain the critical exponents with higher accuracy. However, such analysis would require simulations on even larger lattices. The linear fits for the logarithmic scaling and the critical exponents obtained are to be viewed as approximate values reflecting the general behaviour of the Yang-Lee zeros as the temperature and lattice size are varied. Although the bootstrap analysis provided us with accurate estimates of the statistical error on the values of the expansion coefficients and the Yang-Lee zeros, the small number of zeros obtained with sufficient accuracy does not allow us to claim higher precision for the critical exponents on the basis of more elaborate fittings of the scaling behaviour. The finite-size effects may still be significant, especially as the simulation temperature decreases, thus affecting the scaling of the Yang-Lee zeros with the system size. Larger lattice simulations will therefore be required for an accurate evaluation of the critical exponent for the particle density and the density of states. Nevertheless, the onset of a singularity at finite temperature, and its persistence as the lattice size increases, are evident. The
estimate of the critical exponent for the divergence rate of the density of states of the quasiparticle excitation spectrum is particularly relevant to the high-T_c superconductivity scenario based on the van Hove singularities [25], [26], [27]. It is emphasized in Ref. [25] that the logarithmic singularity of a two-dimensional electron gas can, due to electronic correlations, turn into a power-law divergence resulting in an extended saddle point at the lattice momenta (π, 0) and (0, π). In the case of the

14. I.M. Barbour, A.J. Bell and E.G. Klepfish, Nucl. Phys. B 389, 285 (1993).
15. I.M. Lifshitz, JETP 38, 1569 (1960).
16. A.A. Abrikosov, Fundamentals of the Theory of Metals, North-Holland (1988).
17. P. Hall, The Bootstrap and Edgeworth Expansion, Springer (1992).
18. S.R. White et al., Phys. Rev. B 40, 506 (1989).
19. J.E. Hirsch, Phys. Rev. B 28, 4059 (1983).
20. M. Suzuki, Prog. Theor. Phys. 56, 1454 (1976).
21. A. Moreo, D. Scalapino and E. Dagotto, Phys. Rev. B 43, 11442 (1991).
22. N. Furukawa and M. Imada, J. Phys. Soc. Japan 61, 3331 (1992).
23. J. Kosterlitz and D. Thouless, J. Phys. C 6, 1181 (1973); J. Kosterlitz, J. Phys. C 7, 1046 (1974).
24. R. Kenna and A.C. Irving, unpublished.
25. K. Gofron et al., Phys. Rev. Lett. 73, 3302 (1994).
26. D.M. Newns, P.C. Pattnaik and C.C. Tsuei, Phys. Rev. B 43, 3075 (1991); D.M. Newns et al., Phys. Rev. Lett. 24, 1264 (1992); D.M. Newns et al., Phys. Rev. Lett. 73, 1264 (1994).
27. E. Dagotto, A. Nazarenko and A. Moreo, Phys. Rev. Lett. 74, 310 (1995).
28. A.A. Abrikosov, J.C. Campuzano and K. Gofron, Physica C (Amsterdam) 214, 73 (1993).
29. D.S. Dessau et al., Phys. Rev. Lett. 71, 2781 (1993); D.M. King et al., Phys. Rev. Lett. 73, 3298 (1994); P. Aebi et al., Phys. Rev. Lett. 72, 2757 (1994).
30. E. Dagotto, A. Nazarenko and M. Boninsegni, Phys. Rev. Lett. 73, 728 (1994).
31. N. Bulut, D.J. Scalapino and S.R. White, Phys. Rev. Lett. 73, 748 (1994).
32. S.R. White, Phys. Rev. B 44, 4670 (1991); M. Vekić and S.R. White, Phys. Rev. B 47, 1160 (1993).
33. C.E. Creffield, E.G. Klepfish, E.R. Pike and Sarben Sarkar, unpublished.

Figure Captions

Figure 1. Bootstrap distribution of normalized coefficients for expansion (14) at different update chemical potential µ₀ for
an 8² lattice. The corresponding power of the expansion is indicated in the top figure. (a) β = 5, (b) β = 6, (c) β = 7.5.

Figure 2. Bootstrap distributions for the Yang-Lee zeros in the complex µ plane closest to the real axis. (a) 10² lattice at β = 5, (b) 10² lattice at β = 6, (c) 8² lattice at β = 7.5.

Figure 3. Yang-Lee zeros in the complex µ plane closest to the real axis. (a) β = 5, (b) β = 6, (c) β = 7.5. The corresponding lattice size is shown in the top right-hand corner.

Figure 4. Angular distribution of the Yang-Lee zeros in the complex fugacity plane. Error bars are drawn where estimated. (a) β = 5, (b) β = 6, (c) β = 7.5.

Figure 5. Scaling of the imaginary part of µ₁ (Re(µ₁) ≈ 0.7) as a function of lattice size. αµ indicates the fit of the logarithmic scaling.

Figure 6. Electronic susceptibility as a function of chemical potential for an 8² lattice. The solid line represents the contribution of all the 2 n_x n_y zeros and the dotted line the contribution of the six zeros nearest to the real-µ axis. (a) β = 5, (b) β = 6, (c) β = 7.5.
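The reciprocal-pair property invoked in Section III (if y_i is a zero of the fugacity polynomial, so is 1/y_i) can be illustrated numerically. The sketch below uses a made-up palindromic coefficient list, not the simulated coefficients of Eq. (14); a palindromic polynomial (c_n = c_{N−n}) is exactly the class whose zeros come in reciprocal pairs:

```python
import numpy as np

# Toy fugacity polynomial Z(y) = sum_n c_n y^n with palindromic
# coefficients, guaranteeing zeros in reciprocal pairs y_i <-> 1/y_i.
coeffs = [1.0, 4.0, 9.0, 4.0, 1.0]     # hypothetical c_0..c_4
zeros = np.roots(coeffs[::-1])         # np.roots expects highest power first

# For every zero y_i, 1/y_i must also be a zero.
for y in zeros:
    assert min(abs(zeros - 1.0 / y)) < 1e-8

# The zeros closest to the positive real axis control the singularity of
# the free energy F = sum_i ln(y - y_i) as the lattice grows.
print(sorted(abs(zeros)))
```

This is why the analysis above may restrict attention to zeros with |y_i| < 1: each discarded zero is recovered as the reciprocal of a kept one.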
Introduction to Quantum Mechanics

Overview

Quantum Mechanics is a branch of Physics that describes the behavior of matter and energy at a microscopic level. This discipline has had a significant impact on modern science and technology, and its principles have been applied to the development of various fields, such as computing, cryptography and medicine. The study of Quantum Mechanics requires a basic understanding of the principles of Mathematics and Physics. The aim of this document is to provide an introduction to Quantum Mechanics and to provide a set of practice exercises with answers that will allow students to test their knowledge and understanding of the subject.

Fundamental Principles

The fundamental principles of Quantum Mechanics are based on the concept of wave-particle duality, which means that particles can behave as both waves and particles simultaneously. The behavior of particles at the microscopic level is probabilistic, and it is described by a wave function. A wave function is a complex function that describes the probability of finding a particle at a given location. The square of the amplitude of the wave function gives the probability density of finding the particle at that point in space. The wave function can be used to calculate various physical quantities, such as the position, momentum and energy of a particle.

Operators and Observables

In Quantum Mechanics, physical quantities are represented by operators. An operator is a mathematical function that acts on a wave function and generates a new wave function as a result. Operators are used to represent physical observables, such as the position, momentum and energy of a particle. The eigenvalues of an operator correspond to the possible results of a measurement of the corresponding observable. The eigenvectors of an operator correspond to the possible states of a particle.
The state of a particle is described by a linear combination of its eigenvectors, which is called a superposition.

Schrödinger Equation

The Schrödinger Equation is a mathematical equation that describes the time evolution of a wave function. It is based on the principle of conservation of energy, and it represents the motion of a quantum system in terms of its wave function. The time-independent equation is given by:

$$\hat{H}\Psi=E\Psi$$

where $\hat{H}$ is the Hamiltonian operator, $\Psi$ is the wave function, and E is the energy of the system. The Schrödinger Equation is the foundation of Quantum Mechanics, and it is used to calculate various physical properties of a particle, such as its energy and momentum.

Practice Exercises

1. Calculate the wave function for a particle that is in a 1D box of length L.
   - Answer: The wave function for a particle in a 1D box is given by:
     $$\Psi(x)=\sqrt{\frac{2}{L}}\sin{\frac{n\pi x}{L}}$$
     where n is a positive integer.
2. Derive the time-dependent Schrödinger Equation.
   - Answer: The time-dependent Schrödinger Equation is given by:
     $$i\hbar\frac{\partial\Psi}{\partial t}=\hat{H}\Psi$$
3. Calculate the momentum operator for a particle in 1D.
   - Answer: The momentum operator for a particle in 1D is given by:
     $$\hat{p_x}=-i\hbar\frac{\partial}{\partial x}$$
4. What is the uncertainty principle?
   - Answer: The uncertainty principle is a fundamental principle of Quantum Mechanics that states that the position and momentum of a particle cannot be measured simultaneously with arbitrary precision. Mathematically, it is given by:
     $$\Delta x\Delta p_x\geq\frac{\hbar}{2}$$
5. Calculate the energy of a particle in a 1D box of length L with quantum number n.
   - Answer: The energy of a particle in a 1D box is given by:
     $$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$

Conclusion

Quantum Mechanics is a fascinating and challenging field of study that has provided a deeper understanding of the behavior of matter and energy at the microscopic level.
The principles of Quantum Mechanics have been applied to various fields of study, including computing, cryptography and medicine, and they have contributed to significant advances in these fields. The practice exercises provided in this document are intended as a tool for students to test their knowledge and understanding of Quantum Mechanics. By solving these exercises, students will gain a deeper understanding of the fundamental principles of Quantum Mechanics and strengthen their problem-solving skills in this exciting field of study.
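The formulas from exercises 1 and 5 can be checked numerically. The sketch below (an illustration, not part of the original handout; the 1 nm box length is an arbitrary assumption) evaluates $E_n$ for an electron and verifies that $\Psi_n$ is normalized:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def box_energy(n, L, m=M_E):
    """E_n = n^2 pi^2 hbar^2 / (2 m L^2) for a particle in a 1D box."""
    return (n**2 * np.pi**2 * HBAR**2) / (2 * m * L**2)

def box_wavefunction(x, n, L):
    """Psi_n(x) = sqrt(2/L) sin(n pi x / L) inside the box."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

L = 1e-9                          # illustrative 1 nm box
x = np.linspace(0.0, L, 100001)

# Normalization check: the integral of |Psi|^2 over the box should be 1.
h = x[1] - x[0]
norm = np.sum(box_wavefunction(x, 3, L)**2) * h
print(f"norm of Psi_3: {norm:.6f}")

# E_n scales as n^2, so E_2/E_1 = 4 exactly.
print(f"E_2 / E_1 = {box_energy(2, L) / box_energy(1, L):.1f}")
```

Because the endpoints of $\Psi_n$ vanish, the simple Riemann sum above coincides with the trapezoidal rule here.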
To Further Study Reaction Mechanisms

Delving into the Reaction Mechanism: A Journey into the Unknown

The field of chemistry is vast and ever-evolving, with new discoveries and theories constantly pushing the boundaries of our understanding. At the heart of every chemical reaction lies a complex mechanism, a dance of atoms and molecules that transforms one set of substances into another. To explore these mechanisms further is to embark on a journey into the unknown, seeking answers to the fundamental questions of how and why chemical transformations occur.

The reaction mechanism is the step-by-step sequence of events that leads to the overall transformation of reactants into products. It involves the breaking and formation of chemical bonds, the transfer of electrons, and the rearrangement of atoms. Understanding these mechanisms is crucial for a variety of reasons: it can help us predict and control the outcome of reactions, optimize reaction conditions, and even design new reactions and synthetic pathways.

To delve into a reaction mechanism, one must first identify the reactants and products involved. This requires a thorough analysis of the chemical structures and properties of the substances involved. Once these are established, the next step is to propose a plausible sequence of events that can lead to the formation of the products from the reactants. This often involves the consideration of intermediate species that may form during the reaction and the various energy barriers that need to be overcome.

Experimental techniques play a crucial role in elucidating reaction mechanisms. Techniques such as spectroscopy, kinetic studies, and isotopic labeling can provide valuable insights into the reaction pathway. Spectroscopy, for example, can be used to identify the presence of intermediate species and monitor their evolution over time. Kinetic studies can reveal the rate-determining step of the reaction, providing information about the relative rates of different steps in the mechanism.
Isotopic labeling, on the other hand, can be used to track the flow of atoms and molecules within the reaction, providing direct evidence for the proposed mechanism.

Computational methods have also emerged as powerful tools for studying reaction mechanisms. Quantum chemical calculations can provide detailed insights into the energetics and electronic structure of the reactants, products, and intermediates involved. These calculations can help us understand the energetics of the reaction, identify potential transition states, and predict the reactivity of different species. While computational methods are becoming increasingly accurate and reliable, they must still be validated and corroborated with experimental data.

As we delve deeper into reaction mechanisms, we often encounter surprises and unexpected findings. This is the nature of scientific exploration, and it is what makes the field of chemistry so exciting and challenging. Each new discovery and understanding adds to our knowledge of the universe and our ability to manipulate and create new materials and compounds.

In conclusion, to further study reaction mechanisms is to embark on a journey of discovery and understanding. It requires a combination of experimental techniques, computational methods, and a keen eye for detail. As we continue to delve into the mysteries of chemical reactions, we not only expand our knowledge but also contribute to the progress of science and technology.
GAP-99-028

UV LIDAR System for Atmospheric Monitoring and Cloud Detection

Iztok Arčon, Andrej Filipčič, Marko Zavrtanik
Nova Gorica Polytechnic, Slovenia
Jožef Stefan Institute, Ljubljana, Slovenia

10th September 1999

Abstract

High-energy cosmic rays produce extensive showers in the atmosphere, which can either be detected on the ground or through fluorescence light produced by charged particles traversing the atmosphere. The latter detection technique, used in the Pierre Auger Observatory, requires continuous monitoring of light absorption between the fluorescence source and the detector. The LIDAR system uses a UV laser beam of similar wavelength to probe a specific region of the atmosphere. The beam back-scatters on haze and aerosols, while the reflected light is collected with a mirror onto a photomultiplier read by a computer. The pulse shape analysis gives an absorption-coefficient map of the sky. A simple system to measure the relative absorption coefficient is presented in the paper.

1 Introduction

The Pierre Auger fluorescence detector will collect nitrogen fluorescence light, which is emitted during the propagation of high-energy cosmic-ray air showers through the atmosphere. This light can be used to determine the longitudinal shower profile. Most of the nitrogen fluorescence is emitted in a narrow bandwidth between 300-400 nm. Although a clean atmosphere is relatively transparent in this range, in-situ measurements of atmospheric absorption and scattering parameters are necessary for accurate determination of particle densities in the air shower.

Laser-based spectroscopic techniques have shown to be a very useful tool for this purpose. They exploit different kinds of direct or indirect scattering of the laser light by atmospheric constituents. In this report we present a simple single-ended and single-wavelength laser system (LIDAR), which can be used to measure the transparency of the atmosphere for UV light at 355 nm in the troposphere, where most of the nitrogen fluorescence originates. The
same system can also be used for the detection of clouds or the presence of other aerosols.

2 Scattering and Attenuation of the Laser Beam in the Atmosphere

The basic idea of such a LIDAR system is to shoot a single, very short (5 ns) pulse of laser light into the atmosphere in a desired direction, and measure the backscattered light from the atmospheric constituents, i.e. gas molecules and aerosol particles. From the time interval between the transmission of the pulse and the detection of backscattered light, responses from different parts of the atmosphere along the beam path can be distinguished. The spatial resolution is proportional to the length of the pulse (1.5 m). The power of the backscattered light can be expressed by the single-scattering lidar equation

P(R) = P_0 (cτ/2) A β(R) R^{-2} exp(−2 ∫_0^R α(r) dr)   (1)

where P_0 is the transmitted power, τ the pulse length, A the effective detector area, R is the range to the volume element from which the scattered light is received simultaneously at a given time, β(R) the volume backscattering coefficient of that part of the atmosphere and α(R) the corresponding volume extinction coefficient. The backscattering coefficient and extinction coefficient can be expressed as a sum of contributions from the gaseous and particle (aerosol) phases in the atmosphere.

Elastic scattering of UV light by gas molecules is described by Rayleigh scattering, where β and α are directly proportional to the number of gas molecules per unit volume. In the case of scattering by aerosol particles with diameters from about … to a few millimeters, it is very difficult or impossible to describe the scattering properties exactly, since the scattering amplitude does not depend only on the number of aerosol particles per unit volume, but also on their size distribution, shape and composition. The use of different approximations is necessary (see for example Mie scattering, where aerosol particles are approximated by a dispersion of homogeneous spheres of varying size).

3 Homogeneous Atmosphere

In the general case, when there are different aerosol particles present in the
atmosphere (water droplets, dust, ...) and the density of the particles varies from one part of the atmosphere to the other, it is impossible to determine both the scattering and the attenuation coefficient from the measured single-wavelength lidar signal. In some special cases, however, when one of the parameters, β or α, is known, or some relation between them can be assumed, a solution of lidar equation (1) in terms of β and α can be found.

In the case of propagation of the laser beam through a homogeneous atmosphere, the lidar equation (1) has a much simpler form. The parameters β and α are constants, independent of range, so the detected power is proportional to

P(R) ∝ R^{-2} exp(−2αR)   (2)

and β and α can be determined from the measured signal by an exponential fit.

4 Lidar System

The layout of the system is shown in figure 1 (Scheme of Lidar system). It consists of a laser with a frequency tripler and a telescope. The laser provides 5 ns pulses at 355 nm in the third harmonic. The energy per pulse is 6 mJ at a repetition rate of 1-15 Hz. Beam stability at room temperature is 4%, beam divergence is below 3 mrad. The telescope is constructed with a parabolic mirror, originally built for the Čerenkov detector at the DELPHI experiment at CERN [1]. The mirror diameter is 0.8 m with a focal distance of 0.41 m. The mirror surface is coated with Al and a SiO protection layer. The reflectivity for UV light (300-400 nm) is about 90% or better. The spot size in the focus has a diameter of 1.4 mm. A photomultiplier tube (Philips 2020Q) operating at … is used as a detector. It is mounted in the mirror focus and protected by a broad-band UV filter (Edmund Scientific UG-1, bandpass 300-400 nm, 60% transmissivity at 355 nm) to reduce the background light. The acceptance angle, which is … in the present configuration, can be reduced down to … without any significant loss of light collection efficiency. The signal from the PMT is recorded as a function of time by a digital oscilloscope (Tektronix 754C) to provide a range-resolved measure of atmospheric scattering.
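For a homogeneous atmosphere, equation (2) says that ln(P(R)·R²) is linear in R with slope −2α, so the attenuation length 1/α can be recovered from a straight-line fit to the range-corrected signal. A minimal sketch with synthetic data (the 10 km attenuation length, 1% noise level, and fit range are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic homogeneous-atmosphere lidar return, eq. (2):
# P(R) = C * R^-2 * exp(-2*alpha*R), with an illustrative alpha.
alpha_true = 1.0 / 10.0          # extinction coefficient, 1/km
R = np.linspace(1.5, 3.5, 200)   # range, km
P = 5.0 * R**-2 * np.exp(-2 * alpha_true * R)
P *= 1 + 0.01 * rng.standard_normal(R.size)  # 1% measurement noise

# Range-correct the signal and fit ln(P R^2) = const - 2*alpha*R.
y = np.log(P * R**2)
slope, intercept = np.polyfit(R, y, 1)
alpha_fit = -slope / 2
print(f"attenuation length: {1 / alpha_fit:.2f} km")
```

This is exactly the slope extraction used later in the paper's Attenuation Length section, where the 1.5-3.5 km range gave an attenuation length of about 10 km.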
The system collects the data in single-shot mode. Light pulses were fired in the vertical direction at a frequency of 5 Hz. The collection of the signal from the backscattered light is triggered by a laser pulse (Q-switch sync. out). The sampling rate was 500 MSamples/s (2 ns/channel) while the recording length was 50,000 channels. The oscilloscope was programmed to average 15 consecutive spectra to improve the signal-to-noise ratio, thus providing one trace read every 3 s. Voltage resolution was 8 bit in a single-shot spectrum while averaging was performed with 16 bit precision. Data acquisition and storage of the collected spectra was controlled by a Pentium II PC with a GPIB connection to the oscilloscope. The data acquisition software uses the Linux-Lab-Project [2] GPIB driver, while the GPIB libraries are integrated into the ROOT framework [3] to perform online data analysis. Total power consumption of the whole system is about 1 kW.

The current prototype requires manual alignment of the laser beam with the telescope axis. Alignment is performed by scanning for the maximum intensity of backscattered light at heights larger than 1 km. The direction of the laser beam can be adjusted by a beam steering mirror in the angular range of … degrees. A system for remote control is being developed.
Pointing accuracy of 0.1 is planned, in the range of … for the azimuthal angle and … for the horizontal angle. Due to the physical size of the components and the required precision of the optical alignment, the system is planned to operate at a permanent location. It gives stable operation conditions in the range of ambient temperatures from 30 Celsius down to 4 Celsius.

5 Measurements and Analysis

Test measurements were performed at one location in the local mountains in Slovenia at a height of 1000 m above sea level. The fact that the system works in the UV region resulted in a high insensitivity to the light pollution produced by the wolfram bulbs used in the DAQ area. The background signal, for which the pulse height distribution corresponds to that of single photon detection, had a rate of 2.5 MHz, showing reasonable agreement with a clear moonless night simulation for the given detection system. The LIDAR spectra were taken during a moonless night in November in winter conditions. The environmental temperature was around … Celsius. The sky was clear, without clouds, and visibility conditions were good. A typical raw signal, averaged over 15 consecutive laser shots directed vertically into the night sky in a period of three seconds, is shown in figure 2.
The signal is plotted as a function of the distance to the scattering volume of the atmosphere producing the backscattered signal.

Figure 2: Oscilloscope trace for an average of 15 shots. Heights 0 and 13.5 km correspond to channels 500 and 50,000, respectively. The graph on the right has the … axis enlarged to show the resolving power of the system.

The distance is determined directly from the time interval since the transmission of the pulse (R = ct/2). A characteristic decrease of the signal due to the decrease of the solid angle of the receiver is observed. An increased signal, especially in the range from 10.5 to 11.5 km (see the right picture in figure 2), is caused by layers of thin haze. The maximal range of the received signal at the given atmospheric conditions was about 14 km.

6 Attenuation Length

In the range from about 1.5 to 3.5 km the atmosphere was very homogeneous and the LIDAR signal in that range is well described by equation (2). This can best be seen in the logarithmic plot of the signal multiplied by the square of the range (Figure 3):

ln(P(R) R^2) = const − 2αR   (3)

In the homogeneous part of the atmosphere the attenuation coefficient α can be determined directly from the slope of the measured curve (figure 3, blue line). The obtained attenuation length 1/α in the 1.5 to 3.5 km range was found to be about 10 km. The variation of the attenuation length during one hour is shown in figure 4. The accuracy in the determination of the attenuation length is …, while it fluctuates by …% during one hour.

Figure 3: Logarithm of the signal multiplied by the square of the range.
Figure 4: Variation of the attenuation length during one hour.

7 Backscattering Amplitude

For the determination of the backscattering amplitude, a precise calibration of the detection system is necessary, as well as permanent monitoring of the power of the transmitted laser pulses. During the test measurements, the emitted-power control system was not yet available; therefore, we can only observe relative changes in the backscattering amplitude as a function of the range. In our case of an almost clear atmosphere, we can presume that the attenuation
length does not change significantly at ranges beyond 3.5 km. The observed increases of the backscattered signal (Figures 2 and 3) are proportional to the increased density of aerosol particles (water droplets). In figure 5 the relative changes of the density of aerosol particles at different layers of the atmosphere can be observed.

Figure 5: Relative aerosol-particle density as a function of time and range.

The results presented here illustrate the challenging case of detecting the change in backscattering amplitude caused by a barely visible haze at a distance of 11.5 km. An at least an order of magnitude higher signal is detected when the laser light is scattered on optically denser clouds, giving in this way the possibility to determine the cloud base height with a resolution of 2 m.

8 Conclusions

The LIDAR system presented in this paper is a suitable technique for atmospheric monitoring for the Pierre Auger Observatory. It provides the measurement of the fluorescence light absorption up to at least 15 km with a sampling rate of a few Hertz to a precision below 1%. A design of the system to be used for the Pierre Auger Project depends on the requirements and the final design of the fluorescence detector.

References

[1] P. Beltran et al. [Democritos-Cracow-NBI-NIKHEF-Uppsala-Wupertal Collaboration], Contribution to the Int. Symp. on Position Detectors for High Energy Physics, Dubna, USSR, Sep 22-25, 1987.
[2] http://www.llp.fu-berlin.de
[3] http://root.cern.ch
Unit 1 (Elective 6)

1. Abstract
1) Adj. profound; abstract
Astronomy is an abstract (profound) subject.
The word "honesty" is an abstract noun.
Beauty is abstract but a house is not.
2) V.
(1) "to extract":
The workers are abstracting metal from ore.
Rubber is abstracted from trees.
Salt can be abstracted from sea water.
(2) "to divert (attention)": distract one's attention from sth — to draw one's attention away from something.
Nothing can distract his attention from his work.
(3) "to summarize; to write an abstract":
He is abstracting a story for a book review.
3) n. an abstract of a lecture

2. Would you rather have Chinese or Western-style paintings in your home?
would rather do sth — would prefer to do sth
would rather sb did sth — would prefer sb to do sth
"would prefer to do one thing rather than another":
would rather do sth than do sth = would do sth rather than do sth = prefer to do sth rather than do sth = prefer doing sth to doing sth
I would rather stay at home today.
Central limit theorem

(Figure: Histogram plot of the average proportion of heads in a fair coin toss, over a large number of sequences of coin tosses.)

In probability theory, the central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed (Rice 1995). The central limit theorem (in its common form) requires the random variables to be identically distributed. Since real-world quantities are often the balanced sum of many unobserved random events, this theorem provides a partial explanation for the prevalence of the normal probability distribution. The CLT also justifies the approximation of large-sample statistics to the normal distribution in controlled experiments.

A simple example of the central limit theorem is given by the problem of rolling a large number of dice, each of which is weighted unfairly in some unknown way. The distribution of the sum (or average) of the rolled numbers will be well approximated by a normal distribution, the parameters of which can be determined empirically.

For other generalizations for finite variance which do not require identical distribution, see Lindeberg's condition, Lyapunov's condition, and the work of Gnedenko and Kolmogorov.

In more general probability theory, a central limit theorem is any of a set of weak-convergence theorems. They all express the fact that a sum of many independent random variables will tend to be distributed according to one of a small set of "attractor" (i.e. stable) distributions. When the variance of the variables is finite, the "attractor" distribution is the normal distribution.
Specifically, the sum of a number of random variables with power law tail distributions decreasing as 1/|x|^{α+1} where 0 < α < 2 (and therefore having infinite variance) will tend to a stable distribution with stability parameter (or index of stability) of α as the number of variables grows.[1] This article is concerned only with the classical (i.e. finite variance) central limit theorem.

Classical central limit theorem

(Figure: A distribution being "smoothed out" by summation, showing the original density of the distribution and three subsequent summations; see Illustration of the central limit theorem for further details.)

The central limit theorem is also known as the second fundamental theorem of probability.[2] (The Law of large numbers is the first.)

Let X_1, X_2, X_3, …, X_n be a sequence of n independent and identically distributed (iid) random variables each having finite values of expectation µ and variance σ² > 0. The central limit theorem states that as the sample size n increases, the distribution of the sample average of these random variables approaches the normal distribution with mean µ and variance σ²/n, irrespective of the shape of the common distribution of the individual terms X_i.[3]

For a more precise statement of the theorem, let S_n be the sum of the n random variables, given by

S_n = X_1 + X_2 + … + X_n.

Then, if we define new random variables

Z_n = (S_n − nµ) / (σ√n),

they will converge in distribution to the standard normal distribution N(0,1) as n approaches infinity. N(0,1) is thus the asymptotic distribution of the Z_n's. This is often written as

Z_n → N(0,1)  as  n → ∞.

Z_n can also be expressed as

Z_n = (X̄_n − µ) / (σ/√n),

where

X̄_n = S_n / n

is the sample mean.

Convergence in distribution means that, if Φ(z) is the cumulative distribution function of N(0,1), then for every real number z, we have

lim_{n→∞} P(Z_n ≤ z) = Φ(z),

or

lim_{n→∞} P((X̄_n − µ)√n / σ ≤ z) = Φ(z).

Proof

For a theorem of such fundamental importance to statistics and applied probability, the central limit theorem has a remarkably simple proof using characteristic functions. It is similar to the proof of a (weak) law of large numbers.
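Before turning to the proof, the classical statement is easy to check empirically: standardizing sums of iid draws from a decidedly non-normal distribution (here uniform on [0, 1], so µ = 1/2 and σ² = 1/12) yields values whose distribution is close to N(0,1). A quick simulation sketch (sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1000        # terms per sum
trials = 20000  # number of independent sums

# Uniform(0, 1): mu = 1/2, sigma^2 = 1/12 -- far from normal.
mu, sigma = 0.5, np.sqrt(1 / 12)

X = rng.random((trials, n))
S = X.sum(axis=1)
Z = (S - n * mu) / (sigma * np.sqrt(n))  # standardized sums Z_n

# If the CLT holds, Z has mean ~0, variance ~1, and
# P(Z <= 1.96) should be close to Phi(1.96) ~ 0.975.
print(f"mean        {Z.mean():+.3f}")
print(f"variance    {Z.var():.3f}")
print(f"P(Z<=1.96)  {(Z <= 1.96).mean():.3f}")
```

The same experiment with any other finite-variance distribution (after adjusting µ and σ) gives the same limiting behavior, which is the content of the theorem.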
For any random variable Y with zero mean and unit variance (var(Y) = 1), the characteristic function of Y is, by Taylor's theorem,

φ_Y(t) = 1 − t²/2 + o(t²),

where o(t²) is "little o notation" for some function of t that goes to zero more rapidly than t². Letting Y_i be (X_i − μ)/σ, the standardized value of X_i, it is easy to see that the standardized mean of the observations X_1, X_2, …, X_n is

Z_n = (Y_1 + Y_2 + … + Y_n) / √n.

By simple properties of characteristic functions, the characteristic function of Z_n is

φ_{Z_n}(t) = [φ_Y(t/√n)]^n = [1 − t²/(2n) + o(t²/n)]^n → e^{−t²/2}  as  n → ∞.

But this limit is just the characteristic function of a standard normal distribution N(0,1), and the central limit theorem follows from the Lévy continuity theorem, which confirms that the convergence of characteristic functions implies convergence in distribution.

Convergence to the limit

The central limit theorem gives only an asymptotic distribution. As an approximation for a finite number of observations, it provides a reasonable approximation only when close to the peak of the normal distribution; it requires a very large number of observations to stretch into the tails.

If the third central moment E((X_1 − μ)³) exists and is finite, then the above convergence is uniform and the speed of convergence is at least on the order of 1/n^{1/2} (see Berry-Esseen theorem). The convergence to the normal distribution is monotonic, in the sense that the entropy of Z_n increases monotonically to that of the normal distribution, as proven in Artstein, Ball, Barthe and Naor (2004).

The central limit theorem applies in particular to sums of independent and identically distributed discrete random variables. A sum of discrete random variables is still a discrete random variable, so that we are confronted with a sequence of discrete random variables whose cumulative probability distribution function converges towards a cumulative probability distribution function corresponding to a continuous variable (namely that of the normal distribution).
This means that if we build a histogram of the realisations of the sum of n independent identical discrete variables, the curve that joins the centers of the upper faces of the rectangles forming the histogram converges toward a Gaussian curve as n approaches infinity. The binomial distribution article details such an application of the central limit theorem in the simple case of a discrete variable taking only two possible values.

Relation to the law of large numbers

The law of large numbers as well as the central limit theorem are partial solutions to a general problem: "What is the limiting behavior of S_n as n approaches infinity?" In mathematical analysis, asymptotic series are one of the most popular tools employed to approach such questions.

Suppose we have an asymptotic expansion of ƒ(n):

ƒ(n) = a₁φ₁(n) + a₂φ₂(n) + O(φ₃(n)).

Dividing both parts by φ₁(n) and taking the limit will produce a₁, the coefficient of the highest-order term in the expansion, which represents the rate at which ƒ(n) changes in its leading term.

Informally, one can say: "ƒ(n) grows approximately as a₁φ₁(n)". Taking the difference between ƒ(n) and its approximation and then dividing by the next term in the expansion, we arrive at a more refined statement about ƒ(n):

(ƒ(n) − a₁φ₁(n)) / φ₂(n) → a₂.

Here one can say that the difference between the function and its approximation grows approximately as a₂φ₂(n). The idea is that dividing the function by appropriate normalizing functions, and looking at the limiting behavior of the result, can tell us much about the limiting behavior of the original function itself.

Informally, something along these lines is happening when the sum, S_n, of independent identically distributed random variables, X_1, …, X_n, is studied in classical probability theory. If each X_i has finite mean μ, then by the Law of Large Numbers, S_n/n → μ.[4] If in addition each X_i has finite variance σ², then by the Central Limit Theorem,

(S_n − nμ)/√n → ξ,

where ξ is distributed as N(0, σ²).
This provides values of the first two constants in the informal expansion

S_n ≈ μn + ξ√n.

In the case where the X_i's do not have finite mean or variance, convergence of the shifted and rescaled sum can also occur with different centering and scaling factors:

(S_n − a_n)/b_n → Ξ,

or informally

S_n ≈ a_n + Ξ b_n.

Distributions Ξ which can arise in this way are called stable.[5] Clearly, the normal distribution is stable, but there are also other stable distributions, such as the Cauchy distribution, for which the mean or variance are not defined. The scaling factor b_n may be proportional to n^c, for any c ≥ 1/2; it may also be multiplied by a slowly varying function of n.[6][7]

The Law of the Iterated Logarithm tells us what is happening "in between" the Law of Large Numbers and the Central Limit Theorem. Specifically it says that the normalizing function √(n log log n), intermediate in size between n of the Law of Large Numbers and √n of the central limit theorem, provides a non-trivial limiting behavior.

Illustration

Given its importance to statistics, a number of papers and computer packages are available that demonstrate the convergence involved in the central limit theorem.[8]

Alternative statements of the theorem

Density functions

The density of the sum of two or more independent variables is the convolution of their densities (if these densities exist).
Thus the central limit theorem can be interpreted as a statement about the properties of density functions under convolution: the convolution of a number of density functions tends to the normal density as the number of density functions increases without bound, under the conditions stated above.

Characteristic functions

Since the characteristic function of a convolution is the product of the characteristic functions of the densities involved, the central limit theorem has yet another restatement: the product of the characteristic functions of a number of density functions becomes close to the characteristic function of the normal density as the number of density functions increases without bound, under the conditions stated above. However, to state this more precisely, an appropriate scaling factor needs to be applied to the argument of the characteristic function.

An equivalent statement can be made about Fourier transforms, since the characteristic function is essentially a Fourier transform.

Extensions to the theorem

Multidimensional central limit theorem

We can easily extend proofs using characteristic functions to cases where each individual X_i is an independent and identically distributed random vector, with mean vector μ and covariance matrix Σ (amongst the individual components of the vector). Now, if we take the summations of these vectors as being done componentwise, then the multidimensional central limit theorem states that when scaled, these converge to a multivariate normal distribution.

Products of positive random variables

The logarithm of a product is simply the sum of the logarithms of the factors. Therefore when the logarithm of a product of random variables that take only positive values approaches a normal distribution, the product itself approaches a log-normal distribution.
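This product form is easy to see numerically: the log of a product of iid positive factors is a sum of iid logs, so the standardized log-product behaves like the standardized sums of the classical theorem. A sketch (the choice of factors exp(U) with U uniform on [0, 1] is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

n = 500         # factors per product
trials = 20000  # independent products

# Positive iid factors exp(U), U ~ Uniform(0, 1); the log of each
# factor is U itself, so mu = 1/2 and sigma^2 = 1/12 per factor.
U = rng.random((trials, n))
log_product = U.sum(axis=1)  # log of the product of the exp(U) factors

# Standardize the log-product; by the CLT it is approximately N(0,1),
# i.e. the product itself is approximately log-normal.
Z = (log_product - n * 0.5) / (np.sqrt(1 / 12) * np.sqrt(n))
print(f"mean {Z.mean():+.3f}, variance {Z.var():.3f}")
```

Working in log space also avoids the floating-point overflow that multiplying hundreds of factors directly could cause.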
Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the products of different random factors, so they follow a log-normal distribution. Whereas the central limit theorem for sums of random variables requires the condition of finite variance, the corresponding theorem for products requires the corresponding condition that the density function be square-integrable (see Rempala 2002).

Lack of identical distribution

The central limit theorem also applies in the case of sequences that are not identically distributed, provided one of a number of conditions applies.

Lyapunov condition

Let X_n be a sequence of independent random variables defined on the same probability space. Assume that X_n has finite expected value μ_n and finite standard deviation σ_n. We define

s_n² = σ₁² + σ₂² + … + σ_n².

If for some δ > 0, the expected values E(|X_i − μ_i|^{2+δ}) are finite for every i ∈ N and Lyapunov's condition

lim_{n→∞} (1/s_n^{2+δ}) Σ_{i=1}^{n} E(|X_i − μ_i|^{2+δ}) = 0

is satisfied, then the distribution of the random variable

Z_n = (Σ_{i=1}^{n} (X_i − μ_i)) / s_n

converges to the standard normal distribution N(0, 1).

Lindeberg condition

In the same setting and with the same notation as above, we can replace the Lyapunov condition with the following weaker one (from Lindeberg in 1920). For every ε > 0,

lim_{n→∞} (1/s_n²) Σ_{i=1}^{n} E((X_i − μ_i)² · 1{|X_i − μ_i| > ε s_n}) = 0,

where 1{…} is the indicator function. Then the distribution of the standardized sum Z_n converges towards the standard normal distribution N(0,1).

Beyond the classical framework

Asymptotic normality, that is, convergence to the normal distribution after appropriate shift and rescaling, is a phenomenon much more general than the classical framework treated above, namely sums of independent random variables (or vectors). New frameworks are revealed from time to time; no single unifying framework is available for now.

Under weak dependence

A useful generalization of a sequence of independent, identically distributed random variables is a mixing random process in discrete time; "mixing" means, roughly, that random variables temporally far apart from one another are nearly independent.
Several kinds of mixing are used in ergodic theory and probability theory. See especially strong mixing (also called α-mixing), defined by α(n) → 0 where α(n) is the so-called strong mixing coefficient.

A simplified formulation of the central limit theorem under strong mixing is given in (Billingsley 1995, Theorem 27.4):

Theorem. Suppose that X_1, X_2, … is stationary and α-mixing with α_n = O(n^{−5}) and that E(X_n) = 0 and E(X_n^{12}) < ∞. Denote S_n = X_1 + … + X_n; then the limit σ² = lim_n n^{−1} E(S_n²) exists, and if σ ≠ 0 then S_n/(σ√n) converges in distribution to N(0, 1).

In fact, σ² = E(X₁²) + 2 Σ_{k=1}^{∞} E(X₁X_{1+k}), where the series converges absolutely.

The assumption σ ≠ 0 cannot be omitted, since the asymptotic normality fails for X_n = Y_n − Y_{n−1} where Y_n is another stationary sequence.

For the theorem in full strength see (Durrett 1996, Sect. 7.7(c), Theorem (7.8)); the assumption E(X_n^{12}) < ∞ is replaced with E(|X_n|^{2+δ}) < ∞, and the assumption α_n = O(n^{−5}) is replaced with a summability condition on the mixing coefficients; existence of such δ > 0 ensures the conclusion. For an encyclopedic treatment of limit theorems under mixing conditions see (Bradley 2005).

Martingale central limit theorem

Theorem. Let a martingale M_n satisfy

• (1/n) Σ_{k=1}^{n} E((M_k − M_{k−1})² | M_1, …, M_{k−1}) → 1 in probability as n tends to infinity,
• for every ε > 0, (1/n) Σ_{k=1}^{n} E((M_k − M_{k−1})² ; |M_k − M_{k−1}| > ε√n) → 0 as n tends to infinity;

then M_n/√n converges in distribution to N(0,1) as n tends to infinity.

See (Durrett 1996, Sect. 7.7, Theorem (7.4)) or (Billingsley 1995, Theorem 35.12).

Caution: The restricted expectation E(X; A) should not be confused with the conditional expectation E(X|A) = E(X; A)/P(A).

Convex bodies

Theorem (Klartag 2007, Theorem 1.2). There exists a sequence ε_n ↓ 0 for which the following holds. Let n ≥ 1, and let random variables X_1, …, X_n have a log-concave joint density ƒ such that ƒ(x_1, …, x_n) = ƒ(|x_1|, …, |x_n|) for all x_1, …, x_n, and E(X_k²) = 1 for all k = 1, …, n.
Then the distribution of (X_1 + … + X_n)/√n is ε_n-close to N(0, 1) in the total variation distance.

These two ε_n-close distributions have densities (in fact, log-concave densities); thus, the total variation distance between them is the integral of the absolute value of the difference between the densities. Convergence in total variation is stronger than weak convergence.

An important example of a log-concave density is a function constant inside a given convex body and vanishing outside; it corresponds to the uniform distribution on the convex body, which explains the term "central limit theorem for convex bodies".

Another example: ƒ(x_1, …, x_n) = const · exp(−(|x_1|^α + … + |x_n|^α)^β) where α > 1 and αβ > 1. If β = 1 then ƒ(x_1, …, x_n) factorizes into const · exp(−|x_1|^α) … exp(−|x_n|^α), which means independence of X_1, …, X_n. In general, however, they are dependent.

The condition ƒ(x_1, …, x_n) = ƒ(|x_1|, …, |x_n|) ensures that X_1, …, X_n are of zero mean and uncorrelated; still, they need not be independent, nor even pairwise independent. By the way, pairwise independence cannot replace independence in the classical central limit theorem (Durrett 1996, Section 2.4, Example 4.5).

Here is a Berry-Esseen type result.

Theorem (Klartag 2008, Theorem 1). Let X_1, …, X_n satisfy the assumptions of the previous theorem; then

|P(a ≤ (X_1 + … + X_n)/√n ≤ b) − Φ(b) + Φ(a)| ≤ C·…

for all a < b; here C is a universal (absolute) constant. Moreover, for every c_1, …, c_n ∈ R such that c_1² + … + c_n² = 1, the analogous bound holds for c_1X_1 + … + c_nX_n.

A more general case is treated in (Klartag 2007, Theorem 1.1). The condition ƒ(x_1, …, x_n) = ƒ(|x_1|, …, |x_n|) is replaced with much weaker conditions: E(X_k) = 0, E(X_k²) = 1, E(X_kX_ℓ) = 0 for 1 ≤ k < ℓ ≤ n. The distribution of (X_1 + … + X_n)/√n need not be approximately normal (in fact, it can be uniform). However, the distribution of c_1X_1 + … + c_nX_n is close to N(0,1) (in the total variation distance) for most vectors (c_1, …, c_n) according to the uniform distribution on the sphere c_1² + … + c_n² = 1.

Lacunary trigonometric series

Theorem (Salem-Zygmund).
Let U be a random variable distributed uniformly on (0, 2π), and X_k = r_k cos(n_k U + a_k), where
• the n_k satisfy the lacunarity condition: there exists q > 1 such that n_{k+1} ≥ q n_k for all k,
• the r_k are such that r_1² + r_2² + … = ∞ and r_k²/(r_1² + … + r_k²) → 0,
• 0 ≤ a_k < 2π.
Then (X_1 + … + X_k)/√(r_1² + … + r_k²) converges in distribution to N(0, 1/2).

See (Zygmund 1959, Sect. XVI.5, Theorem (5-5)) or (Gaposhkin 1966, Theorem 2.1.13).

Gaussian polytopes

Theorem (Barany & Vu 2007, Theorem 1.1). Let A_1, …, A_n be independent random points on the plane R², each having the two-dimensional standard normal distribution. Let K_n be the convex hull of these points, and X_n the area of K_n. Then (X_n − E(X_n))/√Var(X_n) converges in distribution to N(0, 1) as n tends to infinity.

The same holds in all dimensions (2, 3, …). The polytope K_n is called a Gaussian random polytope. A similar result holds for the number of vertices (of the Gaussian polytope), the number of edges, and in fact faces of all dimensions (Barany & Vu 2007, Theorem 1.2).

Linear functions of orthogonal matrices

A linear function of a matrix M is a linear combination of its elements (with given coefficients), M ↦ tr(AM), where A is the matrix of the coefficients; see Trace_(linear_algebra)#Inner product.

A random orthogonal matrix is said to be distributed uniformly if its distribution is the normalized Haar measure on the orthogonal group O(n, R); see Rotation matrix#Uniform random rotation matrices.

Theorem (Meckes 2008). Let M be a random orthogonal n×n matrix distributed uniformly, and A a fixed n×n matrix such that tr(AA*) = n, and let X = tr(AM). Then the distribution of X is close to N(0, 1) in the total variation metric, up to an explicit error bound that tends to 0 as n → ∞.

Subsequences

Theorem (Gaposhkin 1966, Sect. 1.5). Let random variables X_1, X_2, … ∈ L_2(Ω) be such that X_n → 0 weakly in L_2(Ω) and X_n² → 1 weakly in L_1(Ω).
Then there exist integers n_1 < n_2 < … such that (X_{n_1} + … + X_{n_k})/√k converges in distribution to N(0, 1) as k tends to infinity.

Applications and examples

A histogram plot of monthly accidental deaths in the US, between 1973 and 1978, exhibits normality, due to the central limit theorem.

There are a number of useful and interesting examples and applications arising from the central limit theorem (Dinov, Christou & Sanchez 2008). See e.g. [9], presented as part of the SOCR CLT Activity [10].
• The probability distribution for total distance covered in a random walk (biased or unbiased) will tend toward a normal distribution.
• Flipping a large number of coins will result in a normal distribution for the total number of heads (or equivalently total number of tails).

From another viewpoint, the central limit theorem explains the common appearance of the "bell curve" in density estimates applied to real-world data. In cases like electronic noise, examination grades, and so on, we can often regard a single measured value as the weighted average of a large number of small effects. Using generalisations of the central limit theorem, we can then see that this would often (though not always) produce a final distribution that is approximately normal.

In general, the more a measurement is like the sum of independent variables with equal influence on the result, the more normality it exhibits. This justifies the common use of this distribution to stand in for the effects of unobserved variables in models like the linear model.

Signal processing

Signals can be smoothed by applying a Gaussian filter, which is just the convolution of a signal with an appropriately scaled Gaussian function.
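Gaussian smoothing, and the central-limit argument for approximating it by repeated moving averages, can be sketched numerically. The kernel width and number of passes below are illustrative assumptions, not values from the text:

```python
import numpy as np

def box_kernel(width):
    # Normalized moving-average (uniform) kernel of the given width.
    return np.ones(width) / width

def iterated_box_kernel(width, passes):
    # Convolve a box kernel with itself repeatedly; by the central limit
    # theorem the result approaches a Gaussian-shaped kernel whose
    # variance is the sum of the individual box variances.
    out = box_kernel(width)
    for _ in range(passes - 1):
        out = np.convolve(out, box_kernel(width))
    return out

def kernel_variance(kernel):
    # Variance of a normalized kernel about its centre of mass.
    x = np.arange(len(kernel))
    mean = np.sum(x * kernel)
    return np.sum((x - mean) ** 2 * kernel)

# A single discrete box of width w has variance (w**2 - 1) / 12, and
# variances add under convolution: sigma^2 = sigma_1^2 + ... + sigma_n^2.
w, passes = 5, 4
k = iterated_box_kernel(w, passes)
print(kernel_variance(k))  # ~ passes * (w**2 - 1) / 12
```

Smoothing a signal with `np.convolve(signal, k, mode='same')` then approximates a Gaussian filter whose variance is the sum of the box variances.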
Due to the central limit theorem, this smoothing can be approximated by several filter steps that can be computed much faster, like the simple moving average. The central limit theorem implies that to achieve a Gaussian of variance σ², n filters with windows of variances σ_1², …, σ_n², with σ² = σ_1² + ⋯ + σ_n², must be applied.

History

Tijms (2004, p. 169) writes:

“The central limit theorem has an interesting history. The first version of this theorem was postulated by the French-born mathematician Abraham de Moivre who, in a remarkable article published in 1733, used the normal distribution to approximate the distribution of the number of heads resulting from many tosses of a fair coin. This finding was far ahead of its time, and was nearly forgotten until the famous French mathematician Pierre-Simon Laplace rescued it from obscurity in his monumental work Théorie Analytique des Probabilités, which was published in 1812. Laplace expanded De Moivre's finding by approximating the binomial distribution with the normal distribution. But as with De Moivre, Laplace's finding received little attention in his own time. It was not until the nineteenth century was at an end that the importance of the central limit theorem was discerned, when, in 1901, Russian mathematician Aleksandr Lyapunov defined it in general terms and proved precisely how it worked mathematically. Nowadays, the central limit theorem is considered to be the unofficial sovereign of probability theory.”

Sir Francis Galton (Natural Inheritance, 1889) described the Central Limit Theorem as:

“I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the "Law of Frequency of Error". The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement, amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway.
It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshaled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along.”

The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by George Pólya in 1920 in the title of a paper.[11][12] Pólya referred to the theorem as "central" due to its importance in probability theory. According to Le Cam, the French school of probability interprets the word central in the sense that "it describes the behaviour of the centre of the distribution as opposed to its tails".[12] The abstract of the paper On the central limit theorem of calculus of probability and the problem of moments by Pólya[11] in 1920 translates as follows.

“The occurrence of the Gaussian probability density e^(−x²) in repeated experiments, in errors of measurements, which result in the combination of very many and very small elementary errors, in diffusion processes etc., can be explained, as is well-known, by the very same limit theorem, which plays a central role in the calculus of probability. The actual discoverer of this limit theorem is to be named Laplace; it is likely that its rigorous proof was first given by Tschebyscheff and its sharpest formulation can be found, as far as I am aware of, in an article by Liapounoff.
[...]”

A thorough account of the theorem's history, detailing Laplace's foundational work, as well as Cauchy's, Bessel's and Poisson's contributions, is provided by Hald.[13] Two historical accounts, one covering the development from Laplace to Cauchy, the second the contributions by von Mises, Pólya, Lindeberg, Lévy, and Cramér during the 1920s, are given by Hans Fischer.[14][15] Le Cam describes a period around 1935.[12] See Bernstein (1945) for a historical discussion focusing on the work of Pafnuty Chebyshev and his students Andrey Markov and Aleksandr Lyapunov that led to the first proofs of the CLT in a general setting.

A curious footnote to the history of the Central Limit Theorem is that a proof of a result similar to the 1922 Lindeberg CLT was the subject of Alan Turing's 1934 Fellowship Dissertation for King's College at the University of Cambridge. Only after submitting the work did Turing learn it had already been proved. Consequently, Turing's dissertation was never published.[16][17][18]

Notes

[1] Johannes Voit (2003), The Statistical Mechanics of Financial Markets (Texts and Monographs in Physics), Springer-Verlag, p. 124, ISBN 3-540-00978-7.
[2] p. 325, Introduction to Probability, 2nd ed., Charles Miller Grinstead and James Laurie Snell, AMS Bookstore, 1997, ISBN 0821807498.
[3] Theorem 27.1, p. 357, Probability and Measure, Patrick Billingsley, 3rd ed., Wiley, 1995, ISBN 0-471-00710-2.
[4] Theorem 5.3.4, p. 47, A First Look at Rigorous Probability Theory, Jeffrey Seth Rosenthal, World Scientific, 2000, ISBN 9810243227.
[5] p. 88, Information Theory and the Central Limit Theorem, Oliver Thomas Johnson, Imperial College Press, 2004, ISBN 1860944736.
[6] pp. 61–62, Chance and Stability: Stable Distributions and Their Applications, Vladimir V. Uchaikin and V. M. Zolotarev, VSP, 1999, ISBN 9067643017.
[7] Theorem 1.1, p. 8, Limit Theorems for Functionals of Random Walks, A. N. Borodin, Il'dar Abdulovich Ibragimov, and V. N.
Sudakov, AMS Bookstore, 1995, ISBN 0821804383.
[8] Marasinghe, M., Meeker, W., Cook, D. & Shin, T.S. (August 1994), "Using graphics and simulation to teach statistical concepts", paper presented at the Annual Meeting of the American Statistical Association, Toronto, Canada.
[9] /socr/index.php/SOCR_EduMaterials_Activities_GCLT_Applications
[10] /socr/index.php/SOCR_EduMaterials_Activities_GeneralCentralLimitTheorem
[11] Pólya, George (1920), "Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem" [On the central limit theorem of the calculus of probability and the problem of moments] (http://www-gdz.sub.uni-goettingen.de/cgi-bin/digbib.cgi?PPN266833020_0008) (in German), Mathematische Zeitschrift 8: 171–181, doi:10.1007/BF01206525.
[12] (Le Cam 1986)
[13] Hald, Andreas, History of Mathematical Statistics from 1750 to 1930 (http://www.gbv.de/dms/goettingen/229762905.pdf), Ch. 17.
[14] Hans Fischer, "The Central Limit Theorem from Laplace to Cauchy: Changes in Stochastic Objectives and in Analytical Methods" (http://www.ku-eichstaett.de/Fakultaeten/MGF/Mathematik/Didmath/Didmath.Fischer/HF_sections/content/1850.pdf), in Fischer (2010).
[15] Hans Fischer, "The Central Limit Theorem in the Twenties" (http://www.ku-eichstaett.de/Fakultaeten/MGF/Mathematik/Didmath/Didmath.Fischer/HF_sections/content/twenties_main.pdf), in Fischer (2010).
[16] See Andrew Hodges (1983), Alan Turing: The Enigma, London: Burnett Books, pp. 87–88.
[17] Zabell, S.L. (2005), Symmetry and Its Discontents: Essays on the History of Inductive Probability, Cambridge University Press, ISBN 0521444705 (pp. 199 ff.).
[18] See Section 3 of John Aldrich, "England and Continental Probability in the Inter-War Years", Journal Electronique d'Histoire des Probabilités et de la Statistique, vol. 5/2, December 2009 (/decembre2009.html).

References

• S. Artstein, K. Ball, F. Barthe and A. Naor (2004), "Solution of Shannon's Problem on the Monotonicity of Entropy" (/jams/2004-17-04/S0894-0347-04-00459-X/home.html), Journal of the American Mathematical Society 17, 975–982. Also author's site (http://www.math.tau.ac.il/~shiri/publications.html).
• Barany, Imre; Vu, Van (2007), "Central limit theorems for Gaussian polytopes", The Annals of Probability (Institute of Mathematical Statistics) 35 (4): 1593–1621, doi:10.1214/009117906000000791. Also arXiv (/abs/math/0610192).
• Bernstein, S.N. (1945), "On the work of P.L. Chebyshev in probability theory", in Nauchnoe Nasledie P.L. Chebysheva. Vypusk Pervyi: Matematika [The Scientific Legacy of P.L. Chebyshev. First Part: Mathematics] (in Russian), edited by S.N. Bernstein, Academiya Nauk SSSR, Moscow–Leningrad, 174 pp.
• Billingsley, Patrick (1995), Probability and Measure (third ed.), John Wiley & Sons, ISBN 0-471-00710-2.
• Bradley, Richard (2007), Introduction to Strong Mixing Conditions (first ed.), Heber City, UT: Kendrick Press, ISBN 097404279X.
• Dinov, Ivo; Christou, Nicolas; Sanchez, Juana (2008), "Central Limit Theorem: New SOCR Applet and Demonstration Activity", Journal of Statistics Education (ASA) 16 (2). Also at ASA/JSE (http://www.amstat.org/publications/jse/v16n2/dinov.html).
• Durrett, Richard (1996), Probability: Theory and Examples (second ed.).
• Fischer, H. (2010), A History of the Central Limit Theorem: From Classical to Modern Probability Theory, Springer, ISBN 0387878564.
• Gaposhkin, V.F. (1966), "Lacunary series and independent functions", Russian Math. Surveys 21 (6): 1–82, doi:10.1070/RM1966v021n06ABEH001196.
• Klartag, Bo'az (2007), "A central limit theorem for convex sets", Inventiones Mathematicae 168, 91–131, doi:10.1007/s00222-006-0028-8. Also arXiv (/abs/math/0605014).
• Klartag, Bo'az (2008), "A Berry-Esseen type inequality for convex bodies with an unconditional basis", Probability Theory and Related Fields, doi:10.1007/s00440-008-0158-6. Also arXiv (/abs/0705.0832).
• Le Cam, Lucien (1986), "The central limit theorem around 1935" (/euclid.ss/1177013818), Statistical Science 1:1, 78–91.
• Meckes, Elizabeth (2008), "Linear functions on the classical matrix groups", Transactions of the American Mathematical Society 360: 5355–5366, doi:10.1090/S0002-9947-08-04444-9. Also arXiv (/abs/math/0509441).
• Rempala, G. and J. Wesolowski (2002), "Asymptotics of products of sums and U-statistics" (/~ejpecp/EcpVol7/paper5.pdf), Electronic Communications in Probability 7, 47–54.
• Rice, John (1995), Mathematical Statistics and Data Analysis (second ed.), Duxbury Press, ISBN 0-534-20934-3.
• Tijms, Henk (2004), Understanding Probability: Chance Rules in Everyday Life, Cambridge: Cambridge University Press, ISBN 0521540364.
• Zygmund, Antoni (1959), Trigonometric Series, Volume II, Cambridge (2003 combined volumes I, II: ISBN 0521890535).
Dielectric dispersion and ac conductivity in iron-particle-loaded polymer composites

G.C. Psarras a,*, E. Manolakaki b, G.M. Tsangaris b
a Department of Materials Science, School of Natural Sciences, University of Patras, Patras 26504, Greece
b Department of Materials Science and Engineering, School of Chemical Engineering, National Technical University of Athens, 9 Iroon Polytechniou St, Athens 15780, Greece

Received 23 October 2002; revised 30 July 2003; accepted 21 August 2003

1359-835X/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/j.compositesa.2003.08.002. Composites: Part A 34 (2003) 1187–1198.
* Corresponding author. E-mail address: G.C.Psarras@upatras.gr (G.C. Psarras).

Abstract

Polymer composites of an epoxy resin matrix with randomly dispersed iron micro-particles in various amounts were prepared, and their dielectric spectra were measured in the frequency range 5 Hz–13 MHz and in the temperature interval from ambient to 140 °C. The obtained data were analysed by means of the electric modulus formalism. An interfacial or Maxwell-Wagner-Sillars relaxation process was revealed in the frequency range and temperature interval of the measurements, which was found to follow the Cole-Davidson approach for the distribution of relaxation times. The examined systems exhibit strong dispersion with frequency. At low frequencies the ac conductivity tends to be constant, while at higher frequencies it becomes frequency dependent, varying as a power of frequency. Conductivity increases with temperature in the low frequency regime, remaining almost unaffected at higher frequencies. © 2003 Elsevier Ltd. All rights reserved.

Keywords: A. Particle-reinforcement; A. Polymer-matrix composites (PMCs); B. Electrical properties; B. Interface/interphase; Electric modulus

1. Introduction

The dispersion of an electrically conductive phase within an insulating host medium affects the overall performance of the heterogeneous system [1]. Furthermore, if the dispersed filler is in sufficient quantity, a conductive or semi-conductive composite is formed [2,3]. The interesting properties of such systems make them technologically important and competitive with other alternative materials, not only because of their electrical behaviour. Composite polymeric materials consisting of an epoxy matrix and conductive metal particles belong to this category of materials [4]. This type of material has been found to possess interesting properties, which are exploited in a variety of applications [4–7]. Some of the most common applications refer to electromagnetic interference (EMI) shielding, radio frequency interference (RFI) shielding and electrostatic dissipation (ESD) of charges. In addition, conductive polymer composites are used as electrically conductive adhesives and circuit components in microelectronics [4] and have been reported to possess anti-corrosive behaviour as coatings on metal plates [8].

Composite materials of an amorphous polymeric matrix and conductive metal particles are considered heterogeneous disordered systems [3,9,10]. Their electrical performance is directly related to the permittivities and conductivities of their constituents, the volume fraction of the filler, and the size and shape of its particles. Other important factors could be the adhesion between the host medium and the inclusions, the method of processing, and possible interactions between the conductive and non-conductive phases [1,11–14]. The majority of the systems examined in the literature are binary mixtures [3,12,15–18], and only a few works concern the electrical performance of hybrid composites [19–22]. In almost all cases, the concentration of the conductive phase is initially low and gradually increased, aiming to achieve a conductive or semi-conductive behaviour. The insulator-to-conductor transition is governed by the laws of percolation theory [23], and a critical volume fraction of the filler is necessary for the onset of electrical conduction. Percolation theory defines the insulator-to-conductor transition according to the
equation

σ ∝ (P − P_c)^a    (1)

where P_c is the critical concentration or percolation threshold, and a is the critical exponent [23,24].

Electrical relaxation phenomena are present in these composites, and their investigation is essential not only from the practical point of view, due to the potential applications, but also for the insight it can provide into molecular mobility, polarization and conductivity mechanisms [25–30]. One of the most characteristic features of electrical conduction in disordered solid systems is the dispersion of conductivity with frequency. In the low frequency regime, conductivity remains almost constant, while in the high frequency regime conductivity is frequency dependent, varying approximately as a power of frequency [10,31–33]. Different approaches, based on analytical functions of the constituents' properties [12,20,34], effective medium theory [35–37], percolation theory [2,3] and hopping models [31,38], have been used for the interpretation of the dielectric and conductivity performance of these disordered composites. Despite the work done so far, a complete understanding of the electrical behaviour of these materials is still lacking [9,10,31].

The present study continues previous work [19,33,39] on the ac response of polymer particulate composites, providing a more elaborate and detailed examination of the electrical relaxations and characteristics of these composites. The composite systems can be regarded as stochastic mixtures of the two phases and were prepared by randomly dispersing iron micro-particles in an epoxy resin. All reported data were recorded by means of Dielectric Spectroscopy (DS) and analysed using the formalism of electric modulus, with the conductive filler content, the frequency of the applied field, and the temperature as varying parameters. The 'electric modulus' formalism, first introduced by McCrum et al. [40] and intensively used for the investigation of electrical relaxation phenomena by Macedo and
Moynihan [41,42] in vitreous ionic conductors, is defined as the inverse of the complex permittivity by the following equation:

M* = 1/ε* = 1/(ε′ − jε″) = ε′/(ε′² + ε″²) + j ε″/(ε′² + ε″²) = M′ + jM″    (2)

where M′ is the real and M″ the imaginary part of the electric modulus, and ε′ the real and ε″ the imaginary part of the permittivity. Recently, it has been adopted for the investigation of electrical relaxation effects in polymers [27,43–46] and in composite polymeric systems [19,28,33]. The use of the electric modulus offers some advantages in interpreting bulk relaxation processes since, by its definition, variations in the large values of permittivity and conductivity at low frequencies are minimized. Thus, common difficulties like electrode nature and contact, space charge injection phenomena, and absorbed impurity conduction effects, which tend to obscure relaxation in the permittivity presentation, can be resolved or even ignored [43]. Finally, the ac conductivity of the systems was examined and discussed with respect to the parameters mentioned above.

2. Experimental

A commercially available bisphenol-type epoxy resin (D.E.R. 321, Dow Chemicals) was used as a prepolymer, with epoxy equivalent 182–192 and viscosity 5–7 P at 25 °C. As curing agent, a cycloaliphatic amine (Chemmammine CA7, Henkel) at 30 phr (parts per hundred) was employed. While the above systems were still in the liquid state, various amounts of metal powder were added for the production of the composite samples. Iron powder (Ferak Art. 21930) was used after drying at 100 °C for 48 h, without any treatment or surface modification. The particles had the shape of a spheroid with sizes of the order of microns. The distribution of grain sizes, assessed by employing an optical microscope, an imaging camera and suitable software, varied between 0.3 and 6 μm (Fig. 1). The iron particle content of the produced specimens varied from 0 to 60 phr, or from 0 to 17.0% in volume fraction. A detailed description of the preparation procedure can be found elsewhere [47]. Dielectric measurements
were performed by means of a Video Bridge T-2100 (Electro-Scientific Industries Inc.) in the frequency range 10 Hz–3 kHz, and by an Impedance Analyser LF4192A (Hewlett-Packard) in the frequency range 3 kHz–13 MHz. Both instruments were interfaced to a PC for simultaneous control and automated data acquisition. The test cell was a three-terminal guarded system constructed according to the ASTM D150-92 specifications [47]. Temperature was varied from ambient to 140 °C.

3. Results

Distributions of the geometrical characteristics, namely the aspect ratio and the mean diameter, as well as a micro-photograph of the iron particles before being embedded in the polymer matrix, are depicted in Fig. 1. As can be observed for the iron micro-particles, the distribution of the aspect ratio is approximately symmetric. The aspect ratio of the employed particles remaining less than one over the whole range of sizes reflects the shape of the inclusions, since it is known that for oblate spheroids the aspect ratio is less than 1, for spheres it is equal to 1, and for prolate spheroids it is greater than 1 [19,20,48]. On the other hand, the distribution of the mean diameter of the particles appears to be far from symmetric, indicating a non-normal distribution of the size of the inclusions.

The recorded dielectric data were first expressed in terms of the real and imaginary parts of the permittivity and then transformed, via Eq. (2), to the electric modulus formalism. Arguments for the resulting benefits of the electric modulus presentation have been exhibited and discussed elsewhere [19,33]. The real part of the electric modulus (M′) versus frequency of the composites with 10, 30 and 50 phr in iron particles (corresponding to 1.5, 5.7 and 11.0% in volume fraction) is shown in Fig. 2(a)–(c) at temperatures varying from 35 to 120 °C. As can be seen, (M′) decreases with iron content and temperature, as a result of the increase in the real part of the permittivity; an analogous behaviour has been reported for
other particulate composites with conductive inclusions [19,33,39].

Fig. 1. Iron particle size distribution: (a) aspect ratio, (b) mean diameter and (c) micro-photograph of the inclusions prior to composite fabrication, magnification 100×.

Fig. 2. The real part M′ of the electric modulus versus log f of the composites with (a) 10, (b) 30, (c) 50 phr in iron content, at temperatures from 35 to 120 °C.

The existence of a step-wise transition from low values to high ones is evident for all the examined specimens at temperatures higher than 50 °C. This transition implies a relaxation process and should be accompanied by a loss peak in the diagrams of the imaginary part of the electric modulus (M″) versus frequency. Plotting the imaginary part (M″) of the electric modulus with respect to frequency results in the clear formation of the corresponding relaxation peaks, which are depicted in Fig. 3(a)–(c). Relaxation peaks move towards higher frequencies as temperature increases and, at the same time, their maximum diminishes as the filler content increases at constant temperature, Figs. 3(a)–(c) and 4(a) and (b).

Fig. 3. The imaginary part M″ of the electric modulus versus log f of the composites with (a) 10, (b) 30, (c) 50 phr in iron content, at temperatures from 35 to 120 °C.

Fig. 4. The imaginary part M″ of the electric modulus versus log f at two temperatures, (a) 100 °C and (b) 120 °C, of the composites with 10, 30 and 50 phr in iron content.

4. Discussion

It is well known and documented [15,47–50] that dielectric relaxations are present in polymer matrix composites. Polymer systems are essentially insulating materials and thus can be polarized as a response to an applied electric field. The occurring dielectric relaxations are the consequence of the efforts of different polar segments of the polymer system to follow the direction imposed by the applied alternating field. Near the glass transition temperature the segmental mobility of the polymer is enhanced and the orientation of large parts of the molecular chains is facilitated. At lower temperatures, relaxations occur due to the orientation of smaller polar groups such as side groups or local segments of chains [15,26,28].

However, the presence of conductive inclusions critically changes the situation. The concentration of the filler influences polarization, as well as dc and ac conduction, inside the composite system. In metal-polymer composites, the existence of interfaces gives rise to interfacial polarization, or the Maxwell-Wagner-Sillars (MWS) effect [1,20,33,47,51,52]. This phenomenon appears in heterogeneous media due to the accumulation of charges at the interfaces and the formation of large dipoles on metal particles or clusters. Analysis of the process shows a dipolar relaxation [53–55], where the permittivity components are frequency dependent [48,56].
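As an illustration of such a frequency-dependent dipolar process, the following sketch assumes a single Debye-type relaxation with purely illustrative parameter values (eps_s, eps_inf and tau are assumptions, not values from the measurements) and applies the electric-modulus transform of Eq. (2):

```python
import numpy as np

# Illustrative (assumed) parameters for a single Debye-type dipolar relaxation:
# static permittivity, high-frequency permittivity, relaxation time (s).
eps_s, eps_inf, tau = 12.0, 4.0, 1.0e-4

f = np.logspace(1, 7, 200)        # frequency sweep, Hz
w = 2 * np.pi * f                 # angular frequency

# Complex permittivity eps* = eps' - j*eps'' for a Debye process.
eps = eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau)
eps1, eps2 = eps.real, -eps.imag  # eps' and eps'' (both positive)

# Electric modulus, Eq. (2): M* = 1/eps* = M' + j*M''.
M = 1 / eps
M1, M2 = M.real, M.imag

# M' steps from ~1/eps_s at low frequency to ~1/eps_inf at high frequency,
# while M'' shows the corresponding loss peak in between.
print(M1[0], M1[-1], f[np.argmax(M2)])
```

Note the step-wise rise of M′ with frequency and the M″ loss peak, the same qualitative features read off Figs. 2 and 3 for the measured composites.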
Interfacial relaxation depends on the conductivity and permittivity of the constituents of the composite material, and in polymer composites it occurs in the low frequency region, due to the inertia of the formed dipoles, being the slowest of all the appearing dielectric processes (α-relaxation, β-relaxation, etc.) [1,48]. Interfacial polarization results, in the permittivity mode, in high values of dielectric permittivity and loss which decrease rapidly with frequency [19,33,39]. In the case of the composites studied in the present work, and by employing the electric modulus presentation, a relaxation behaviour is clearly depicted in Figs. 2 and 3, which is attributed to the MWS effect. The frequency-temperature superposition is evident, since the relaxation peaks in (M″) move towards higher frequencies as temperature increases (Fig. 3(a)–(c)) while, at constant temperature, their maximum diminishes as the filler content increases (Fig. 4(a) and (b)). This behaviour implies the MWS type of relaxation and is consistent with theory [1] and previous experimental results on similar systems [18,19,33,39,47,52]. Using the normalized variables (M″/M″_m) and (log f/f_m) for the composites with 10, 30 and 50 phr in iron content (Fig. 5(a)–(c)), it can be seen that the loss process retains a constant form over the range of the examined temperatures.
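The normalization just described, reduced variables M″/M″_m and f/f_m, can be sketched as follows; the relaxation times standing in for different temperatures are assumed for illustration, and the peaks are modelled as single-relaxation (Debye-like) losses rather than the measured spectra:

```python
import numpy as np

def loss_peak(f, tau):
    # Imaginary part of a normalized single-relaxation (Debye-like) process.
    wt = 2 * np.pi * f * tau
    return wt / (1 + wt ** 2)

f = np.logspace(0, 8, 400)      # frequency grid, Hz
taus = [1e-3, 1e-4, 1e-5]       # assumed relaxation times ("temperatures")

reduced = []
for tau in taus:
    m2 = loss_peak(f, tau)
    i = np.argmax(m2)           # peak position f_m and height M''_m
    reduced.append((f / f[i], m2 / m2[i]))

# For a process whose shape does not change with temperature, all the
# reduced curves collapse onto a single master curve.
for x, y in reduced:
    print(np.interp(10.0, x, y))   # approximately the same for every curve
```

A temperature-independent shape of the reduced curves is exactly what the collapse in Fig. 5 indicates for the measured loss process.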
Here (M″_m) is the maximum value of the imaginary part of the electric modulus and (f_m) is the frequency where this maximum appears in each case. The asymmetric shape of the plots is strong evidence that the dielectric relaxation process deviates from pure Debye behaviour, and that a non-symmetric distribution of relaxation times exists. Aiming to analyse the obtained experimental results further, the data were fitted according to the Cole-Davidson approach, which anticipates an asymmetric distribution of relaxation times. In the electric modulus formalism the Cole-Davidson equations have the following form [19]:

M′ = M∞ M_s [M_s + (M∞ − M_s)(cos φ)^g cos gφ] / {M_s² + (M∞ − M_s)(cos φ)^g [2M_s cos gφ + (M∞ − M_s)(cos φ)^g]}    (3)

where 0 < g ≤ 1, tan φ = ωτ, and

ω_max τ = tan[π/(2(g + 1))]    (5)

Fig. 5. Normalized plots of M″/M″_m versus log f/f_m of the composites with (a) 10, (b) 30, (c) 50 phr in iron content, at temperatures from 80 to 120 °C.

In Fig. 6 the produced fitting curves are compared with experimental data from specimens with varying amounts of iron particles, at 100 °C. The suppressed semicircles in the Cole-Cole diagram correspond to the relaxation processes occurring in each of the specimens. It is worth noting that at low frequencies both the data and the fitted curves exhibit a common origin, almost identical with the origin of the graph. The coincidence of the beginning of the semicircles with the origin of the graph is a clear indication that no other relaxation process is present at lower frequencies in the studied systems. On the other hand, the variation of the semicircle radii reflects the influence of the filler content. At the high frequency end, experimental points deviate from the produced curves, since another type of relaxation process starts developing. The same behaviour has been found at every examined temperature. The parameters evaluated from the fitting procedure are listed in Tables 1–3. The exponent g is a measure of the width of the distribution of
relaxation times, and the value g = 1 corresponds to a single-relaxation-time process, i.e. a pure Debye-type relaxation. The obtained values of the parameter g are all higher than 0.7, indicating a rather narrow distribution of relaxation times. The non-symmetrical distribution of relaxation times is in accordance not only with the normalized plots of loss versus frequency (Fig. 5(a)–(c)), but also with the distribution of particle sizes (Fig. 1(b)). Furthermore, the limited range of values of the exponent g could be related to the shape of the inclusions and their aspect ratio distribution, Fig. 1(a).

In Fig. 7, the dependence of the relaxation time upon the reciprocal of temperature is presented for the examined systems (values are also listed in Tables 1–4). Relaxation times diminish with increasing temperature for all systems, as the dissipated thermal energy assists the formed dipoles in following the motion of the alternating electric field. Furthermore, enhancement of the iron particle volume fraction results in an increase of the relaxation times, since the loss peak position, in the permittivity mode, shifts to lower frequencies [39,52]. This general trend is supported by the values listed in Tables 1–4 (specimens with Fe content varying from 10 to 60 phr, or from 1.5 to 17.0% in volume fraction), with one exception: the transition from the 30 to the 50 phr Fe-loaded composite. The evaluated relaxation times for the resin-50 phr Fe system are lower than those of the resin-30 phr Fe system; this inconsistency could be related to a volume fraction close to the percolation threshold. However, it is difficult at this stage to assert the above-mentioned point, since further studies, including dc measurements and determination of the percolation threshold, are needed.

Fig. 6. Cole-Cole plots of the composites with 10, 30, 40 and 50 phr in iron content, at 100 °C. The solid curves are produced by best-fitting the experimental points to the Cole-Davidson
approach.

The corresponding Cole-Davidson expression for the imaginary part reads

M″ = M∞ M_s (M∞ − M_s)(cos φ)^g sin gφ / {M_s² + (M∞ − M_s)(cos φ)^g [2M_s cos gφ + (M∞ − M_s)(cos φ)^g]}    (4)

As can be observed in Fig. 7, the examined systems exhibit Arrhenius-type behaviour according to the equation

τ_M = τ_0 exp(E_M/kT)    (6)

where E_M is the activation energy of the relaxation process, k the Boltzmann constant and T the temperature. Values of the activation energies, calculated via linear regression employing Eq. (6), are shown in Tables 1–4. The obtained values are close to others reported previously for similar systems [19,39] and exhibit small scatter. The activation energy reflects the microstructure of the composites, being a function of the mean radius of the metal islands and the mean interparticle separation [25]. Thus, the formation of metal clusters inside the specimens and the reduction of the interparticle separation with increasing volume fraction of the conductive phase is critical, and could be considered responsible for the lower values obtained in the case of the 50 and 60 phr Fe-loaded resin specimens.

Table 1. Parameters evaluated by fitting data according to the Cole-Davidson approach (Eqs. (3)–(5)) for the composite with 10 phr content, or 1.5% volume fraction, in Fe:
θ (°C) | M_s | M∞ | τ_M (s) | g | E_M (eV)
80 | 0.004 | 0.242 | 1.45×10⁻³ | 0.828 | 0.978
90 | 0.004 | 0.247 | 4.58×10⁻⁴ | 0.831 |
100 | 0.005 | 0.250 | 1.75×10⁻⁴ | 0.835 |
120 | 0.004 | 0.260 | 5.32×10⁻⁵ | 0.867 |

Table 2. As Table 1, for the composite with 30 phr content, or 5.7% volume fraction, in Fe:
θ (°C) | M_s | M∞ | τ_M (s) | g | E_M (eV)
80 | 0.010 | 0.178 | 1.71×10⁻³ | 0.837 | 1.049
90 | 0.005 | 0.182 | 7.06×10⁻⁴ | 0.823 |
100 | 0.005 | 0.184 | 2.22×10⁻⁴ | 0.853 |
120 | 0.008 | 0.192 | 5.35×10⁻⁵ | 0.870 |

Table 3. As Table 1, for the composite with 50 phr content, or 11.0% volume fraction, in Fe:
θ (°C) | M_s | M∞ | τ_M (s) | g | E_M (eV)
80 | 0.005 | 0.135 | 1.70×10⁻³ | 0.832 | 0.864
90 | 0.005 | 0.138 | 4.78×10⁻⁴ | 0.832 |
100 | 0.005 | 0.140 | 2.04×10⁻⁴ | 0.832 |
120 | 0.008 | 0.145 | 6.07×10⁻⁵ | 0.859 |

The ac conductivity of all samples
has been calculated from the dielectric losses according to the relations

\sigma_p^*(\omega) = j\varepsilon_0\omega\,\varepsilon_p^*(\omega) = j\varepsilon_0\omega(\varepsilon' - j\varepsilon'') = \varepsilon_0\omega\varepsilon'' + j\varepsilon_0\omega\varepsilon'    (7)

The real part of σ*_p(ω) is given by

\sigma_{ac} = \varepsilon_0\omega\varepsilon''    (8)

where ε₀ = 8.85×10⁻¹² F m⁻¹ is the permittivity of free space and ω = 2πf the angular frequency.

Fig. 7. Arrhenius plot of relaxation times versus 1/T, with the respective linear fits, for the composites with 10, 30 and 50 phr in Iron content.

Table 4. Parameters evaluated by fitting data according to the Cole-Davidson approach (Eqs. (3)–(5)) for the composite with 60 phr content, or 17.0% volume fraction, in Fe

θ (°C)    M_s      M_∞      τ_M (s)       g        E_M (eV)
80        0.023    0.146    1.31×10⁻³     0.713    0.725
90        0.023    0.152    4.69×10⁻⁴     0.713
100       0.020    0.152    1.75×10⁻⁴     0.750
120       0.023    0.152    1.14×10⁻⁴     0.770

Fig. 8. The ac conductivity (σ_ac) versus log f of the composites with (a) 10, (b) 30, (c) 50 phr in Iron content, at various temperatures: 35, 50, 70, 80, 90, 100 and 120 °C. Insets are magnifications of the same plots, depicting more clearly the influence of temperature.

Fig. 8(a)–(c) depict the dependence of ac conductivity on the frequency of the applied field at various temperatures and Iron contents. At low frequencies, a frequency-independent conductivity is recorded, which is attributed to resistive conduction through the bulk composite. On the other hand, at high frequencies, conductivity appears to be proportional to frequency, owing to the capacitance of the host medium between the conducting particles or clusters. The insets in Fig. 8(a)–(c) provide a more detailed image of the influence of temperature on the transition of conductivity from the low- to the high-frequency region. The log σ(ω) versus log f plots given in Fig. 9(a)–(c), for the composites with 10, 30 and 50 phr in Iron content, help to reveal further the ac behaviour of the systems. It is evident that ac conductivity is both frequency and temperature dependent and increases, by up to five orders of magnitude, with
increasing frequency and temperature. However, the influence of temperature is more pronounced in the low-frequency range, while at the high-frequency edge the values of σ_ac(ω) lie close together. At low frequencies, where the applied electric field forces the charge carriers to drift over large distances, the conductivity tends to retain almost constant values as temperature is increased. When the frequency is raised, the mean displacement of the charge carriers is reduced, and the real part of the conductivity, above a certain critical frequency f_c, follows the law σ_ac(ω) ∝ ω^s with 0 ≤ s ≤ 1, characterizing hopping conduction [10,57,58]. The critical frequency f_c has been found to depend on temperature and conductive-filler volume fraction [21,24,31,33]. In general, at constant temperature, the ac conductivity can be expressed as [57,59]

\sigma(\omega) = \sigma_{dc} + A\omega^s    (9)

where σ_dc is the ω → 0 limiting value of σ(ω), and A and s are parameters depending on temperature and filler content [24,30,60]. Eq. (9) is often called 'the ac universality law', since it has been found to describe satisfactorily the ac response of numerous different types of materials which can be classified as disordered solids [10,31,59]. The frequency–temperature superposition helps to reveal three distinct regions in the curves of Fig. 9(a)–(c): the exponential part above f_c, the tendency towards constant values at low frequency, and the intermediate dispersion, which is more evident at higher temperatures. Eq. (9) satisfactorily describes the experimental data in the low- and high-frequency regions, deviating at intermediate frequencies where the relaxation processes are recorded. This is not unexpected, since the experimentally determined ac conductivity includes both conduction and polarization processes. Fig. 10 shows the evaluated ac conductivity of the examined specimens, at different frequencies, as a function of the reciprocal temperature. A temperature-independent tendency can be envisaged for higher
frequencies, which, as frequency decreases, converts to a strong dependence on temperature. In addition, at higher temperatures, the conductivity values converge. The resulting form of the σ_ac = f(1/T) plots is a first-hand indication that the observed conductivity does not correspond to a single thermally activated process and cannot be described by a simple exponential relationship. From this point of view, it can be concluded that the ac-conductivity activation energy depends on frequency and temperature, and it is reasonable to assume that a range of activation energies is involved. One of the most interesting models available in the literature for ac conduction in disordered solids is the random free-energy barrier model (also referred to as the symmetric hopping model) proposed by Dyre [10,31].

Fig. 9. The ac conductivity (log σ_ac) versus log f of the composites with (a) 10, (b) 30, (c) 50 phr in Iron content, at various temperatures: 35, 50, 70, 80, 90, 100 and 120 °C.

This model is based on the observation that dc conductivity is thermally activated, σ_dc ∝ exp(−ΔE_dc/kT), whereas ac conductivity is less temperature dependent. The latter suggests that ac conduction is dominated by processes with activation energies smaller than ΔE_dc. By employing a continuous-time random-walk approximation and assuming a distribution of energy barriers over which only jumps to nearest-neighbour sites, with equal probability, are allowed, Dyre [31] derived the following equation for the ac conductivity of disordered solids:

\sigma_{ac}(\omega) = \sigma_{dc}\,\frac{j\omega\tau}{\ln(1 + j\omega\tau)}    (10)

where σ_dc, ω and τ are the direct-current conductivity, the angular frequency and the relaxation time, respectively. The random free-energy barrier model has been found to agree with experimental data for a large number of disordered solids [10,31]. In Fig. 11, Eq. (10) is used in a predictive way for the composite with 10 phr Iron content. As it can
be seen, it qualitatively follows the dispersion of ac conductivity: at the low-frequency edge it reveals the tendency of the conductivity to approach a constant value, while in the high-frequency region it verifies the exponential law of conductivity. In the low- and intermediate-frequency regime, and at high temperatures, the produced curves deviate from the experimental data, being unable to describe the recorded relaxation and indicating that, in the vicinity of the relaxation peaks, the power law is not applicable [24,33,61–63]. Alternating-current conductivity sums all dissipative effects, including the actual ohmic conductivity caused by migrating charge carriers as well as frequency-dependent dielectric dispersion [64]. Any solid consisting of phases with different conductivities has an overall conductivity which increases with frequency [53,65]. This is because at high frequencies localized charge-carrier motion makes it possible to take maximum advantage of the conducting regions, while at lower frequencies charge transport must extend over longer distances and is limited by the presence of isolated conducting regions. In Fig. 11, three distinct regions of ac conductivity are marked. At the low-frequency edge (region I, Fig. 11) conductivity corresponds to the dc

Fig. 10. The ac conductivity (log σ_ac) versus 1/T of the composites with (a) 10, (b) 30, (c) 50 phr in Iron content, at various frequencies: 10², 10³, 10⁴, 10⁵ and 10⁶ Hz. Dashed lines are a visual aid.

Fig. 11. The ac conductivity (log σ_ac) versus log f of the composite with 10 phr in Iron content, at 80, 100 and 120 °C. Dashed lines are produced by the random free-energy barrier model; the dc-conductivity values used were measured as 1.7×10⁻⁸, 8.5×10⁻⁸ and 2.2×10⁻⁷ (Ω m)⁻¹ at the respective temperatures.
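The expressions collected in this section lend themselves to a short numerical sketch: the Cole-Davidson modulus loss of Eq. (4), the Arrhenius extraction of E_M via Eq. (6), the loss-to-conductivity conversion of Eq. (8), the universality law of Eq. (9) and Dyre's model of Eq. (10). The sketch below is illustrative only, not the authors' code: it assumes φ = arctan(ωτ_M) for Eq. (4) (Eqs. (3) and (5) lie outside this excerpt), and the values of σ_dc, A and s used for Eq. (9) are invented for demonstration; the Table 1 and Table 2 parameters and the 80 °C dc conductivity quoted for Fig. 11 are taken from the text.

```python
import numpy as np

KB_EV = 8.617e-5   # Boltzmann constant, eV/K
EPS0 = 8.85e-12    # vacuum permittivity, F/m (value used in the text)

# --- Eq. (4): imaginary part of the electric modulus, Cole-Davidson form ---
def modulus_loss(f, Ms, Minf, tau, g):
    """M''(f), assuming phi = arctan(omega * tau_M)."""
    phi = np.arctan(2 * np.pi * f * tau)
    c = np.cos(phi) ** g
    num = Minf * Ms * (Minf - Ms) * c * np.sin(g * phi)
    den = Ms**2 + (Minf - Ms) * c * (2 * Ms * np.cos(g * phi) + (Minf - Ms) * c)
    return num / den

f = np.logspace(-1, 7, 2000)                               # Hz
mpp = modulus_loss(f, Ms=0.010, Minf=0.178, tau=1.71e-3, g=0.837)  # Table 2, 80 C
f_peak = f[np.argmax(mpp)]                                 # modulus-loss peak frequency

# --- Eq. (6): Arrhenius fit of the Table 1 relaxation times -> activation energy ---
T = np.array([80.0, 90.0, 100.0, 120.0]) + 273.15          # K
tau_M = np.array([1.45e-3, 4.58e-4, 1.75e-4, 5.32e-5])     # s (Table 1, 10 phr)
slope, _ = np.polyfit(1.0 / T, np.log(tau_M), 1)           # slope = E_M / k
E_M = slope * KB_EV                                        # eV; Table 1 quotes 0.978 eV

# --- Eq. (8): ac conductivity from the dielectric loss ---
def sigma_ac(f_hz, eps_loss):
    return EPS0 * 2 * np.pi * f_hz * eps_loss              # S/m

# --- Eq. (9): 'ac universality law'; slope in log-log space recovers s ---
omega = np.logspace(2, 7, 50)                              # rad/s
sigma = 1e-8 + 1e-12 * omega**0.8                          # assumed sigma_dc, A, s
s_est, _ = np.polyfit(np.log10(omega), np.log10(sigma - 1e-8), 1)

# --- Eq. (10): Dyre's random free-energy barrier model ---
def dyre(omega, sigma_dc, tau):
    x = 1j * omega * tau
    return (sigma_dc * x / np.log(1 + x)).real

w = np.logspace(-2, 6, 100)                                # rad/s
sig = dyre(w, sigma_dc=1.7e-8, tau=1e-3)   # sigma_dc: measured 80 C value of Fig. 11
```

With the Table 1 relaxation times, the regression should recover an activation energy near the quoted 0.978 eV, and the Dyre curve plateaus at σ_dc before rising quasi-linearly above ω ≈ 1/τ, in line with the regions marked in Fig. 11.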