Reconstruction of a Deceleration Parameter from the Latest Type Ia Supernovae Gold Dataset
Modelling of Fractal Coding Schemes
Bernd Hürtgen and Frank Müller
Institut für Elektrische Nachrichtentechnik, RWTH (Rheinisch-Westfälische Technische Hochschule) Aachen, 52056 Aachen, Germany
Phone: +49-241-807677 (Hürtgen), +49-241-807681 (Müller), Fax: +49-241-8888196
Email: huertgen@ient.rwth-aachen.de, mueller@ient.rwth-aachen.de

Abstract - Fractal techniques applied in the area of signal coding suffer from the lack of an analytical description, especially where the question of convergence at the decoder is addressed. Answering that question exactly requires an expensive eigenvalue calculation of the transformation matrix during the encoding process, which is in general computationally infeasible. In this paper the conditional equation for the eigenvalues is determined for a rather general coding scheme. This allows the probability density function of the largest eigenvalue, which in turn determines whether or not the reconstruction converges, to be formulated. These results are not only important for evaluating the convergence properties of various coding schemes, but are also valuable for optimizing their encoding parameters.

1. Introduction
Fractal techniques have been known for several years, especially in three distinct fields of application, namely segmentation, signal modelling and coding. Our attention is focused on signal coding, to which Barnsley [1, 2] originally contributed the major ideas. The lack of a practical algorithm suitable for automatic encoding and modelling of gray-level images at common compression ratios was filled by Jacquin [3], who proposed a block-based implementation. A recent review of fractal coding schemes may be found in [4], and an excellent mathematical foundation based on the theory of finite-dimensional vector spaces in [5]. Numerous supplements and improvements of Jacquin's scheme have been reported. The objective of this paper is to provide a method for judging the different proposals and for estimating the influence of distinct design parameters on the convergence of the decoding process. For this purpose a probabilistic approach based upon the transformation matrix is introduced.
As derived below, a necessary and sufficient condition for a convergent decoding process is that all eigenvalues of the transformation matrix lie within the unit circle. Since calculating the eigenvalues is expensive, it is in general infeasible to do so during the encoding process. Nevertheless, quantifying the probability of divergence, given the type of coding scheme and its pertinent design parameters, is desirable.
In this way one can quantify the probability of divergent transformations without actually determining the eigenvalues. This is done by regarding the eigenvalues as random variables and modelling their probability density function (pdf), which depends upon the coding scheme used and its design parameters.
The paper is organized as follows. The mathematical background of fractal coding is introduced in section 2. Section 3 is concerned with the distribution of the eigenvalues and is divided into three parts: while sections 3.1 and 3.2 treat two special cases for which a rather simple analytical solution of the characteristic equation is possible, section 3.3 deals with a more general case for which no analytical description has been found yet. The paper concludes with a short summary.

2. Theory
The basic coding principle emerges from a blockwise defined affine transformation which composes a signal from parts of itself, in order to exploit natural self-similarities as a special form of redundancy for compression purposes. Fractal coding schemes can thus be viewed as a form of vector quantization with a signal-dependent codebook.
Let x be a signal vector consisting of N samples. This signal is segmented into non-overlapping blocks of equal size. For each of these blocks one codebook entry is selected which, after scaling with a coefficient and the addition of an offset, minimizes some predefined distortion measure (1) for all blocks. The codebook is generated from the signal itself by means of a codebook construction matrix which is mainly determined by the type of coding scheme. Denoting the 'fetch' operation of a codebook entry from the codebook and the 'put' operation of the modified codebook entry into the approximation by suitable matrices, the mapping process for the entire signal may be formulated as

T(x) = A x + b.   (2)

This represents an affine transformation consisting of a linear part A and a non-linear offset b, which together form the fractal code of the original signal. A very simple coding scheme may be constructed if the mapping is contractive, or at least eventually contractive, i.e. if there exists a number s < 1 such that for some iterate n the contractivity condition

‖T^n(x) − T^n(y)‖ ≤ s ‖x − y‖   (3)

holds. Then the decoder can construct a unique fixed point from the fractal code without any knowledge of the codebook.
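The role of the spectral radius of the linear part A can be illustrated with a small numerical sketch. The matrix below is only a random surrogate for the block-wise fetch/scale/put structure described above (two source samples per row, uniformly distributed scaling coefficients); it is not the transformation matrix of any particular published scheme, and all sizes are toy values.

```python
import numpy as np

# Minimal sketch of decoding as the fixed-point iteration x_{k+1} = A x_k + b.
rng = np.random.default_rng(0)
N = 64                                   # number of signal samples (toy size)
A = np.zeros((N, N))
for row in range(N):
    src = rng.choice(N, size=2, replace=False)       # two "codebook" source samples
    A[row, src] = rng.uniform(-1.2, 1.2, size=2) / 2.0   # scaling coefficients
b = rng.uniform(0.0, 1.0, size=N)        # offsets, the non-linear part of the fractal code

rho = max(abs(np.linalg.eigvals(A)))     # spectral radius = largest |eigenvalue|
print("spectral radius:", rho)

x = rng.uniform(0.0, 1.0, size=N)        # arbitrary initial signal at the decoder
for _ in range(200):
    x = A @ x + b                        # decoder iteration

if rho < 1.0:                            # iteration converges iff rho < 1
    x_fix = np.linalg.solve(np.eye(N) - A, b)        # exact fixed point for comparison
    print("distance to fixed point:", np.linalg.norm(x - x_fix))
else:
    print("not (eventually) contractive; the reconstruction diverges")
```

Because the scaling coefficients are allowed to exceed one in magnitude, some realizations of A have spectral radius above one; this is exactly the divergence event whose probability the paper models.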
This is guaranteed by Banach’sfixed point theorem which states that the sequence of iterates with converges for any arbitrary initial signal to a uniquefixed point of.If contractivity is assumed,the collage theorem (e.g.[5,6])ensures that thefixed point itself is close to the original signal,since(4)As can be seen from(4)the collage theorem also motivates the mapping process at the encoder which minimizes the approxima-tion error.Because not the original signal itself,but a fixed point,which is close to the original signal,is encoded, fractal coding schemes sometimes are also termed attractor cod-ing.A coding gain in this way can be achieved,if the fractal code,which serves as representation of thefixed point in the fractal domain,can be expressed with fewer bits than the original signal itself.For affine transformations can be shown that a necessary and sufficient condition for contractivity is that the spectral radius of the linear part,which is the largest eigenvalue in magnitude,is smaller than one[5,7].One can show that this demand is equivalent with the statement of eventual contractivity.Control about the convergence of can therefore be ob-tained by determining the eigenvalues of the linear part which is in general a very difficult task and analytical solutions are given only for some rather simple coding schemes,e.g.[7,8,5].Con-tractivity is always ensured,if the magnitude of all scaling coeffi-cients is strictly smaller than one.As reported by several au-thors,e.g.[9,10],a less stringent restriction for the improves reconstruction quality and convergence speed.On the other hand contractivity of the transformation is no longer guaranteed. Our investigations have shown,that the scaling coefficients may be regarded as statistically independent and inuniformly distributed random variables.Since the eigenvalues are solely determined by the scaling coefficients and the structure of the linear part they are also random variables.The following section illustrates by means of three distinct applications how the probability density function(pdf)of the eigenvalues for various choices of the design parameters can be derived.Whereas for thefirst two schemes,described in topic3.1and3.2,an analyt-ical solution of the characteristic equation is given,for the later and more general scheme the pdf is approxi-mated.The pdf of the eigenvalues determines the probability for divergent transformations and so the influence of various design parameters on the contractivity can be quantified.3.ApplicationsAs mentioned above,contractivity is determined by the largest eigenvalue in magnitude of the transformation matrix.Due to its huge dimension straightforward determination by solving the characteristic equation is infeasible.In-stead the specific structure of the matrix must be considered in order tofind an exact and quick solution.For a rather general coding approach this is done in[8],so this paper only concisely summarizes the results.We emerge from a coding approach published in[11].The basic idea is tofind for consecutive blocks within the sig-nal each consisting of samples another block of size samples,so that after some sort of geo-metric transformation,scaling and an additional offset the distor-tion measure(1)after the mapping processbecomes as small as possible.As shown in[8]the largest eigen-value of the transformation matrix for the entire signal is then determined by(5)The index denotes one of mapping cycles.Each of these cycles can be treated independently from all others.This is due to the 
fact that the part of the signal belonging to one cycle is not used as a codebook entry by any other cycle, and vice versa. The cycle length equals the number of codebook entries involved in that cycle.
For simplification this paper considers two important special cases from the literature. The first one, published in [12], treats each block of the signal independently of all others; therefore the length of the mapping cycles equals one for all mappings. We call these non-concatenated coding schemes. The second one does not geometrically scale the codebook entries, or does so only by subsampling; one can show that this results in the corresponding design parameter being equal to one. Such schemes have been thoroughly investigated in [5] and are called decimating coding schemes.

3.1 Non-concatenated coding scheme
The coding schemes described in this section are characterized by cycles of length one. Then eq. (5) can be simplified, so that the eigenvalue of each cycle is determined by (6). As assumed above, the scaling coefficients are uniformly distributed. If they are also statistically independent, the pdf of the eigenvalues equals the m-fold convolution of the pdf of the scaling parameter. Introducing the rect-function (7), the pdf of the scaling coefficient can be written as a scaled rect-function (8). The m-fold convolution of this pdf with itself then yields the pdf of the eigenvalues (9).
[Figure 1: Probability density function of the eigenvalues for some common values of the design parameters.]
The mapping results in a divergent reconstruction if and only if the largest eigenvalue of the linear part lies outside the unit circle. Since the pdf is an even function, the probability of divergence can easily be determined by integrating it outside the unit interval (10). Figure 2 shows this probability for some different parameters.
[Figure 2: Probability of divergence for some different parameters.]

3.2 Decimating coding scheme
For these schemes the eigenvalues of a mapping cycle are given by (12). Since the probability of divergence equals the probability that the largest eigenvalue exceeds one in magnitude, it can be obtained by integrating the pdf, so that finally (13) results. As can be seen from figure 3, long mapping cycles are advantageous for a convergent reconstruction process.
Summarizing the results of sections 3.1 and 3.2, one can state that a fractal coding scheme which is optimized with respect to the contractivity of the transformation should combine geometrical scaling of the codebook entries with long mapping cycles. A typical representative of such a scheme is described in the following section.

3.3 General coding scheme
Jacquin's original proposal for the encoding of gray-level images is the most general one as far as the possible variations of the transformation matrix are concerned. To the knowledge of the authors no exact formulation of its eigenvalues has been found yet, only an upper bound derived from the Euclidean operator norm [13]. An analytical derivation of the pdf of the eigenvalues, as performed in the previous two sections, is therefore not possible. Instead the pdf of the largest eigenvalue is approximated. This has been done by repeatedly generating the transformation matrix and evaluating its largest eigenvalue. For a large number N of experiments the probability of divergence is approximated by

Prob(divergence) ≈ N_large / N,   (14)

with N_large denoting the number of experiments in which the magnitude of the largest eigenvalue exceeds one. Results of our computer simulations are depicted in figure 4. One can see that a simple decimation matrix results in significantly more divergent transformations than a matrix which performs averaging over two or more samples. The differences between larger averaging factors are less significant; the choice of a suitable averaging parameter is therefore mainly determined by the associated computational burden of the encoding process.
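The relative-frequency estimate in (14) is easy to reproduce in outline. The sketch below uses a random surrogate matrix (each block row draws from a few source blocks, scaling coefficients uniform on [-s_max, s_max]); it does not reproduce the exact structure of Jacquin's transformation matrix, so the numbers are only qualitative.

```python
import numpy as np

rng = np.random.default_rng(1)

def largest_eig_magnitude(n_blocks, s_max, averaging):
    # Surrogate linear part: row i fetches from 'averaging' source blocks,
    # each weighted by s/averaging with s ~ U(-s_max, s_max).
    A = np.zeros((n_blocks, n_blocks))
    for i in range(n_blocks):
        cols = rng.choice(n_blocks, size=averaging, replace=False)
        A[i, cols] = rng.uniform(-s_max, s_max) / averaging
    return np.max(np.abs(np.linalg.eigvals(A)))

def divergence_probability(n_trials, **scheme):
    # Relative-frequency estimate of eq. (14): Prob ~ N_large / N
    n_large = sum(largest_eig_magnitude(**scheme) > 1.0 for _ in range(n_trials))
    return n_large / n_trials

for avg in (1, 2, 4):      # 1 = plain decimation, >1 = averaging of several samples
    p = divergence_probability(500, n_blocks=32, s_max=1.2, averaging=avg)
    print(f"averaging={avg}:  estimated Prob(divergence) = {p:.3f}")
```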
[Figure 4: Probability of divergent reconstruction for Jacquin's coding scheme.]

4. Summary
In this paper the transformation matrices of fractal coding schemes have been examined. Since all eigenvalues of the transformation matrix must lie within the unit circle, the probability of divergent reconstruction sequences at the decoder can be quantified by determining the pdf of its eigenvalues.
The conditional equation for the eigenvalues has been given for a rather general class of coding schemes. By modelling the scaling parameters as uniformly distributed, statistically independent random variables, the eigenvalues themselves become functions of random variables. Their pdfs have been modelled for the two special cases of non-concatenated and decimating coding schemes. This allows the probability of divergent reconstruction at the decoder to be specified and an appropriate choice of some design parameters to be made. Up to now no simple conditional equation has been found for the eigenvalues of Jacquin's original proposal; its pdf is therefore approximated by the relative frequency with which the largest eigenvalue exceeds one in magnitude over many repetitions of an experiment.

References
[1] M. F. Barnsley, V. Ervin, D. Hardin, and J. Lancaster, "Solution of an inverse problem for fractals and other sets," Proceedings of the National Academy of Sciences, vol. 83, pp. 1975-1977, Apr. 1986.
[2] M. F. Barnsley and J. H. Elton, "A new class of Markov processes for image encoding," Advances in Applied Probability, vol. 20, pp. 14-22, 1988.
[3] A. E. Jacquin, "Fractal image coding based on a theory of iterated contractive image transformations," in Proceedings SPIE Visual Communications and Image Processing '90, vol. 1360, pp. 227-239, 1990.
[4] A. E. Jacquin, "Fractal image coding: A review," Proceedings of the IEEE, vol. 81, pp. 1451-1465, Oct. 1993.
[5] L. Lundheim, Fractal Signal Modelling for Source Coding. PhD thesis, Universitetet i Trondheim, Norges Tekniske Høgskole, 1992.
[6] Y. Fisher, E. W. Jacobs, and R. D. Boss, "Fractal image compression using iterated transforms," in Image and Text Compression (J. A. Storer, ed.), ch. 2, pp. 35-61, Kluwer Academic Publishers, 1992.
[7] B. Hürtgen, "Contractivity of fractal transforms for image coding," Electronics Letters, vol. 29, pp. 1749-1750, Sept. 1993.
[8] B. Hürtgen and S. F. Simon, "On the problem of convergence in fractal coding schemes," in Proceedings of the IEEE International Conference on Image Processing ICIP '94, vol. 3, Austin, Texas, USA, pp. 103-106, Nov. 1994.
[9] Y. Fisher, E. W. Jacobs, and R. D. Boss, "Iterated transform image compression," Tech. Rep. 1408, Naval Ocean Systems Center, San Diego, CA, Apr. 1991.
[10] E. W. Jacobs, Y. Fisher, and R. D. Boss, "Image compression: A study of the iterated transform method," Signal Processing (Elsevier), vol. 29, pp. 251-263, 1992.
[11] D. M. Monro, "Class of fractal transforms," Electronics Letters, vol. 29, no. 4, pp. 362-363, 1993.
[12] D. M. Monro and F. Dudbridge, "Fractal approximation of image blocks," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP '92, vol. 3, pp. 485-488, 1992.
[13] B. Hürtgen and T. Hain, "On the convergence of fractal transforms," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP '94, vol. 5, pp. 561-564, 1994.
Patent: RECEPTION ARRANGEMENT FOR A RADIO SIGNAL
Inventor: KOERNER, HEIKO
Application number: EP03787718.0, filed 2003-07-22
Publication number: EP1525729A1, published 2005-04-27
Applicant: INFINEON TECHNOLOGIES AG, St.-Martin-Strasse 53, 81669 München, DE
Representative: Epping Hermann & Fischer
Abstract: The invention relates to a reception arrangement for a radio signal, comprising a downlink mixer (3) with a downstream signal-processing chain and a demodulator (6). An RSSI signal is obtained between the downlink mixer (3) and the demodulator (6), for example in a limiter (5). This signal is filtered by a high-pass filter, averaged and supplied to a comparator (15). From the level of the RSSI signal thus processed, the recognition unit (8) can easily detect whether a useful signal is present and whether it is amplitude-modulated or frequency-modulated. The demodulator (6) is correspondingly controlled by a control unit (9). With this principle, the recognition process can be carried out especially easily and precisely, and the demodulator can be adjusted according to whether an amplitude-modulated or a frequency-modulated signal is received.
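As a rough illustration of the detection idea in the abstract (high-pass filter the RSSI, average the result, compare against a threshold), here is a small simulation sketch. The signal model, filter length and threshold values are invented for the example and are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 2e-3, 1e-6)             # 2 ms of samples at 1 MHz (toy values)

def rssi_like(kind):
    # Crude stand-ins for the RSSI of an AM signal, an FM signal, and no signal:
    # AM -> the envelope (hence the RSSI) fluctuates, FM -> nearly constant RSSI.
    if kind == "am":
        return 1.0 + 0.5 * np.sin(2 * np.pi * 10e3 * t) + 0.05 * rng.standard_normal(t.size)
    if kind == "fm":
        return 1.0 + 0.05 * rng.standard_normal(t.size)
    return 0.05 * rng.standard_normal(t.size)

def classify(rssi, threshold=0.1):
    ac = rssi - np.convolve(rssi, np.ones(64) / 64, mode="same")   # simple high-pass
    level = np.mean(np.abs(ac))                                    # averaged AC level
    if np.mean(rssi) < 3 * threshold:
        return "no useful signal"
    return "amplitude-modulated" if level > threshold else "frequency-modulated"

for kind in ("am", "fm", "none"):
    print(kind, "->", classify(rssi_like(kind)))
```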
Homomorphic Evaluation of the AES Circuit
Craig Gentry (IBM Research), Shai Halevi (IBM Research), Nigel P. Smart (University of Bristol)
February 16, 2012

Abstract
We describe a working implementation of leveled homomorphic encryption (without bootstrapping) that can evaluate the AES-128 circuit. Our current implementation takes about a week to evaluate an entire AES encryption operation, using NTL (over GMP) as our underlying software platform, and running on a large-memory machine. Using SIMD techniques, we can process close to 100 blocks in each evaluation, yielding an amortized rate of roughly 2 hours per block.
For this implementation we developed both AES-specific optimizations as well as several "generic" tools for FHE evaluation. These last tools include (among others) a different variant of the Brakerski-Vaikuntanathan key-switching technique that does not require reducing the norm of the ciphertext vector, and a method of implementing the Brakerski-Gentry-Vaikuntanathan modulus-switching transformation on ciphertexts in CRT representation.
Keywords: AES, Fully Homomorphic Encryption, Implementation

The first and second authors are sponsored by DARPA under agreement number FA8750-11-C-0096. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Distribution Statement "A" (Approved for Public Release, Distribution Unlimited). The third author is sponsored by DARPA and AFRL under agreement number FA8750-11-2-0079. The same disclaimers as above apply. He is also supported by the European Commission through the ICT Programme under Contract ICT-2007-216676 ECRYPT II and via an ERC Advanced Grant ERC-2010-AdG-267188-CRIPTO, by EPSRC via grant COED-EP/I03126X, and by a Royal Society Wolfson Merit Award. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the European Commission or EPSRC.

Contents
1 Introduction
2 Background
  2.1 Notations and Mathematical Background
  2.2 BGV-type Cryptosystems
  2.3 Computing on Packed Ciphertexts
3 General-Purpose Optimizations
  3.1 A New Variant of Key Switching
  3.2 Modulus Switching in Evaluation Representation
  3.3 Dynamic Noise Management
  3.4 Randomized Multiplication by Constants
4 Homomorphic Evaluation of AES
  4.1 Homomorphic Evaluation of the Basic Operations
  4.2 Implementing The Permutations
  4.3 Performance Details
References
A More Details
  A.1 Plaintext Slots
  A.2 Canonical Embedding Norm
  A.3 Double CRT Representation
  A.4 Sampling From A_q
  A.5 Canonical embedding norm of random polynomials
B The Basic Scheme
  B.1 Our Moduli Chain
  B.2 Modulus Switching
  B.3 Key Switching
  B.4 Key-Generation, Encryption, and Decryption
  B.5 Homomorphic Operations
C Security Analysis and Parameter Settings
  C.1 Lower-Bounding the Dimension
    C.1.1 LWE with Sparse Key
  C.2 The Modulus Size
  C.3 Putting It Together
D Further AES Implementation Methods
E Scale(c, q_t, q_{t-1}) in dble-CRT Representation

1 Introduction
In his breakthrough result [11], Gentry demonstrated that fully-homomorphic encryption was theoretically possible, assuming the hardness of some problems in integer lattices. Since then, many different improvements have been made, proposing new
variants,improving efficiency,suggesting other hardness assump-tions,etc.Some of these works were accompanied by implementation[20,12,7,21,16],but all the imple-mentations so far were either“proofs of concept”that can compute only one basic operation at a time(at great cost),or special-purpose implementations limited to evaluating very simple functions.In this work we report on thefirst implementation powerful enough to support an“interesting real world circuit”.Specifi-cally,we implemented a variant of the leveled FHE-without-bootstrapping scheme of Brakerski,Gentry,and Vaikuntanathan[4](BGV),with support for deep enough circuits so that we can evaluate an entire AES-128 encryption operation.Why AES?We chose to shoot for an evaluation of AES since it seems like a natural benchmark:AES is widely deployed and used extensively in security-aware applications(so it is“practically relevant”to imple-ment it),and the AES circuit is nontrivial on one hand,but on the other hand not astronomical.Moreover the AES circuit has a regular(and quite“algebraic”)structure,which is amenable to parallelism and optimiza-tions.Indeed,for these same reasons AES is often used as a benchmark for implementations of protocols for secure multi-party computation(MPC),for example[19,8,14,15].Using the same yardstick to measure FHE and MPC protocols is quite natural,since these techniques target similar application domains and in some cases both techniques can be used to solve the same problem.Beyond being a natural benchmark,homomorphic evaluation of AES decryption also has interesting applications:When data is encrypted under AES and we want to compute on that data,then homomorphic AES decryption would transform this AES-encrypted data into an FHE-encrypted data,and then we could perform whatever computation we wanted.(Such applications were alluded to in[16,21]).Our Contributions.Our implementation is based on a variant of the ring-LWE scheme of BGV[4,6,5], using the techniques of Smart and Vercauteren(SV)[21]and Gentry,Halevi and Smart(GHS)[13],and we introduce many new optimizations.Some of our optimizations are specific to AES,these are described in Section4.Most of our optimization,however,are more general-purpose and can be used for homomorphic evaluation of other circuits,these are described in Section3.Many of our general-purpose optimizations are aimed at reducing the number of FFTs and CRTs that we need to perform,by reducing the number of times that we need to convert polynomials between coef-ficient and evaluation representations.Since the cryptosystem is defined over a polynomial ring,many of the operations involve various manipulation of integer polynomials,such as modular multiplications and additions and Frobenius maps.Most of these operations can be performed more efficiently in evaluation representation,when a polynomial is represented by the vector of values that it assumes in all the roots of the ring polynomial(for example polynomial multiplication is just point-wise multiplication of the evalu-ation values).On the other hand some operations in BGV-type cryptosystems(such as key switching and modulus switching)seem to require coefficient representation,where a polynomial is represented by listing all its coefficients.1Hence a“naive implementation”of FHE would need to convert the polynomials back and forth between the two representations,and these conversions turn out to be the most time-consuming part of the execution.In our implementation we keep ciphertexts in evaluation representation at all 
times, converting to coefficient representation only when needed for some operation,and then converting back.1The need for coefficient representation ultimately stems from the fact that the noise in the ciphertexts is small in coefficient representation but not in evaluation representation.1We describe variants of key switching and modulus switching that can be implemented while keeping almost all the polynomials in evaluation representation.Our key-switching variant has another advantage, in that it significantly reduces the size of the key-switching matrices in the public key.This is particularly important since the main limiting factor for evaluating deep circuits turns out to be the ability to keep the key-switching matrices in memory.Other optimizations that we present are meant to reduce the number of modulus switching and key switching operations that we need to do.This is done by tweaking some operations(such as multiplication by constant)to get a slower noise increase,by“batching”some operations before applying key switching,and by attaching to each ciphertext an estimate of the“noisiness”of this ciphertext,in order to support better noise bookkeeping.Our Implementation.Our implementation was based on the NTL C++library running over GMP,we utilized a machine which consisted of a processing unit of Intel Xeon CPUs running at2.0GHz with18MB cache,and most importantly with256GB of RAM.2Memory was our main limiting factor in the implemen-tation.With this machine it took us just under eight days to compute a single block AES encryption using an implementation choice which minimizes the amount of memory required;this is roughly two orders of magnitude faster than what could be done with the Gentry-Halevi implementation[12].The computation was performed on ciphertexts that could hold1512plaintext slots each;where each slot holds an element of F28.This means that we can compute 1512/16 =94AES operations in parallel,which gives an amortize time per block of roughly two hours.We note that there are a multitude of optimizations that one can perform on our basic implementation. Most importantly,we believe that by using the“bootstrapping as optimization”technique from BGV[4]we can speedup the AES performance by an additional order of magnitude.Also,there are great gains to be had by making better use of parallelism:Unfortunately,the NTL library(which serves as our underlying software platform)is not thread safe,which severely limits our ability to utilize the multi-core functionality of modern processors(our test machine has24cores).We expect that by utilizing many threads we can speed up some of our(higher memory)AES variants by as much as a16x factor;just by letting each thread compute a different S-box lookup.Organization.In Section2we review the main features of BGV-type cryptosystems[5,4],and briefly survey the techniques for homomorphic computation on packed ciphertexts from SV and GHS[21,13]. 
Then in Section3we describe our“general-purpose”optimizations on a high level,with additional details provided in Appendices A and B.A brief overview of AES and a high-level description(and performance numbers)of one of our AES-specific implementations is provided in Section4,with details of alternative implementations being provided in Appendix D.2Background2.1Notations and Mathematical BackgroundFor an integer q we identify the ring Z/q Z with the interval(−q/2,q/2]∩Z,and we use[z]q to denote the reduction of the integer z modulo q into that interval.Our implementation utilizes polynomial rings defined by cyclotomic polynomials,A=Z[X]/Φm(X).The ring A is the ring of integers of a the m th cyclotomic numberfield Q(ζm).We let A q def=A/q A=Z[X]/(Φm(X),q)for the(possibly composite)integer q,and we identify A q with the set of integer polynomials of degree uptoφ(m)−1reduced modulo q.2This machine was BlueCrystal Phase2;and the authors would like to thank the University of Bristol’s Advanced Computing Research Centre(https:///)for access to this facility2Coefficient vs.Evaluation Representation.Let m,q be two integers such that Z /q Z contains a primitive m -th root of unity,and denote one such primitive m -th root of unity by ζ∈Z /q Z .Recallthat the m ’th cyclotomic polynomial splits into linear terms modulo q ,Φm (X )= i ∈(Z /m Z )∗(X −ζi )(mod q ).For an element a ∈A q ,we consider two ways of representing it:Viewing a as a degree-(φ(m )−1)poly-nomial,a (X )= i<φ(m )a i X i ,we can just list all the coefficients in order a = a 0,a 1,...,a φ(m )−1 ∈(Z /q Z )φ(m ).We call a the coefficient representation of a .For the other representation we consider the values that the polynomial a (X )assumes on all primitive m -th roots of unity modulo q ,b i =a (ζi )mod q for i ∈(Z /m Z )∗.The b i ’s in order also yield a vector b ∈(Z /q Z )φ(m ),which we call the evaluation representation of a .Clearly these two representations are related via b =V m ·a ,where V m is the Van-dermonde matrix over the primitive m -th roots of unity modulo q .We remark that for all i we have the equality (a mod (X −ζi ))=a (ζi )=b i ,hence the evaluation representation of a is just a polynomial Chinese-Remaindering representation.In both evaluation and coefficient representations,an element a ∈A q is represented by a φ(m )-vector of integers in Z /q Z .If q is a composite then each of these integers can itself be represented either using the standard binary encoding of integers or using Chinese-Remaindering relative to the factors of q .We usually use the standard binary encoding for the coefficient representation and Chinese-Remaindering for the evaluation representation.(Hence the latter representation is really a double CRT representation,relative to both the polynomial factors of Φm (X )and the integer factors of q .)2.2BGV-type CryptosystemsOur implementation uses a variant of the BGV cryptosystem due to Gentry,Halevi and Smart,specifically the one described in [13,Appendix D](in the full version).In this cryptosystem both ciphertexts and secret keys are vectors over the polynomial ring A ,and the native plaintext space is the space of binary polynomials A 2.(More generally it could be A p for some fixed p ≥2,but in our case we will always use A 2.)At any point during the homomorphic evaluation there is some “current integer modulus q ”and “current secret key s ”,that change from time to time.A ciphertext c is decrypted using the current secret key s by taking inner product over A q (with q the current modulus)and 
then reducing the result modulo 2in coefficient representation .Namely,the decryption formula isa ←[[ c ,s mod Φm (X )]q noise ]2.(1)The polynomial [ c ,s mod Φm (X )]q is called the “noise”in the ciphertext c .Informally,c is a valid ciphertext with respect to secret key s and modulus q if this noise has “sufficiently small norm”relative to q .The meaning of “sufficiently small norm”is whatever is needed to ensure that the noise does not wrap around q when performing homomorphic operations,in our implementation we keep the norm of the noise always below some pre-set bound (which is determined in Appendix C.2).The specific norm that we use to evaluate the magnitude of the noise is the “canonical embedding norm reduced mod q ”,as described in [13,Appendix D](in the full version).This is useful to get smaller parameters,but for the purpose of presentation the reader can think of the norm as the Euclidean norm of the noise in coefficient representation.More details are given in the Appendices.We refer to the norm of the noise as the noise magnitude .The central feature of BGV-type cryptosystems is that the current secret key and modulus evolve as we apply operations to ciphertexts.We apply five different operations to ciphertexts during homomorphic evaluation.Three of them —addition,multiplication,and automorphism —are “semantic operations”that we use to evolve the plaintext data which is encrypted under those ciphertexts.The other two operations3—key-switching and modulus-switching —are used for “maintenance”:These operations do not change the plaintext at all,they only change the current key or modulus (respectively),and they are mainly used to control the complexity of the evaluation.Below we briefly describe each of these five operations on a high level.For the sake of self-containment,we also describe key generation and encryption in Appendix B.More detailed description can be found in [13,Appendix D].Addition.Homomorphic addition of two ciphertext vectors with respect to the same secret key and mod-ulus q is done just by adding the vectors over A q .If the two arguments were encrypting the plaintext polynomials a 1,a 2∈A 2then the sum will be an encryption of a 1+a 2∈A 2.This operation has no effect on the current modulus or key,and the norm of the noise is at most the sum of norms from the noise in the two arguments.Multiplication.Homomorphic multiplication is done via tensor product over A q .In principle,if the two arguments have dimension n over A q then the product ciphertext has dimension n 2,each entry in the output computed as the product of one entry from the first argument and one entry from the second.3This operation does not change the current modulus,but it changes the current key:If the two input ciphertexts are valid with respect to the dimension-n secret key vector s ,encrypting the plaintext polynomi-als a 1,a 2∈A 2,then the output is valid with respect to the dimension-n 2secret key s which is the tensor product of s with itself,and it encrypt the polynomial a 1·a 2∈A 2.The norm of the noise in the product ciphertext can be bounded in terms of the product of norms of the noise in the two arguments.The specific bound depends on the norm in use,for our choice of norm function the norm of the product is no larger than the product of the norms of the two arguments.Key Switching.The public key of BGV-type cryptosystems includes additional components to enable converting a valid ciphertext with respect to one key into a valid ciphertext encrypting the same plaintext with 
respect to another key.For example,this is used to convert the product ciphertext which is valid with respect to a high-dimension key back to a ciphertext with respect to the original low-dimension key.To allow conversion from dimension-n key s to dimension-n key s (both with respect to the same modulus q ),we include in the public key a matrix W =W [s →s ]over A q ,where the i ’th column of W is roughly an encryption of the i ’th entry of s with respect to s (and the current modulus).Then given a valid ciphertext c with respect to s ,we roughly compute c =W ·c to get a valid ciphertext with respect to s .In some more detail,the BGV key switching transformation first ensures that the norm of the ciphertext c itself is sufficiently low with respect to q .In [4]this was done by working with the binary encoding of c ,and one of our main optimization in this work is a different method for achieving the same goal (cf.Section 3.1).Then,if the i ’th entry in s is s i ∈A (with norm smaller than q ),then the i ’th column of W [s →s ]is an n -vector w i such that [ w i ,s mod Φm (X )]q =2e i +s i for a low-norm polynomial e i ∈A .Denoting e =(e 1,...,e n ),this means that we have s W =s +2e over A q .For any ciphertext vector c ,setting c =W ·c ∈A q we get the equation[ c ,s mod Φm (X )]q =[s W c mod Φm (X )]q =[ c ,s +2 c ,e mod Φm (X )]qSince c ,e ,and [ c ,s mod Φm (X )]q all have low norm relative to q ,then the addition on the right-hand side does not cause a wrap around q ,hence we get [[ c ,s mod Φm (X )]q ]2=[[ c ,s mod Φm (X )]q ]2,as needed.The key-switching operation changes the current secret key from s to s ,and does not change the current modulus.The norm of the noise is increased by at most an additive factor of 2 c ,e .3It was shown in [6]that over polynomial rings this operation can be implemented while increasing the dimension only to 2n −1rather than to n 2.4Modulus Switching.The modulus switching operation is intended to reduce the norm of the noise,to compensate for the noise increase that results from all the other operations.To convert a ciphertext c with respect to secret key s and modulus q into a ciphertext c encrypting the same thing with respect to the same secret key but modulus q ,we roughly just scale c by a factor q /q (thus getting a fractional ciphertext),then round appropriately to get back an integer ciphertext.Specifically c is a ciphertext vector satisfying(a)c =c (mod 2),and (b)the “rounding error term”τdef =c −(q /q )c has low norm.Converting cto c is easy in coefficient representation,and one of our optimizations is a method for doing the same in evaluation representation (cf.Section 3.2)This operation leaves the current key s unchanged,changes the current modulus from q to q ,and the norm of the noise is changed as n ≤(q /q ) n + τ·s .Note that if the key s has low norm and q is sufficiently smaller than q ,then the noise magnitude decreases by this operation.A BGV-type cryptosystem has a chain of moduli,q 0<q 1···<q L −1,where fresh ciphertexts are with respect to the largest modulus q L −1.During homomorphic evaluation every time the (estimated)noise grows too large we apply modulus switching from q i to q i −1in order to decrease it back.Eventually we get ciphertexts with respect to the smallest modulus q 0,and we cannot compute on them anymore (except by using bootstrapping).Automorphisms.In addition to adding and multiplying polynomials,another useful operation is convert-ing the polynomial a (X )∈A to a (i )(X )def =a (X i )mod Φm (X ).Denoting by κi 
the transformationκi :a →a (i ),it is a standard fact that the set of transformations {κi :i ∈(Z /m Z )∗}forms a group under composition (which is the Galois group G al (Q (ζm )/Q )),and this group is isomorphic to (Z /m Z )∗.In [4,13]it was shown that applying the transformations κi to the plaintext polynomials is very useful,some more examples of its use can be found in our Section 4.Denoting by c (i ),s (i )the vector obtained by applying κi to each entry in c ,s ,respectively,it was shown in [4,13]that if s is a valid ciphertext encrypting a with respect to key s and modulus q ,then c (i )is a valid ciphertext encrypting a (i )with respect to key s (i )and the same modulus q .Moreover the norm of noise remains the same under this operation.We remark that we can apply key-switching to c (i )in order to get an encryption of a (i )with respect to the original key s .2.3Computing on Packed CiphertextsSmart and Vercauteren observed [20,21]that the plaintext space A 2can be viewed as a vector of “plaintext slots”,by an application the polynomial Chinese Remainder Theorem.Specifically,if the ring polynomial Φm (X )factors modulo 2into a product of irreducible factors Φm (X )= −1j =0F j (X )(mod 2),then a plaintext polynomial a (X )∈A 2can be viewed as encoding different small polynomials,a j =a mod F j .Just like for integer Chinese Remaindering,addition and multiplication in A 2correspond to element-wise addition and multiplication of the vectors of slots.The effect of the automorphisms is a little more involved.When i is a power of two then the transforma-tions κi :a →a (i )is just applied to each slot separately.When i is not a power of two the transformation κi has the effect of roughly shifting the values between the different slots.For example,for some parameters we could get a cyclic shift of the vector of slots:If a encodes the vector (a 0,a 1,...,a −1),then κi (a )(for some i )could encode the vector (a −1,a 0,...,a −2).This was used in [13]to devise efficient procedures for applying arbitrary permutations to the plaintext slots.We note that the values in the plaintext slots are not just bits,rather they are polynomials modulo the irreducible F j ’s,so they can be used to represents elements in extension fields GF (2d ).In particular,in some of our AES implementations we used the plaintext slots to hold elements of GF (28),and encrypt one5byte of the AES state in each slot.Then we can use an adaption of the techniques from [13]to permute the slots when performing the AES row-shift and column-mix.3General-Purpose OptimizationsBelow we summarize our optimizations that are not tied directly to the AES circuit and can be used also in homomorphic evaluation of other circuits.Underlying many of these optimizations is our choice of keeping ciphertext and key-switching matrices in evaluation (double-CRT)representation.Our chain of moduli is defined via a set of primes of roughly the same size,p 0,...,p L −1,all chosen such that Z /p i Z has a m ’th roots of unity.(In other words,m |p i −1for all i .)For i =0,...,L −1we then define our i ’th modulus as q i = i j =0p i .The primes p 0and p L −1are special (p 0is chosen to ensure decryption works,and p L −1is chosen to control noise immediately after encryption),however all other primes p i are of size 217≤p i ≤220if L <100,see Appendix C.In the t -th level of the scheme we have ciphertexts consisting of elements in A q t (i.e.,polynomialsmodulo (Φm (X ),q t )).We represent an element c ∈A q t by a φ(m )×(t +1)“matrix”of its evaluationsat the 
primitive m -th roots of unity modulo the primes p 0,...,p t .Computing this representation from the coefficient representation of c involves reducing c modulo the p i ’s and then t +1invocations of the FFT algorithm,modulo each of the p i (picking only the FFT coefficients corresponding to (Z /m Z )∗).To convert back to coefficient representation we invoke the inverse FFT algorithm t +1times,each time padding the φ(m )-vector of evaluation point with m −φ(m )zeros (for the evaluations at the non-primitive roots of unity).This yields the coefficients of t +1polynomials modulo (X m −1,p i )for i =0,...,t ,we then reduce each of these polynomials modulo (Φm (X ),p i )and apply Chinese Remainder interpolation.We stress that we try to perform these transformations as rarely as we can.3.1A New Variant of Key SwitchingAs described in Section 2,the key-switching transformation introduces an additive factor of 2 c ,e in the noise,where c is the input ciphertext and e is the noise component in the key-switching matrix.To keep the noise magnitude below the modulus q ,it seems that we need to ensure that the ciphertext c itself has low norm.In BGV [4]this was done by representing c as a fixed linear combination of small vectors,i.e.c = i 2i c i with c i the vector of i ’th bits in c .Considering the high-dimension ciphertextc ∗=(c 0|c 1|c 2|···)and secret key s ∗=(s |2s |4s |···),we note that we have c ∗,s ∗ = c ,s ,and c ∗has low norm (since it consists of 0-1polynomials).BGV therefore included in the public key the matrix W =W [s ∗→s ](rather than W [s →s ]),and had the key-switching transformation computes c ∗from c and sets c =W ·c ∗.When implementing key-switching,there are two drawbacks to the above approach.First,this increases the dimension (and hence the size)of the key switching matrix.This drawback is fatal when evaluating deep circuits,since having enough memory to keep the key-switching matrices turns out to be the limiting factor in our ability to evaluate these deep circuits.Another drawback is it seems that this key-switching procedure requires that we first convert c to coefficient representation in order to compute the c i ’s,then convert each of the c i ’s back to evaluation representation before multiplying by the key-switching matrix.In level t of the circuit,this seem to require Ω(t log q t )FFTs.In this work we propose a different variant:Rather than manipulating c to decrease its norm,we instead temporarily increase the modulus q .To that end we recall that for a valid ciphertext c ,encrypting plaintext a with respect to s and q ,we have the equality c ,s =2e +a over A q ,for a low-norm polynomial e .6This equality,we note,implies that for every odd integer p we have the equality c ,p s =2e +a ,holding over A pq ,for the “low-norm”polynomial e (namely e =p ·e +p −12a ).Clearly,when considered relativeto secret key p s and modulus pq ,the noise in c is p times larger than it was relative to s and q .However,since the modulus is also p times larger,we maintain that the noise has norm sufficiently smaller than the modulus.In other words,c is still a valid ciphertext that encrypts the same plaintext a with respect to secret key p s and modulus pq .By taking p large enough,we can ensure that the norm of c (which is independent of p )is sufficiently small relative to the modulus pq .We therefore include in the public key a matrix W =W [p s →s ]modulo pq for a large enough odd integer p .(Specifically we need p ≈q √m .)Given a ciphertext c ,valid with respect to s and q ,we apply the 
key-switching transformation simply by setting c =W ·c over A pq .The additive noise term c ,e that we get is now small enough relative to our large modulus pq ,thus the resulting ciphertext c is valid with respect to s and pq .We can now switch the modulus back to q (using our modulus switching routine),hence getting a valid ciphertext with respect to s and q .We note that even though we no longer break c into its binary encoding,it seems that we still need to recover it in coefficient representation in order to compute the evaluations of c mod p .However,since we do not increase the dimension of the ciphertext vector,this procedure requires only O (t )FFTs in level t (vs.O (t log q t )=O (t 2)for the original BGV variant).Also,the size of the key-switching matrix is reduced by roughly the same factor of log q t .Our new variant comes with a price tag,however:We use key-switching matrices relative to a larger modulus,but still need the noise term in this matrix to be small.This means that the LWE problem under-lying this key-switching matrix has larger ratio of modulus/noise,implying that we need a larger dimension to get the same level of security than with the original BGV variant.In fact,since our modulus is more than squared (from q to pq with p >q ),the dimension is increased by more than a factor of two.This translates to more than doubling of the key-switching matrix,partly negating the size and running time advantage that we get from this variant.We comment that a hybrid of the two approaches could also be used:we can decrease the norm of c only somewhat by breaking it into digits (as opposed to binary bits as in [4]),and then increase the modulus somewhat until it is large enough relative to the smaller norm of c .We speculate that the optimal setting in terms of runtime is found around p ≈√q ,but so far did not try to explore this tradeoff.3.2Modulus Switching in Evaluation RepresentationGiven an element c ∈A q t in evaluation (double-CRT)representation relative to q t = t j =0p j ,we wantto modulus-switch to q t −1–i.e.,scale down by a factor of p t ;we call this operation Scale (c,q t ,q t −1)The output should be c ∈A ,represented via the same double-CRT format (with respect to p 0,...,p t −1),such that (a)c ≡c (mod 2),and (b)the “rounding error term”τ=c −(c/p t )has a very low norm.As p t is odd,we can equivalently require that the element c †def=p t ·c satisfy(i)c †is divisible by p t ,(ii)c †≡c (mod 2),and(iii)c †−c (which is equal to p t ·τ)has low norm.Rather than computing c directly,we will first compute c †and then set c ←c †/p t .Observe that once we compute c †in double-CRT format,it is easy to output also c in double-CRT format:given the evaluations for c †modulo p j (j <t ),simply multiply them by p −1t mod p j .The algorithm to output c †in double-CRT format is as follows:7。
A Spline-Curve Reconstruction Machining Algorithm for Continuous Micro Line Segments
Authors: 陶佳安, 陈胜, 黄宇亮, 施群
Abstract: In machining complex contours with continuous micro line segments, large fluctuations of the composite (path) velocity reduce machining efficiency. To address this problem, a new spline-curve reconstruction algorithm suited to micro-segment machining is proposed. The method comprises a spline curve with a fast recursive formulation, together with velocity planning and fast recursive interpolation based on that curve. Experiments show that, while keeping the acceleration continuous, the spline reconstruction and velocity planning reduce frequent acceleration and deceleration and thus improve machining efficiency; the fast recursion speeds up the interpolation computation, and the interpolation points pass exactly through the micro-segment knots, so machining accuracy is preserved and the performance of the CNC system is improved.
Journal: Computer Integrated Manufacturing Systems (计算机集成制造系统), 2012, Vol. 18, No. 6, pp. 1195-1199 (5 pages)
Keywords: spline curve; micro line segment; acceleration/deceleration; composite velocity; interpolation; algorithm
Affiliations: School of Communication and Information Engineering, Shanghai University, Shanghai 200072; School of Mechatronic Engineering and Automation, Shanghai University
Language: Chinese; CLC classification: TG659

0 Introduction
In computer-aided manufacturing (CAM) systems for NC machining of complex contours, curves are approximated by straight lines, which produces large numbers of tiny line-segment G-code blocks; the CNC system then interprets and executes these blocks to control the motion of the axes.
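The record above gives no algorithmic details, so the following is only a generic illustration of the underlying idea (replace a chain of micro line segments by a smooth interpolating spline and sample it at a planned feed rate); the spline type, parameter values and function names are invented for the example and are not the authors' method.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Knots of a chain of micro line segments (toy data, in mm)
knots = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.35], [1.5, 0.7], [2.0, 1.2], [2.5, 1.6]])

# Chord-length parameterization, then one interpolating spline per coordinate;
# the spline passes exactly through every micro-segment knot.
chord = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(knots, axis=0), axis=1))))
sx, sy = CubicSpline(chord, knots[:, 0]), CubicSpline(chord, knots[:, 1])

# Very crude "velocity planning": trapezoidal feed profile along the chord length,
# then interpolation points generated by stepping the path parameter at that feed.
total, dt, feed_max, accel = chord[-1], 0.002, 20.0, 200.0   # mm, s, mm/s, mm/s^2
s, v, samples = 0.0, 0.0, []
while s < total:
    v = min(feed_max, v + accel * dt, np.sqrt(2 * accel * max(total - s, 0.0)))
    s = min(s + v * dt, total)
    samples.append((sx(s), sy(s)))

print(len(samples), "interpolation points, ending at", samples[-1])
```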
Accrue: to increase naturally; to accumulate
Deplete: to use up; to exhaust
Deterioration: worsening; degradation
Duration: the length of time something lasts
Elimination: removal
Fragile: easily broken; brittle
Hammering: pounding; beating
Humid: damp; moist
Impetus: driving force; stimulus
Irrefutable: impossible to refute or disprove
No-tillage: farming without ploughing (no-till)
Salination: salinization
Mulch: a protective covering (of soil)
Stubble-mulch: a mulch of crop stubble
Amenity: something that makes surroundings pleasant
Deceleration: slowing down
Elasticity: springiness; flexibility
Incremental: increasing by small amounts
Infrastructure: basic facilities and installations
Ingredient: a component; an element of a mixture
Pronounced: marked; noticeable
Scenario: a projected plan or sequence of events
Strident: harsh; shrill
Subsector: a subdivision or part of a sector
Substantially: to a large extent; essentially
Welfare: well-being; health and prosperity
Acquisition: the act of acquiring; something acquired
Cytoplasmic: of the cytoplasm
Cytoplasmic male sterility: male sterility carried in the cytoplasm
Genotype: the genetic constitution of an organism
Hybrid: produced by crossing different varieties
Hybridization: the crossing of different varieties
Pollinate: to transfer pollen to

Since the 1940s, and particularly during the last thirty years, maize and wheat have become increasingly important in developing countries. In the early 1960s, the developing world accounted for just over a third of global maize production and less than 30% of global wheat output. By the early 1990s, developing countries accounted for more than 45% of total world production of both cereals. Maize production in developing countries grew at an annual rate of 3.6% from 1961 to 1995; wheat production grew at an even faster rate.
Inflation Targets and Debt Accumulation in a Monetary Union*
Roel Beetsma (University of Amsterdam and CEPR)† and Lans Bovenberg (Tilburg University and CEPR)‡
October 1999

Abstract
This paper explores the interaction between centralized monetary policy and decentralized fiscal policy in a monetary union. Discretionary monetary policy suffers from a failure to commit. Moreover, decentralized fiscal policymakers impose externalities on each other through the influence of their debt policies on the common monetary policy. These imperfections can be alleviated by adopting state-contingent inflation targets (to combat the monetary policy commitment problem) and shock-contingent debt targets (to internalize the externalities due to decentralized fiscal policy).
Keywords: discretionary monetary policy, decentralized fiscal policy, monetary union, inflation targets, debt targets.
JEL Codes: E52, E58, E61, E62.

* We thank David Vestin and the participants of the EPRU Workshop "Structural Change and European Economic Integration" for helpful comments on an earlier version of this paper. The usual disclaimer applies.
† Mailing address Roel Beetsma: Department of Economics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands (phone: +31.20.5255280; fax: +31.20.5254254; e-mail: Beetsma@fee.uva.nl).
‡ Mailing address Lans Bovenberg: Department of Economics, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands (phone: +31.13.4662912; fax: +31.13.4663042; e-mail: A.L.Bovenberg@kub.nl).

1. Introduction
Although the Maastricht Treaty has laid the institutional foundations for European Monetary Union (EMU), how these institutions can best be operated in practice remains to be seen in the coming years. For example, the European Central Bank (ECB) has announced a two-tier monetary policy strategy based on a reference value for money growth and an indicator based on a number of other measures, such as output gaps, inflation expectations, etcetera (see European Central Bank, 1999). Over time the ECB may well shift to implicit targeting of inflation. Indeed, a number of economists have argued (e.g., see Svensson, 1998) that the Bundesbank also pursued such a strategy. Furthermore, how the Excessive Deficit Procedure and the Stability and Growth Pact (see Beetsma and Uhlig, 1999) will work in practice is not yet clear.
This paper deals with the interaction between inflation targets and constraints on decentralized fiscal policy in a monetary union. To do so, we extend our earlier work on the interaction between a common monetary policy and decentralized fiscal policies in a monetary union. In particular, in Beetsma and Bovenberg (1999) we showed that monetary unification raises debt accumulation, because in a monetary union countries only partly internalize the effects of their debt policies on future monetary policy. This additional debt accumulation is actually welfare-enhancing (if the governments share societal preferences). We showed that, in the absence of shocks, making the central bank sufficiently conservative (in the sense of Rogoff, 1985, that is, by imposing on the central bank a loss function that attaches a sufficiently high weight to price stability) can lead the economy to the second-best equilibrium. However, this is no longer the case in the presence of common shocks, as the economies are confronted with a trade-off between credibility and flexibility.
While Beetsma and Bovenberg (1999) emphasized the effects of lack of commitment in monetary policy, this paper introduces another complication in the form of strategic interactions between decentralized fiscal policymakers who have different
views on the stance of the common monetary policy.¹ These different views originate in differences among the economies in the monetary union. In particular, we allow for systematic differences in labour and product market distortions, public spending requirements and initial public debt levels. We also allow for idiosyncratic stochastic shocks hitting the countries. In combination with the decentralization of fiscal policy, these differences lead to conflicts about the preferred future stance of the common monetary policy. In particular, countries that suffer from severe distortions in labor and commodity markets, feature higher public spending or initial debt levels, or are hit by worse shocks prefer a laxer future stance of monetary policy. These conflicts about monetary policy induce individual governments to employ their debt policy strategically, so as to induce the union's central bank to move monetary policy in the direction they prefer. This strategic behavior imposes negative externalities on other countries, thereby producing welfare losses.
¹ Our earlier model incorporated another potential distortion: the possibility that governments discount the future at a higher rate than their societies do. We ignore this distortion throughout the current paper.
In contrast to Beetsma and Bovenberg (1999), we do not address the distortions in the model by making the common central bank sufficiently conservative. Instead, we focus on state-contingent inflation targets which, in contrast to a conservative central bank, can lead the economy to the second-best equilibrium if countries are identical. Hence, as stressed by Svensson (1997) in a model without fiscal policy and debt accumulation, inflation targets eliminate the standard credibility-flexibility trade-off. If fiscal policy is decentralized to heterogeneous countries, however, the optimal state-contingent inflation targets need to be complemented by (country-specific) debt targets to establish the second best. In this way, inflation targets address the lack of commitment in monetary policy, while debt targets eliminate strategic interaction among heterogeneous governments with different views about the common monetary policy stance.
The remainder of this paper is structured as follows. Section 2 presents the model. Section 3 discusses the second-best equilibrium, in which not only monetary but also fiscal policy is centralized and in which monetary policy is conducted under commitment. This is the second-best optimum that can be attained under monetary unification, assuming that the supranational authorities attach an equal weight to the preferences of each of the participating countries. Section 4 derives the equilibrium for the case of a common, discretionary monetary policy with decentralized fiscal policies. Section 5 explores institutional arrangements (i.e. inflation targets and public debt targets) that may alleviate the welfare losses arising from the lack of monetary policy commitment and the wasteful strategic interaction among the decentralized governments. Finally, Section 6 concludes the main body of this paper. The derivations are contained in the appendix.

2. The model
A monetary union, which is small relative to the rest of the world, is formed by n countries.² A common central bank (CCB) sets monetary policy for the entire union, while fiscal policy is determined at a decentralized, national level by the n governments. There are two periods.
² Monetary unification is taken as given. Hence, we do not explore the incentives of countries to join a monetary union.
in‡ation targets and public debt targets)that may alleviate the welfare losses arising from the lack of monetary policy commitment and the wasteful strategic interaction among the decentralized governments.Finally,Section6concludes the main body of this paper.The derivations are contained in the appendix.2.The modelA monetary union,which is small relative to the rest of the world,is formed by n countries.2A common central bank(CCB)sets monetary policy for the entire union,while…scal policy is determined at a decentralized,national level by the n governments.There are two periods.2Monetary uni…cation is taken as given.Hence,we do not explore the incentives of countries to join a monetary union.3Workers are represented by trade unions who aim for some target real wage rate(e.g.see Alesina and Tabellini,1987,and Jensen,1994).They set nominal wages so as to minimize the expected squared deviation of the realized real wage rate from this target.Monetary policy(i.e.,the in‡ation rate)is selected after nominal wages have been…xed.In each country,…rms face a standard production function with decreasing returns to scale in labour.Output in period t is taxed at a rate¿it.Therefore,output in country i in periods1and2,respectively,is given by3x i1=º(¼1¡¼e1¡¿i1)¡¹¡²i;(2.1)x i2=º(¼2¡¼e2¡¿i2);(2.2) where¹represents a common union-wide shock,while²i stands for an idiosyn-cratic shock that solely hits country i.¼et denotes the in‡ation rate for period texpected at the start of period t(that is,before period t shocks have materialized, but after period t¡1;t¡2;::shocks have hit).We assume that E[²i]=0;8i; E[¹]=0;E[²i²j]=0;8j=i;and that¹²´1P n i=1²i=0.4The variances of¹and ²i are given by¾2¹and¾2²,respectively.We abstract from shocks in the secondperiod,because they would not a¤ect debt accumulation.Each country features a social welfare function which is shared by the govern-ment of that country.Hence,governments are benevolent.In particular,the loss function of government i is de…ned over in‡ation,output and public spending:V S;i=12X t=1¯t¡1h®¼¼2t+(x it¡~x it)2+®g(g it¡~g it)2i;0<¯ 1;®¼;®g>0:(2.3)Welfare losses increase in the deviations of in‡ation,(log)output and government spending(g it is government spending as a share of output in the absence of distor-tions)from their targets(or…rst-best levels or“bliss points”).For convenience, the target level for in‡ation corresponds to price stability.The target level for output is denoted by~x it>0.Two distortions reduce output below this optimal level.First,the output tax¿it drives a wedge between the social and private bene…ts of additional output.Second,market power enables unions to drive the real wage above its level in the absence of distortions.Hence,even in the ab-sence of taxes,output is below the…rst-best output level~x it>0.The…rst-best 3Details on the derivations of these output equations can be found in Beetsma and Bovenberg (1999).4Without this assumption,the mean¹²of the²’s would play the same role as¹does.In the outcomes given below,¹would then be replaced by^¹´¹+¹².For convenience,we assume that ¹²=0.4level of government spending,~g it,can be interpreted as the optimal share of non-distortionary output to be spent on public goods if(non-distortionary)lump-sum taxes would be available(see Debelle and Fischer,1994).The target levels for output and government spending can di¤er across countries.Parameters®¼and ®g correspond to the weights of the price stability and government spending ob-jectives,respectively,relative to the weight of the output 
objective. Finally, β denotes society's subjective discount factor.

Government i's budget constraint can be approximated by (e.g., see Appendix A in Beetsma and Bovenberg, 1999):

g_it + (1+ρ) d_{i,t−1} = τ_it + χπ_t + d_it,   (2.4)

where d_{i,t−1} represents the amount of public debt carried over from the previous period into period t, while d_it stands for the amount of debt outstanding at the end of period t. All public debt is real, matures after one period, and is sold on the world capital market against a real rate of interest of ρ. This interest rate is exogenous because the countries making up the monetary union are small relative to the rest of the world.[5] τ_it and χ (a constant) stand for, respectively, distortionary tax revenue and real holdings of base money as shares of non-distortionary output. All countries share equally in the seigniorage revenues of the CCB, so that the seigniorage revenues accruing to country i amount to χπ_t.

[5] In the following, we will occasionally explore what happens when the number of union participants becomes infinitely large (i.e. n → ∞) in order to strengthen the intuition behind our results. In these exercises the real interest rate remains beyond the control of union-level policymakers.

We combine (2.4) with the expression for output, (2.1) or (2.2), to eliminate τ_it. The resulting equation can be rewritten to yield the government financing requirement of period t:

GFR_it = K̃_it + (1+ρ) d_{i,t−1} − d_it + δ_t (μ + ε_i)/ν
       = [(x̃_it − x_it)/ν] + χπ_t + (g̃_it − g_it) + (π_t − π_t^e),   (2.5)

where δ_t is an indicator function, such that δ_1 = 1 and δ_2 = 0, and where K̃_it ≡ g̃_it + x̃_it/ν. The government financing requirement, GFR_it, consists of three components. The first component, K̃_it, amounts to the government spending target, g̃_it, and an output subsidy aimed at offsetting the implicit output tax due to labor- or product-market distortions, x̃_it/ν. The second component involves net debt-servicing costs, (1+ρ) d_{i,t−1} − d_it. The final component (in period 1 only) is the stochastic shock (scaled by ν), (μ + ε_i)/ν. The last right-hand side of (2.5) represents the sources of finance: the shortfall (scaled by ν) of output from its target (henceforth referred to as the output gap), (x̃_it − x_it)/ν, seigniorage revenues, χπ_t, the shortfall of government spending from its target (henceforth referred to as the spending gap), g̃_it − g_it, and the inflation surprise, π_t − π_t^e.

All public debt is paid off at the end of the second period (d_i2 = 0, i = 1, ..., n). Under this assumption, while taking the discounted (to period one) sums of the left- and right-hand sides of (2.5) (t = 1, 2), we obtain the intertemporal government financing requirement:

IGFR_i = F̃_i + (μ + ε_i)/ν
       = Σ_{t=1}^{2} (1+ρ)^{−(t−1)} [ (x̃_it − x_it)/ν + χπ_t + (g̃_it − g_it) + (π_t − π_t^e) ],   (2.6)

where F̃_i ≡ K̃_i1 + (1+ρ) d_i0 + K̃_i2/(1+ρ) stands for the deterministic component of the intertemporal government financing requirement.

Monetary policy is delegated to a common central banker (CCB), who has direct control over the union's inflation rate. One could assume that the CCB has certain intrinsic preferences regarding the policy outcomes. Alternatively, and this is the interpretation we prefer, one could assume that the CCB is assigned a loss function by means of an appropriate contractual agreement. More specifically, this agreement shapes the CCB's incentives in such a way (by appropriately specifying its salary and other benefits – for example, possible reappointment – conditional on its performance) that it chooses to minimize the following loss function:

V_CCB = (1/2) Σ_{t=1}^{2} β^{t−1} { α_π (π_t − π_t*)² + (1/n) Σ_{i=1}^{n} [ (x_it − x̃_it)² + α_g (g_it − g̃_it)² ] },   (2.7)

where π_t* is the inflation target in period t, which may be different from the socially optimal inflation rate, which was set at zero.
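To make the bookkeeping in (2.1), (2.4) and (2.5) concrete, the following minimal Python sketch backs the tax rate out of the budget constraint, computes output, and checks that the two sides of the period-1 financing requirement coincide. All parameter values are illustrative assumptions, not numbers taken from the paper.

    # Illustrative (made-up) parameter values -- not calibrated to the paper.
    nu, chi, rho = 1.0, 0.1, 0.05          # output slope, base-money share, real interest rate
    x_tilde, g_tilde = 0.2, 0.25           # output and spending targets
    K_tilde = g_tilde + x_tilde / nu       # K~_it = g~_it + x~_it / nu

    def period1_accounts(pi, pi_e, d0, d1, g, mu, eps):
        """Given period-1 policies, back out taxes from the budget constraint (2.4)
        and output from (2.1), then return both sides of the financing identity (2.5)."""
        tau = g + (1 + rho) * d0 - chi * pi - d1          # budget constraint (2.4) solved for tau_i1
        x = nu * (pi - pi_e - tau) - mu - eps             # output equation (2.1)
        uses = K_tilde + (1 + rho) * d0 - d1 + (mu + eps) / nu
        sources = (x_tilde - x) / nu + chi * pi + (g_tilde - g) + (pi - pi_e)
        return uses, sources

    uses, sources = period1_accounts(pi=0.02, pi_e=0.02, d0=0.6, d1=0.55,
                                     g=0.24, mu=0.01, eps=-0.005)
    print(uses, sources)   # the two sides of (2.5) coincide

Because taxes are eliminated through (2.4), the equality of the two printed numbers is an identity rather than an equilibrium condition; it holds for any admissible policy choices.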
If π_1* = π_2* = 0, the CCB's objective function corresponds to an equally-weighted average of the individual societies' objective functions. We assume that π_2* is a linear function of d_i1, i = 1, ..., n. This linearity assumption suffices for our purposes: we will see later on that the optimal second-period inflation target is indeed a linear function of d_i1, i = 1, ..., n. The optimal first-period inflation target will be a function of d_i0, which is exogenous.

3. The second-best equilibrium

As a benchmark for the remainder of the analysis, we discuss the equilibrium resulting from centralized fiscal and monetary policies under commitment. Monetary policy is set by the CCB. Fiscal policy is conducted by a centralized fiscal authority, which minimizes:

V_U ≡ (1/n) Σ_{i=1}^{n} V_{S,i},   (3.1)

where the V_{S,i} are given by (2.3), i = 1, ..., n. Equation (3.1) assumes that countries have equal bargaining power as regards the fiscal policy decisions taken at the union level. Government spending is residually determined, so that the CCB, when it selects monetary policy, internalizes the government budget constraints. The resulting equilibrium is Pareto optimal. In the sequel, we refer to this equilibrium as the second-best equilibrium. In the absence of first-best policies (such as the use of lump-sum taxation and the elimination of product- and labor-market distortions), it is the equilibrium with the smallest possible welfare loss (3.1), given monetary unification. The derivation of the second-best equilibrium is contained in Appendix A.

3.1. Inflation, the output gap and the public spending gap

Table 1 contains the outcomes for inflation, the output gap,[6] x̃_it − x_it, and the spending gap, g̃_it − g_it. We write each of these outcomes as the sum of two deterministic and two stochastic components. F̃_i^Δ is the deviation of country i's deterministic component of its intertemporal government financing requirement from the cross-country average, defined by F̃. Formally, F̃ ≡ (1/n) Σ_{j=1}^{n} F̃_j and F̃_i^Δ ≡ F̃_i − F̃. The factor between square brackets in each of the entries of Table 1 makes clear how, within a given period, the government financing requirement is distributed over the financing sources (seigniorage, the output gap, the spending gap and an inflation surprise). Indeed, for each period these factors add up to unity, both across the deterministic and across the stochastic components. For example, for the first period one has:

[6] Throughout, we present the outcome for the output gap instead of the outcome for the tax rate. The reason is that, in contrast to the latter, the former directly enters the welfare loss functions.

[(x̃_i1 − x_i1)/ν] + χπ_1 + (g̃_i1 − g_i1) + (π_1 − π_1^e)
  = [ β*(1+ρ) / (1 + β*(1+ρ)) ] (F̃ + F̃_i^Δ) + [ β*(1+ρ)(P*/P) / (1 + β*(1+ρ)(P*/P)) ] (μ/ν + ε_i/ν)
  = K̃_i1 + (1+ρ) d_i0 − d_i1^S + (μ + ε_i)/ν,   (3.2)

where d_i1^S is the second-best debt level. The last equality can be checked by substituting (3.4)-(3.7) into (3.3) (all given below) and substituting the resulting expression into the last line of (3.2). For each of the outcomes, the terms that follow the factor in square brackets regulate the intertemporal allocation of the intertemporal government financing requirement.

The coefficients of the common stochastic shock μ/ν (in the fourth column of Table 1, γ_2) differ in two ways from the coefficients of the common deterministic component of the intertemporal government financing requirement F̃ (in the second column of Table 1, γ_0). The first difference is with respect to the first-period, intratemporal, allocation of the government financing requirement over the financing sources. The deterministic components of the government financing requirement are anticipated and thus correctly incorporated in expected inflation.
The common shock, in contrast, is unanticipated and, hence, not taken into account when inflation expectations are formed. The predetermination of the inflation expectation is exploited by the central policymakers so as to finance part of this common shock through an inflation surprise. Indeed, whereas the coefficient of π_1 − π_1^e is zero in the second column in Table 1, this coefficient is positive in the fourth column, indicating that part of the common shock is financed through an inflation surprise in the first period. With surprise inflation absorbing part of the common shock, the output gap and the spending gap have to absorb a smaller share of this shock. In the second period, the allocation over the financing sources for the stochastic component μ is the same as for the deterministic component F̃. The reason is that the first-period shock μ has materialized before second-period inflation expectations are formed. The effect of μ on the second-period outcomes will thus be perfectly anticipated. Indeed, the share of μ that is transmitted into the second period through debt policy becomes part of the deterministic component of the second-period government financing requirement (when viewed from the start of the second period).

The second way in which the coefficient of the stochastic shock μ differs from the coefficient of F̃ involves the intertemporal allocation of the government financing requirement. In particular, the share of μ absorbed in the first period (relative to the second period) is larger than that of F̃ (β*(P*/P) c_1 > β* c_0 and c_1 < c_0, where c_0 and c_1 are defined in Table 1). The reason is again that first-period inflation expectations are predetermined when the stochastic shock hits. This enables the policymakers to absorb a relatively large share of the stochastic shock in the first period through an inflation surprise.

The responses of the output and government spending gaps to F̃_i^Δ and ε_i differ from the responses to F̃ and μ. Since inflation is attuned to cross-country averages, it cannot respond to country-specific circumstances as captured by F̃_i^Δ and ε_i. Accordingly, taxes (the output gap) and the government spending gap have to fully absorb these country-specific components of the government financing requirements.

3.2. Public debt policy

The solution for debt accumulation in the second-best equilibrium can be written as:

d_i1^S = d̄_1^{e,S} + d_i1^{Δ,e,S} + d̄_1^{d,S} + d_i1^{δ,S},   (3.3)

where

d̄_1^{e,S} = { [K̃_1 + (1+ρ) d̄_0 − K̃_2] + (1 − β*) K̃_2 } / { 1 + β*(1+ρ) },   (3.4)

d_i1^{Δ,e,S} = { [K̃_i1^Δ + (1+ρ) d_i0^Δ − K̃_i2^Δ] + (1 − β*) K̃_i2^Δ } / { 1 + β*(1+ρ) },   n > 1,   (3.5)
            = 0,   n = 1,

d̄_1^{d,S} = [ 1 / (1 + β*(1+ρ)(P*/P)) ] (μ/ν),   (3.6)

d_i1^{δ,S} = [ 1 / (1 + β*(1+ρ)) ] (ε_i/ν),   n > 1,   (3.7)
           = 0,   n = 1,

where the superscript "S" stands for "second-best equilibrium", the superscript "e" denotes the expectation of a variable, an upper bar above a variable indicates its cross-country average (except for variables carrying a tilde, like K̃_1, where the cross-country average is indicated by dropping the country index), a superscript "Δ" denotes an idiosyncratic deviation of a deterministic variable from its cross-country average (for example, K̃_i1^Δ ≡ K̃_i1 − K̃_1), a superscript "d" denotes the response to a common shock, a superscript "δ" indicates the response to an idiosyncratic shock, and where

β* ≡ β(1+ρ),   P ≡ χ²/α_π + 1/ν² + 1/α_g,   P* ≡ (χ+1)²/α_π + 1/ν² + 1/α_g.   (3.8)

Hence, optimal debt accumulation (3.3) is the sum of two deterministic components and two stochastic components. The component d̄_1^{e,S} optimally distributes over time the absorption of the cross-country averages of the deterministic components of the government financing requirements. Therefore, it is common across countries. The country-specific components d_i1^{Δ,e,S} intertemporally distribute the idiosyncratic deterministic components of the government financing requirements.
The common (across countries) component d̄_1^{d,S} represents the optimal debt response to the common shock μ, while d_i1^{δ,S} stands for the optimal debt response to the country-specific shock, ε_i.

The debt response to the common shock is less active than the response to the idiosyncratic shock (since P*/P > 1). The common inflation rate can exploit the predetermination of inflation expectations only in responding to the common shock, because the common inflation rate can not be attuned to idiosyncratic shocks. Hence, the share of the common shock that can be absorbed in the first period can be larger than the corresponding share of the idiosyncratic shock. Public debt thus needs to respond less vigorously to the common shock.

4. Discretionary monetary policy with decentralized fiscal policy

This section introduces two distortions compared with the second-best equilibrium explored in the previous section. First, the CCB is no longer able to commit to monetary policy announcements. Second, fiscal policy is decentralized to individual governments, which may result in wasteful strategic interaction among heterogeneous governments.

From now on, the timing of events in each period is as follows. At the start of the period, the institutional parameters are set. That is, an inflation target is imposed on the CCB for the coming period and, if applicable, the debt targets on the individual governments are set. The inflation target may be conditioned on the state of the world. In particular, the inflation target may depend on the average debt level in the union.[7] Furthermore, the debt target, which represents the amount of public debt that a government has to carry over into the next period, may be shock-contingent.[8] After the institutional parameters have been set, inflation expectations are determined (through the nominal wage-setting process). Third, the shock(s) materialize. Fourth, taking inflation expectations as given, the CCB selects the common inflation rate and the fiscal authorities simultaneously select taxes and, in the absence of a debt target, public debt. Each of the players takes the other players' policies at this stage as given. Finally, public spending levels are residually determined. As a result, the CCB internalizes the effect of its policies on the government budget constraints.

[7] The optimal inflation target can either be optimally reset at the start of each period, or be determined according to a state-contingent rule selected at the beginning of the first period. These two alternative interpretations yield equivalent results.
[8] Debt at the end of the second period is restricted to be zero. Hence, the second period features a debt target of zero.

This section explores the outcomes under pure discretion, i.e. in the absence of both inflation targets (i.e., π_1* = π_2* = 0) and debt targets. The complete derivation of the equilibrium is contained in Appendix B. The suboptimality of the resulting equilibrium compared to the second best motivates the exploration of inflation and debt targets in Section 5.

4.1. Inflation, the output gap and the public spending gap

Table 2 contains the solutions for the inflation rate, the output gap and the spending gap. The main difference compared to the outcomes under the second-best equilibrium (see Table 1) is that, for a given amount of debt d_i1 to be carried over into the second period, expected first-period inflation (and, hence, seigniorage if χ > 0) will be higher (compare the term between the square brackets in the second column and the second row of Table 2 with the corresponding term in Table 1 and observe that [χ(χ+1)/α_π]/S > (χ²/α_π)/P, where S ≡ χ(χ+1)/α_π + 1/ν² + 1/α_g). The source of the higher expected inflation rate under pure discretion is the inability to commit to a stringent monetary policy, which yields the familiar inflation bias (Barro
and Gordon, 1983). The outcomes for inflation, the output gap and the spending gap deviate from the outcomes under the second-best equilibrium also because debt accumulation under pure discretion differs from debt accumulation under the second best. These differences are discussed below.

4.2. Public debt policy

Government i's debt can, analogous to (3.3), be written as:

d_i1^D = d̄_1^{e,D} + d_i1^{Δ,e,D} + d̄_1^{d,D} + d_i1^{δ,D},   (4.1)

where the superscript "D" is used to indicate the solution of the purely discretionary equilibrium with decentralized fiscal policies and where

d̄_1^{e,D} = { [K̃_1 + (1+ρ) d̄_0 − K̃_2] + [1 − β*(S*/S)] K̃_2 } / { 1 + β*(1+ρ)(S*/S) },   (4.2)

d_i1^{Δ,e,D} = { [K̃_i1^Δ + (1+ρ) d_i0^Δ − K̃_i2^Δ] + [1 − β*(Q/S)] K̃_i2^Δ } / { 1 + β*(1+ρ)(Q/S) },   if n > 1,   (4.3)
             = 0,   if n = 1,

d̄_1^{d,D} = [ 1 / (1 + β*(1+ρ)(S*/S)(P*/S)) ] (μ/ν),   (4.4)

d_i1^{δ,D} = [ 1 / (1 + β*(1+ρ)(Q/S)) ] (ε_i/ν),   if n > 1,   (4.5)
            = 0,   if n = 1,

and where

S ≡ χ(χ+1)/α_π + 1/ν² + 1/α_g,
S* ≡ χ(χ+1)/α_π + (χ+1)/(n α_π) + 1/ν² + 1/α_g,   (4.6)
Q ≡ [(n−1)/n][χ(χ+1)/α_π] + 1/ν² + 1/α_g.

4.2.1. Response to the common deterministic components of the government financing requirements

Positive analysis:

This subsection explores the solution for expected average debt d̄_1^{e,D} in (4.2). Whereas current inflation expectations are predetermined at the moment that debt is selected, future inflation expectations still need to be determined. A reduction in debt reduces the future government financing requirement and, thus, the tax rate in the future. This, in turn, weakens the CCB's incentive to raise future inflation in order to protect employment. Hence, by restraining debt accumulation, governments help to reduce future inflation expectations, which are endogenous from a first-period perspective. The reduction in future inflation expectations implies a lower inflation bias in the future. In other words, asset accumulation is an indirect way to enhance the commitment of a central bank to low future inflation.
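The comparison between the second-best and the discretionary debt responses can be illustrated numerically. The Python sketch below evaluates the composite parameters of (3.8) and (4.6) and the first-period debt responses to shocks, (3.6)-(3.7) versus (4.4)-(4.5), as transcribed above. All parameter values are illustrative assumptions rather than calibrated values.

    # Composite parameters from (3.8)/(4.6) and first-period debt responses to shocks.
    # Parameter values are made up for illustration; they are not from the paper.
    alpha_pi, alpha_g = 1.0, 0.5   # weights on inflation and spending objectives
    nu, chi = 1.0, 0.1             # output slope, base-money share
    beta, rho, n = 0.96, 0.05, 10  # discount factor, real interest rate, union size
    beta_star = beta * (1 + rho)

    P      = chi**2 / alpha_pi + 1 / nu**2 + 1 / alpha_g
    P_star = (chi + 1)**2 / alpha_pi + 1 / nu**2 + 1 / alpha_g
    S      = chi * (chi + 1) / alpha_pi + 1 / nu**2 + 1 / alpha_g
    S_star = S + (chi + 1) / (n * alpha_pi)
    Q      = ((n - 1) / n) * chi * (chi + 1) / alpha_pi + 1 / nu**2 + 1 / alpha_g

    mu, eps = 0.01, 0.01           # equal-sized common and idiosyncratic shocks

    # Second best: (3.6) and (3.7)
    d_common_S = mu / nu / (1 + beta_star * (1 + rho) * (P_star / P))
    d_idio_S   = eps / nu / (1 + beta_star * (1 + rho))

    # Pure discretion: (4.4) and (4.5)
    d_common_D = mu / nu / (1 + beta_star * (1 + rho) * (S_star / S) * (P_star / S))
    d_idio_D   = eps / nu / (1 + beta_star * (1 + rho) * (Q / S))

    print(f"P*/P = {P_star/P:.3f} > 1, so debt responds less to the common shock")
    print(f"second best : common {d_common_S:.4f}, idiosyncratic {d_idio_S:.4f}")
    print(f"discretion  : common {d_common_D:.4f}, idiosyncratic {d_idio_D:.4f}")

With these numbers the debt response to the common shock is indeed smaller than the response to the idiosyncratic shock in both regimes, in line with the discussion of P*/P > 1 above.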
What is Locus of Control?

Within psychology, Locus of Control is considered to be an important aspect of personality. The concept was developed originally by Julian Rotter in the 1950s (Rotter, 1966). Locus of Control refers to an individual's perception about the underlying main causes of events in his/her life. Or, more simply: do you believe that your destiny is controlled by yourself or by external forces (such as fate, god, or powerful others)?

The full name Rotter gave the construct was Locus of Control of Reinforcement. In giving it this name, Rotter was bridging behavioural and cognitive psychology. Rotter's view was that behaviour was largely guided by "reinforcements" (rewards and punishments) and that, through contingencies such as rewards and punishments, individuals come to hold beliefs about what causes their actions. These beliefs, in turn, guide what kinds of attitudes and behaviours people adopt. This understanding of Locus of Control is consistent, for example, with that of Philip Zimbardo (a famous psychologist): "A locus of control orientation is a belief about whether the outcomes of our actions are contingent on what we do (internal control orientation) or on events outside our personal control (external control orientation)." (Zimbardo, 1985, p. 275)

Thus, locus of control is conceptualised as referring to a unidimensional continuum, ranging from external to internal.

Is an internal locus of control desirable?

In general, it seems to be psychologically healthy to perceive that one has control over those things which one is capable of influencing. In simplistic terms, a more internal locus of control is generally seen as desirable. Having an internal locus of control can also be referred to as "self-agency", "personal control", "self-determination", etc. Research has found the following trends:

- Males tend to be more internal than females.
- As people get older they tend to become more internal.
- People higher up in organisational structures tend to be more internal (Mamlin, Harris, & Case, 2001).

However, it is important to warn against lapsing into the overly simplistic notion that internal is good and external is bad (two legs good, four legs bad?). There are important subtleties and complexities to be considered. For example:

- Internals can be psychologically unhealthy and unstable. An internal orientation usually needs to be matched by competence, self-efficacy and opportunity so that the person is able to successfully experience the sense of personal control and responsibility. Overly internal people who lack competence, efficacy and opportunity can become neurotic, anxious and depressed. In other words, internals need to have a realistic sense of their circle of influence in order to experience 'success'.
- Externals can lead easy-going, relaxed, happy lives.

Despite these cautions, psychological research has found that people with a more internal locus of control seem to be better off; for example, they tend to be more achievement oriented and to get better-paid jobs. However, thought regarding causality is needed here too: do environmental circumstances (such as privilege and disadvantage) cause LOC beliefs, or do the beliefs cause the situation?

Sometimes Locus of Control is seen as a stable, underlying personality construct, but this may be misleading, since theory and research indicate that locus of control is largely learned. There is evidence that, at least to some extent, LOC is a response to circumstances.
Some psychological and educational interventions have been found to produce shifts towards internal locus of control (e.g., outdoor education programs; Hans, 2000; Hattie, Marsh, Neill & Richards, 1997).
arXiv:quant-ph/0105127v3 19 Jun 2003

DECOHERENCE, EINSELECTION, AND THE QUANTUM ORIGINS OF THE CLASSICAL

Wojciech Hubert Zurek
Theory Division, LANL, Mail Stop B288, Los Alamos, New Mexico 87545

Decoherence is caused by the interaction with the environment which in effect monitors certain observables of the system, destroying coherence between the pointer states corresponding to their eigenvalues. This leads to environment-induced superselection or einselection, a quantum process associated with selective loss of information. Einselected pointer states are stable. They can retain correlations with the rest of the Universe in spite of the environment. Einselection enforces classicality by imposing an effective ban on the vast majority of the Hilbert space, eliminating especially the flagrantly non-local "Schrödinger cat" states. Classical structure of phase space emerges from the quantum Hilbert space in the appropriate macroscopic limit: Combination of einselection with dynamics leads to the idealizations of a point and of a classical trajectory. In measurements, einselection replaces quantum entanglement between the apparatus and the measured system with the classical correlation. Only the preferred pointer observable of the apparatus can store information that has predictive power. When the measured quantum system is microscopic and isolated, this restriction on the predictive utility of its correlations with the macroscopic apparatus results in the effective "collapse of the wavepacket". Existential interpretation implied by einselection regards observers as open quantum systems, distinguished only by their ability to acquire, store, and process information. Spreading of the correlations with the effectively classical pointer states throughout the environment allows one to understand 'classical reality' as a property based on the relatively objective existence of the einselected states: They can be "found out" without being re-prepared, e.g., by intercepting the information already present in the environment. The redundancy of the records of pointer states in the environment (which can be thought of as their 'fitness' in the Darwinian sense) is a measure of their classicality. A new symmetry appears in this setting: Environment-assisted invariance or envariance sheds a new light on the nature of ignorance of the state of the system due to quantum correlations with the environment, and leads to Born's rules and to the reduced density matrices, ultimately justifying basic principles of the program of decoherence and einselection.

Contents

I. INTRODUCTION
   A. The problem: Hilbert space is big
      1. Copenhagen Interpretation
      2. Many Worlds Interpretation
   B. Decoherence and einselection
   C. The nature of the resolution and the role of envariance
   D. Existential Interpretation and 'Quantum Darwinism'
II. QUANTUM MEASUREMENTS
   A. Quantum conditional dynamics
      1. Controlled not and a bit-by-bit measurement
      2. Measurements and controlled shifts
      3. Amplification
   B. Information transfer in measurements
      1. Action per bit
   C. "Collapse" analogue in a classical measurement
III. CHAOS AND LOSS OF CORRESPONDENCE
   A. Loss of the quantum-classical correspondence
   B. Moyal bracket and Liouville flow
   C. Symptoms of correspondence loss
      1. Expectation values
      2. Structure saturation
IV. ENVIRONMENT-INDUCED SUPERSELECTION
   A. Models of einselection
      1. Decoherence of a single qubit
      2. The classical domain and a quantum halo
      3. Einselection and controlled shifts
   B. Einselection as the selective loss of information
      1. Mutual information and discord
   C. Decoherence, entanglement, dephasing, and noise
   D. Predictability sieve and einselection
V. EINSELECTION IN PHASE SPACE
   A. Quantum Brownian motion
   B. Decoherence in quantum Brownian motion
      1. Decoherence timescale
      2. Phase space view of decoherence
   C. Predictability sieve in phase space
   D. Classical limit in phase space
      1. Mathematical approach (ħ → 0)
      2. Physical approach: The macroscopic limit
      3. Ignorance inspires confidence in classicality
   E. Decoherence, chaos, and the Second Law
      1. Restoration of correspondence
      2. Entropy production
      3. Quantum predictability horizon
VI. EINSELECTION AND MEASUREMENTS
   A. Objective existence of einselected states
   B. Measurements and memories
   C. Axioms of quantum measurement theory
      1. Observables are Hermitean – axiom (iiia)
      2. Eigenvalues as outcomes – axiom (iiib)
      3. Immediate repeatability, axiom (iv)
      4. Probabilities, einselection and records
Within the past two decades the focus of the re-search on the fundamental aspects of quantum theory has shifted from esoteric and philosophical to more“down to earth”as a result of three developments.To begin with, many of the old gedankenexperiments(such as the EPR “paradox”)became compelling demonstrations of quan-tum physics.More or less simultaneously the role of de-coherence begun to be appreciated and einselection was recognized as key in the emergence of st not least,various developments have led to a new view of the role of information in physics.This paper reviews progress with a focus on decoherence,einselection and the emergence of classicality,but also attempts a“pre-view”of the future of this exciting and fundamental area.A.The problem:Hilbert space is bigThe interpretation problem stems from the vastness of the Hilbert space,which,by the principle of superposi-tion,admits arbitrary linear combinations of any states as a possible quantum state.This law,thoroughly tested in the microscopic domain,bears consequences that defy classical intuition:It appears to imply that the familiar classical states should be an exceedingly rare exception. And,naively,one may guess that superposition principle should always apply literally:Everything is ultimately made out of quantum“stuff”.Therefore,there is no a priori reason for macroscopic objects to have definite position or momentum.As Einstein noted1localization with respect to macrocoordinates is not just independent, but incompatible with quantum theory.How can one then establish correspondence between the quantum and the familiar classical reality?1.Copenhagen InterpretationBohr’s solution was to draw a border between the quantum and the classical and to keep certain objects–especially measuring devices and observers–on the clas-sical side(Bohr,1928;1949).The principle of superposi-tion was suspended“by decree”in the classical domain. The exact location of this border was difficult to pinpoint, but measurements“brought to a close”quantum events. 
Indeed,in Bohr’s view the classical domain was more fundamental:Its laws were self-contained(they could be confirmed from within)and established the framework necessary to define the quantum.Thefirst breach in the quantum-classical border ap-peared early:In the famous Bohr–Einstein double-slit debate,quantum Heisenberg uncertainty was invoked by Bohr at the macroscopic level to preserve wave-particle duality.Indeed,as the ultimate components of classical objects are quantum,Bohr emphasized that the bound-ary must be moveable,so that even the human nervous system could be regarded as quantum provided that suit-able classical devices to detect its quantum features were available.In the words of John Archibald Wheeler(1978; 1983)who has elucidated Bohr’s position and decisively contributed to the revival of interest in these matters,“No[quantum]phenomenon is a phenomenon until it is a recorded(observed)phenomenon”.3 This is a pithy summary of a point of view–known asthe Copenhagen Interpretation(CI)–that has kept manya physicist out of despair.On the other hand,as long as acompelling reason for the quantum-classical border couldnot be found,the CI Universe would be governed by twosets of laws,with poorly defined domains of jurisdiction.This fact has kept many a student,not to mention theirteachers,in despair(Mermin1990a;b;1994).2.Many Worlds InterpretationThe approach proposed by Hugh Everett(1957a,b)and elucidated by Wheeler(1957),Bryce DeWitt(1970)and others(see DeWitt and Graham,1973;Zeh,1970;1973;Geroch,1984;Deutsch,1985,1997,2001)was toenlarge the quantum domain.Everything is now repre-sented by a unitarily evolving state vector,a gigantic su-perposition splitting to accommodate all the alternativesconsistent with the initial conditions.This is the essenceof the Many Worlds Interpretation(MWI).It does notsuffer from the dual nature of CI.However,it also doesnot explain the emergence of classical reality.The difficulty many have in accepting MWI stems fromits violation of the intuitively obvious“conservation law”–that there is just one Universe,the one we perceive.But even after this question is dealt with,,many a con-vert from CI(which claims allegiance of a majority ofphysicists)to MWI(which has steadily gained popular-ity;see Tegmark and Wheeler,2001,for an assessment)eventually realizes that the original MWI does not ad-dress the“preferred basis question”posed by Einstein1(see Wheeler,1983;Stein,1984;Bell1981,1987;Kent,1990;for critical assessments of MWI).And as long asit is unclear what singles out preferred states,perceptionof a unique outcome of a measurement and,hence,of asingle Universe cannot be explained either2.In essence,Many Worlds Interpretation does not ad-dress but only postpones the key question.The quantum-classical boundary is pushed all the way towards theobserver,right against the border between the materialUniverse and the“consciousness”,leaving it at a veryuncomfortable place to do physics.MWI is incomplete:It does not explain what is effectively classical and why.Nevertheless,it was a crucial conceptual breakthrough:4ment)can be in principle carried out without disturbing the system.Only in quantum mechanics acquisition of information inevitably alters the state of the system–the fact that becomes apparent in double-slit and related experiments(Wootters and Zurek,1979;Zurek,1983). 
Quantum nature of decoherence and the absence of classical analogues are a source of misconceptions.For instance,decoherence is sometimes equated with relax-ation or classical noise that can be also introduced by the environment.Indeed,all of these effects often ap-pear together and as a consequence of the“openness”. The distinction between them can be briefly summed up: Relaxation and noise are caused by the environment per-turbing the system,while decoherence and einselection are caused by the system perturbing the environment. Within the past few years decoherence and einselection became familiar to many.This does not mean that their implications are universally accepted(see comments in the April1993issue of Physics Today;d’Espagnat,1989 and1995;Bub,1997;Leggett,1998and2002;Stapp, 2001;exchange of views between Anderson,2001,and Adler,2001).In afield where controversy reigned for so long this resistance to a new paradigm is no surprise. C.The nature of the resolution and the role of envariance Our aim is to explain why does the quantum Universe appear classical.This question can be motivated only in the context of the Universe divided into systems,and must be phrased in the language of the correlations be-tween them.In the absence of systems Schr¨o dinger equa-tion dictates deterministic evolution;|Ψ(t) =exp(−iHt/¯h)|Ψ(0) ,(1.1) and the problem of interpretation seems to disappear. There is no need for“collapse”in a Universe with no systems.Yet,the division into systems is imperfect.As a consequence,the Universe is a collection of open(in-teracting)quantum systems.As the interpretation prob-lem does not arise in quantum theory unless interacting systems exist,we shall also feel free to assume that an environment exists when looking for a resolution. 
Decoherence and einselectionfit comfortably in the context of the Many Worlds Interpretation where they define the“branches”of the universal state vector.De-coherence makes MWI complete:It allows one to ana-lyze the Universe as it is seen by an observer,who is also subject to decoherence.Einselection justifies elements of Bohr’s CI by drawing the border between the quan-tum and the classical.This natural boundary can be sometimes shifted:Its effectiveness depends on the de-gree of isolation and on the manner in which the system is probed,but it is a very effective quantum-classical border nevertheless.Einselectionfits either MWI or CI framework:It sup-plies a statute of limitations,putting an end to the quantum jurisdiction..It delineates how much of the Universe will appear classical to observers who monitor it from within,using their limited capacity to acquire, store,and process information.It allows one to under-stand classicality as an idealization that holds in the limit of macroscopic open quantum systems.Environment imposes superselection rules by preserv-ing part of the information that resides in the correlations between the system and the measuring apparatus(Zurek, 1981,1982).The observer and the environment compete for the information about the system.Environment–because of its size and its incessant interaction with the system–wins that competition,acquiring information faster and more completely than the observer.Thus,a record useful for the purpose of prediction must be re-stricted to the observables that are already monitored by the environment.In that case,the observer and the en-vironment no longer compete and decoherence becomes unnoticeable.Indeed,typically observers use environ-ment as a“communication channel”,and monitor it to find out about the system.Spreading of the information about the system through the environment is ultimately responsible for the emer-gence of the“objective reality”.Objectivity of a state can be quantified by the redundancy with which it is recorded throughout Universe.Intercepting fragments of the environment allows observers tofind out(pointer) state of the system without perturbing it(Zurek,1993a, 1998a,and2000;see especially section VII of this pa-per for a preview of this new“environment as a witness”approach to the interpretation of quantum theory). When an effect of a transformation acting on a system can be undone by a suitable transformation acting on the environment,so that the joint state of the two remains unchanged,the transformed property of the system is said to exhibit“environment assisted invariance”or en-variance(Zurek,2002b).Observer must be obviously ignorant of the envariant properties of the system.Pure entangled states exhibit envariance.Thus,in quantum physics perfect information about the joint state of the system-environment pair can be used to prove ignorance of the state of the system.Envariance offers a new fundamental view of what is information and what is ignorance in the quantum world. 
It leads to Born’s rule for the probabilities and justifies the use of reduced density matrices as a description of a part of a larger combined system.Decoherence and ein-selection rely on reduced density matrices.Envariance provides a fundamental resolution of many of the inter-pretational issues.It will be discussed in section VI D.D.Existential Interpretation and‘Quantum Darwinism’What the observer knows is inseparable from what the observer is:The physical state of his memory implies his information about the Universe.Its reliability de-pends on the stability of the correlations with the exter-nal observables.In this very immediate sense decoher-5ence enforces the apparent“collapse of the wavepacket”: After a decoherence timescale,only the einselected mem-ory states will exist and retain useful correlations(Zurek, 1991;1998a,b;Tegmark,2000).The observer described by some specific einselected state(including a configu-ration of memory bits)will be able to access(“recall”) only that state.The collapse is a consequence of einse-lection and of the one-to-one correspondence between the state of his memory and of the information encoded in it. Memory is simultaneously a description of the recorded information and a part of the“identity tag”,defining observer as a physical system.It is as inconsistent to imagine observer perceiving something else than what is implied by the stable(einselected)records in his posses-sion as it is impossible to imagine the same person with a different DNA:Both cases involve information encoded in a state of a system inextricably linked with the physical identity of an individual.Distinct memory/identity states of the observer(that are also his“states of knowledge”)cannot be superposed: This censorship is strictly enforced by decoherence and the resulting einselection.Distinct memory states label and“inhabit”different branches of the Everett’s“Many Worlds”Universe.Persistence of correlations is all that is needed to recover“familiar reality”.In this manner,the distinction between epistemology and ontology is washed away:To put it succinctly(Zurek,1994)there can be no information without representation in physical states. 
There is usually no need to trace the collapse all the way to observer’s memory.It suffices that the states of a decohering system quickly evolve into mix-tures of the preferred(pointer)states.All that can be known in principle about a system(or about an observer, also introspectively,e.g.,by the observer himself)is its decoherence-resistant‘identity tag’–a description of its einselected state.Apart from this essentially negative function of a cen-sor the environment plays also a very different role of a“broadcasting agent”,relentlessly cloning the informa-tion about the einselected pointer states.This role of the environment as a witness in determining what exists was not appreciated until now:Throughout the past two decades,study of decoherence focused on the effect of the environment on the system.This has led to a mul-titude of technical advances we shall review,but it has also missed one crucial point of paramount conceptual importance:Observers monitor systems indirectly,by in-tercepting small fractions of their environments(e.g.,a fraction of the photons that have been reflected or emit-ted by the object of interest).Thus,if the understand-ing of why we perceive quantum Universe as classical is the principal aim,study of the nature of accessibil-ity of information spread throughout the environment should be the focus of attention.This leads one away from the models of measurement inspired by the“von Neumann chain”(1932)to studies of information trans-fer involving branching out conditional dynamics and the resulting“fan-out”of the information throughout envi-ronment(Zurek,1983,1998a,2000).This new‘quantum Darwinism’view of environment selectively amplifying einselected pointer observables of the systems of interest is complementary to the usual image of the environment as the source of perturbations that destroy quantum co-herence of the system.It suggests the redundancy of the imprint of the system in the environment may be a quantitative measure of relative objectivity and hence of classicality of quantum states.It is introduced in Sec-tions VI and VII of this review.Benefits of recognition of the role of environment in-clude not just operational definition of the objective exis-tence of the einselected states,but–as is also detailed in Section VI–a clarification of the connection between the quantum amplitudes and probabilities.Einselection con-verts arbitrary states into mixtures of well defined possi-bilities.Phases are envariant:Appreciation of envariance as a symmetry tied to the ignorance about the state of the system was the missing ingredient in the attempts of ‘no collapse’derivations of Born’s rule and in the prob-ability interpretation.While both envariance and the “environment as a witness”point of view are only begin-ning to be investigated,the extension of the program of einselection they offer allowes one to understand emer-gence of“classical reality”form the quantum substrate as a consequence of quantum laws.II.QUANTUM MEASUREMENTSThe need for a transition from quantum determinism of the global state vector to classical definiteness of states of individual systems is traditionally illustrated by the example of quantum measurements.An outcome of a “generic”measurement of the state of a quantum sys-tem is not deterministic.In the textbook discussions this random element is blamed on the“collapse of the wavepacket”,invoked whenever a quantum system comes into contact with a classical apparatus.In a fully quan-tum discussion this issue still arises,in spite(or rather 
because) of the overall deterministic quantum evolution of the state vector of the Universe: As pointed out by von Neumann (1932), there is no room for a 'real collapse' in the purely unitary models of measurements.

A. Quantum conditional dynamics

To illustrate the ensuing difficulties, consider a quantum system S initially in a state |ψ⟩ interacting with a quantum apparatus A initially in a state |A_0⟩:

|Ψ_0⟩ = |ψ⟩|A_0⟩ = Σ_i a_i |s_i⟩|A_0⟩ → Σ_i a_i |s_i⟩|A_i⟩ = |Ψ_t⟩.   (2.1)

Above, {|A_i⟩} and {|s_i⟩} are states in the Hilbert spaces of the apparatus and of the system, respectively, and a_i are complex coefficients. Conditional dynamics of such premeasurement (as the step achieved by Eq. (2.1) is often called) can be accomplished by means of a unitary Schrödinger evolution. Yet it is not enough to claim that a measurement has been achieved: Equation (2.1) leads to an uncomfortable conclusion: |Ψ_t⟩ is an EPR-like entangled state. Operationally, this EPR nature of the state emerging from the premeasurement can be made more explicit by re-writing the sum in a different basis:

|Ψ_t⟩ = Σ_i a_i |s_i⟩|A_i⟩ = Σ_i b_i |r_i⟩|B_i⟩.   (2.2)

This freedom of basis choice – basis ambiguity – is guaranteed by the principle of superposition. Therefore, if one were to associate states of the apparatus (or the observer) with decompositions of |Ψ_t⟩, then even before enquiring about the specific outcome of the measurement one would have to decide on the decomposition of |Ψ_t⟩; the change of the basis redefines the measured quantity.

1. Controlled not and a bit-by-bit measurement

The interaction required to entangle the measured system and the apparatus, Eq. (2.1), is a generalization of the basic logical operation known as a "controlled not" or a c-not. Classical c-not changes the state a_t of the target when the control is 1, and does nothing otherwise:

0_c a_t → 0_c a_t ;   1_c a_t → 1_c ¬a_t.   (2.3)

Quantum c-not is a straightforward quantum version of Eq. (2.3). It was known as a "bit by bit measurement" (Zurek, 1981; 1983) and used to elucidate the connection between entanglement and premeasurement already before it acquired its present name and significance in the context of quantum computation (see e.g. Nielsen and Chuang, 2000). Arbitrary superpositions of the control bit and of the target bit states are allowed:

(α|0_c⟩ + β|1_c⟩)|a_t⟩ → α|0_c⟩|a_t⟩ + β|1_c⟩|¬a_t⟩.   (2.4)

Above, "negation" |¬a_t⟩ of a state is basis dependent;

¬(γ|0_t⟩ + δ|1_t⟩) = γ|1_t⟩ + δ|0_t⟩.   (2.5)

With |A_0⟩ = |0_t⟩, |A_1⟩ = |1_t⟩ we have an obvious analogy between the c-not and a premeasurement.

In the classical controlled not the direction of information transfer is consistent with the designations of the two bits: The state of the control remains unchanged while it influences the target, Eq. (2.3). Classical measurement need not influence the system. Written in the logical basis {|0⟩, |1⟩}, the truth table of the quantum c-not is essentially – that is, save for the possibility of superpositions – the same as Eq. (2.3). One might have anticipated that the direction of information transfer and the designations ("control/system" and "target/apparatus") of the two qubits will be also unambiguous, as in the classical case. This expectation is incorrect. In the conjugate basis {|+⟩, |−⟩} defined by:

|±⟩ = (|0⟩ ± |1⟩)/√2,   (2.6)

[...] the interaction Hamiltonian is:

H_int = g |1⟩⟨1|_S ⊗ (1 − (|0⟩⟨1| + |1⟩⟨0|))_A.   (2.9)

Above, g is a coupling constant, and the two operators refer to the system (i.e., to the former control), and to the apparatus pointer (the former target), respectively.
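A minimal numerical sketch of this bit-by-bit premeasurement may be helpful: the c-not of Eq. (2.4), applied to a system in an arbitrary superposition and an apparatus in |A_0⟩, produces exactly the entangled state of Eq. (2.1), and tracing out the system leaves the apparatus in a mixture of pointer states. The amplitudes below are arbitrary illustrative values.

    import numpy as np

    # The system starts in alpha|0> + beta|1>, the apparatus in |0>.
    alpha, beta = 0.6, 0.8                      # arbitrary amplitudes with |alpha|^2 + |beta|^2 = 1
    system = np.array([alpha, beta])
    apparatus = np.array([1.0, 0.0])

    # c-not with the system as control and the apparatus as target,
    # in the basis ordering |s a> = |00>, |01>, |10>, |11>.
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    state = cnot @ np.kron(system, apparatus)
    print(state)   # [0.6, 0, 0, 0.8] -> alpha|0>|A0> + beta|1>|A1>, the entangled state of Eq. (2.1)

    # The reduced state of the apparatus is diagonal but mixed: taken alone, it
    # no longer carries the phase information of the original superposition.
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    rho_apparatus = np.trace(rho, axis1=0, axis2=2)
    print(rho_apparatus)   # diag(|alpha|^2, |beta|^2)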
It is easy to see that the states {|0⟩, |1⟩}_S of the system are unaffected by H_int, since

[H_int, e_0 |0⟩⟨0|_S + e_1 |1⟩⟨1|_S] = 0.   (2.10)

The measured (control) observable ε̂ = e_0 |0⟩⟨0| + e_1 |1⟩⟨1| is a constant of motion under H_int. c-not requires interaction time t such that gt = π/2. The states {|+⟩, |−⟩}_A of the apparatus encode the information about phase between the logical states. They have exactly the same "immunity":

[H_int, f_+ |+⟩⟨+|_A + f_− |−⟩⟨−|_A] = 0.   (2.11)

Hence, when the apparatus is prepared in a definite phase state (rather than in a definite pointer/logical state), it will pass on its phase onto the system, as Eqs. (2.7)-(2.8) show. Indeed, H_int can be written as:

H_int = g |1⟩⟨1|_S |−⟩⟨−|_A = g ...   (2.12)

[...] phases between the possible outcome states of the apparatus. This leads to loss of phase coherence: Phases become "shared property", as we shall see in more detail in the discussion of envariance. The question "what measures what?" (decided by the direction of the information flow) depends on the initial states. In "the classical practice" this ambiguity does not arise. Einselection limits the set of possible states of the apparatus to a small subset.

2. Measurements and controlled shifts

The truth table of a whole class of c-not-like transformations that includes general premeasurement, Eq. (2.1), can be written as:

|s_j⟩|A_k⟩ → |s_j⟩|A_{k+j}⟩.   (2.13)

Equation (2.1) follows when k = 0. One can therefore model measurements as controlled shifts – c-shifts – generalizations of the c-not. In the bases {|s_j⟩} and {|A_k⟩}, the direction of the information flow appears to be unambiguous – from the system S to the apparatus A. However, a complementary basis can be readily defined (Ivanovic, 1981; Wootters and Fields, 1989):

|B_k⟩ = N^{−1/2} Σ_{l=0}^{N−1} exp(i 2π kl / N) |A_l⟩.   (2.14a)

Above, N is the dimensionality of the Hilbert space. An analogous transformation can be carried out on the basis {|s_i⟩} of the system, yielding states {|r_j⟩}. Orthogonality of {|A_k⟩} implies:

⟨B_l|B_m⟩ = δ_lm.   (2.15)

|A_k⟩ = N^{−1/2} Σ_{l=0}^{N−1} exp(−i 2π kl / N) |B_l⟩   (2.14b)

inverts the transformation of Eq. (2.14a). Hence:

|ψ⟩ = Σ_l α_l |A_l⟩ = Σ_k β_k |B_k⟩,   (2.16)

where the coefficients β_k are:

β_k = N^{−1/2} Σ_{l=0}^{N−1} exp(i 2π kl / N) α_l.   (2.17)

The Hadamard transform of Eq. (2.6) is a special case of the more general transformation considered here. To implement the truth tables involved in premeasurements we define observable Â and its conjugate:

Â = Σ_{k=0}^{N−1} k |A_k⟩⟨A_k| ;   B̂ = Σ_{l=0}^{N−1} l |B_l⟩⟨B_l|.   (2.18a,b)

The interaction Hamiltonian:

H_int = g ŝ B̂   (2.19)

is an obvious generalization of Eqs. (2.9) and (2.12), with g the coupling strength and ŝ:

ŝ = Σ_{l=0}^{N−1} l |s_l⟩⟨s_l|.   (2.20)

In the {|A_k⟩} basis B̂ is a shift operator,

B̂ = i N ∂_Â.   (2.21)

To show how H_int works, we compute:

exp(−i H_int t/ħ) |s_j⟩|A_k⟩ = |s_j⟩ N^{−1/2} ...
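The controlled-shift construction of Eqs. (2.13)-(2.15) is easy to check numerically. The sketch below builds the conjugate (Fourier) basis of Eq. (2.14a), verifies its orthonormality, and applies the c-shift to a product state, reproducing a premeasurement of the form (2.1); the dimension and amplitudes are illustrative choices.

    import numpy as np

    N = 4  # Hilbert-space dimension of both system and apparatus (illustrative)

    # Conjugate basis of Eq. (2.14a): |B_k> = N**-0.5 * sum_l exp(2j*pi*k*l/N) |A_l>.
    k, l = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    B = np.exp(2j * np.pi * k * l / N) / np.sqrt(N)       # row k holds |B_k> in the A basis
    print(np.allclose(B @ B.conj().T, np.eye(N)))         # True: Eq. (2.15), <B_l|B_m> = delta_lm

    # Controlled shift of Eq. (2.13): U |s_j>|A_k> = |s_j>|A_(k+j mod N)>.
    U = np.zeros((N * N, N * N))
    for j in range(N):
        for kk in range(N):
            U[j * N + (kk + j) % N, j * N + kk] = 1.0

    # A premeasurement in the sense of Eq. (2.1): start the apparatus in |A_0>
    # and the system in an arbitrary superposition; the c-shift correlates them.
    amps = np.array([0.5, 0.5, 0.5, 0.5])                 # illustrative system amplitudes
    initial = np.kron(amps, np.eye(N)[0])                 # sum_j a_j |s_j>|A_0>
    final = U @ initial
    print(final.reshape(N, N).round(2))                   # a_j on the (j, j) entries: sum_j a_j |s_j>|A_j>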
The U.S. recession

Economic downturn (recession): by U.S. standards, a recession is a significant decline in total output, income and employment lasting from six months to a year, accompanied by a general contraction across many sectors of the economy. A sustained and more severe downturn becomes a depression. Keynes believed that a reduction in aggregate demand for goods is the main cause of recessions.

The reasons for the U.S. economic recession

In the second half of 2000, the U.S. economy, which had grown rapidly for ten years since the end of March 1991, entered a period of slow growth. Growth declined quarter by quarter in 2001, and the sudden "9.11" attacks dealt a serious blow to consumer and investor confidence, accelerating the decline. In the third quarter of 2001 the U.S. economy contracted by 1.3%. The U.S. government took a series of policy measures to rescue the economy, including a $40 billion emergency anti-terrorism and economic reconstruction plan, $15 billion in assistance to the aviation and insurance industries, further cuts in the federal funds rate (the Fed lowered rates eleven times in 2001), and a proposed $100 billion economic stimulus package, among other programs. After these repeated interventions the economy showed signs of reviving: GDP stabilized slightly in the fourth quarter of 2001, with growth reaching 1.4%, and the indicators seemed to point to a bottoming out. Whether the U.S. economy was indeed recovering, and how strong the recovery would be, remained controversial.

The main reasons for the U.S. economic recession

Overall, the U.S. recession was the result of cyclical adjustment: after ten years of rapid growth the economy needed structural repair. Having emerged from recession in 1991, the U.S. economy expanded rapidly for a decade, but the internal laws of the business cycle required an appropriate adjustment, so a cyclical downturn centered on the high-tech industries during this industrial restructuring was, in a sense, necessary.

1. The "new economy" does not eliminate cyclical fluctuations and itself needs structural adjustment.

Although the "new economy" features high growth, low inflation, globalization and speed-oriented development, it was born out of, and coexists with, the traditional economy, so the traditional laws of the business cycle still apply. Moreover, information technology and networks tie sectors more closely together, so adjustments are transmitted from one sector to another more quickly, and overall growth is more sensitive to such changes than in the past. This amplification of economic fluctuations by the "new economy" is one of the main reasons the U.S. economy slid into recession.

As the information-industry investment boom spread, the United States absorbed large amounts of foreign capital from international capital markets in the late 1990s, fueling a large-scale expansion of the U.S. stock market. Foreign direct investment and portfolio investment in the United States (excluding U.S. Treasury bonds) amounted to $607 billion in 1999 and rose to $782.4 billion in 2000.
Because of information asymmetry and herd effects, investment demand over-expanded and there was a great deal of duplication. Once the economy fluctuated, the excess supply created by technology investment became immediately apparent, and the direct consequence was that large numbers of Internet companies closed down. According to a survey of U.S. Internet company mergers, by the end of 2000 about 210 listed network companies had ceased operating, roughly 60% of all listed network companies.

2. The stock market bubble burst and the wealth effect turned negative, so that over-consumption dropped off abruptly.

According to estimates by the U.S. Internal Revenue Service, during the 1995-1999 bull market capital gains grew at an average annual rate of 34%, adding $1.7 trillion to residents' income. The stock-market wealth effect stimulated consumption as never before, contributing roughly one percentage point a year to economic growth. After 1999, however, the stock market slump shrank financial assets dramatically, producing a negative wealth effect. At the same time, as the economy weakened, many companies laid off employees or cut salaries, and consumer incomes fell significantly.

3. Rising oil prices helped fuel the slowdown.

Imports now account for about 54% of U.S. oil consumption, and international oil prices rose from less than $10 per barrel at the end of 1998 to $14.9 per barrel by September 2000, significantly raising business operating costs and consumer energy spending and severely inhibiting growth. The American Manufacturers Association estimated that between 1999 and 2000 higher oil prices cost the U.S. economy more than $115 billion, equivalent to about one percentage point of U.S. GDP; higher oil prices thus had a clear negative impact on the U.S. economy.

4. Three unresolved problems remain.

A huge foreign trade deficit, the strong dollar, and negative growth in personal savings are the three main problems troubling the U.S. economy. U.S. growth relies mainly on domestic demand, so in general a huge trade deficit has little negative impact on the economy; but when domestic demand drops sharply and the economy turns down, the deficit accelerates the recession. A strong dollar hurts exports and further widens the trade deficit. Negative growth in personal savings reduces funds in the money market, and with the stock market, and the capital market in particular, falling, the currency market cannot play its proper role, which to some extent affects economic development.

5. "9.11" deepened the recession, with both direct and potential impacts on the U.S. economy.

Before the "9.11" incident the slowdown was largely confined to the information industry and related sectors, while the service sector and aviation, which account for a large share of U.S. GDP, were still holding up. It was precisely the strongholds of the U.S. economy – aviation, insurance, finance, tourism and commerce – that "9.11" hit hardest. Moreover, compared with a natural disaster, "9.11" had a more far-reaching psychological impact, seriously weakening Americans' sense of economic and political security.
The U.S. Federal Reserve System (the Fed) performs the duties of the central bank of the United States; it was established in 1913 under the Federal Reserve Act. Its main responsibilities are: (1) formulating and implementing monetary policy; (2) supervising banking institutions and credit, and protecting consumers' legitimate rights; (3) maintaining the stability of the financial system; and (4) providing reliable financial services to the U.S. government, the public, financial institutions and foreign institutions. The system consists of the Federal Reserve Board, the Federal Reserve Banks, the Federal Open Market Committee and other components.

The U.S. economy accounts for about 30% of the global economy and is its leader, so a U.S. recession directly slows world economic growth. From the perspective of capital flows, the adjustment of the new economy in the United States leads to a global restructuring of capital. First, all economies adjust their levels of investment: investors reassess the new economy, change their investment philosophy and no longer invest blindly in information technology, which separates investment in the new economy from investment in the traditional economy. Second, capital is reallocated between countries. A world economic slowdown highlights the dollar's safe-haven role and may increase capital flows into the United States, underpinning the strong dollar, while some Latin American countries may suffer large-scale outflows if the slowdown triggers capital flight to safety. Ranking countries by the overall size of the impact of the U.S. recession, HSBC placed Mexico and Canada first, with Japan and the euro area at the end. The effects ripple through the world economy: in Japan, rising unemployment and less work; in Asia, reduced exports to the United States; in Latin America, increased debt and slowing GDP growth in Argentina, Brazil and Mexico; in the United States, lower private investment; in Europe, a plunging consumer confidence index.
Cramer-Rao Bounds for Nonparametric Surface Reconstruction from Range Data
Tolga Tasdizen, Ross Whitaker
University of Utah, School of Computing
Technical Report UUCS-03-006. Also submitted to the 4th Int. Conference on 3-D Digital Imaging and Modeling for review.
School of Computing, University of Utah, Salt Lake City, UT 84112 USA
April 18, 2003
Abstract
The Cramer-Rao error bound provides a fundamental limit on the expected performance of a statistical estimator. The error bound depends on the general properties of the system, but not on the specific properties of the estimator or the solution. The Cramer-Rao error bound has been applied to scalar- and vector-valued estimators and recently to parametric shape estimators. However, nonparametric, low-level surface representations are an important tool in 3D reconstruction, and are particularly useful for representing complex scenes with arbitrary shapes and topologies. This paper presents a generalization of the Cramer-Rao error bound to nonparametric shape estimators. Specifically, we derive the error bound for the full 3D reconstruction of scenes from multiple range images.
Chapter 1 Introduction
A confluence of several technologies has created new opportunities for reconstructing 3D models of complex objects and scenes. More precise and less expensive range measurement systems combined with better computing capabilities enable us to build, visualize, and analyze 3D models of the world. The difficulty of reconstructing surfaces from range images stems from inadequacies in the data. Range measurements present several significant problems, such as measurement noise, variations in measurement density, occlusions, and errors in the registration of multiple range images. Hence, the reconstructed surfaces are not perfect; they are merely estimates of the true surfaces. As the use of measured 3D models becomes more commonplace, there will be a greater need for quantifying the errors associated with these models. For instance, the use of 3D models in forensics, to model crime scenes [1], will invariably raise the question, "How much can we trust these models?"
Signal processing, and estimation theory in particular, provides a tool, the Cramer-Rao error bound (CRB), for quantifying the performance of statistical estimators. However, the CRB has traditionally been applied to parameter estimation problems, that is, problems in which the number of parameters and their relationship to the physical measurements is fixed and known. In order to apply these tools to surface reconstruction, we must first define a notion of error for surfaces and then adapt these tools to a 3D geometric setting. The analysis of reconstruction errors depends on the surface representation. For this discussion we divide the space of surface models into two classes: parametric and nonparametric.
Parametric models are those that represent shapes indirectly via a finite set of variables that control the local or global position of the surface. Parametric models range from simple primitives that have a few parameters to more complicated algebraic polynomial surfaces and piecewise smooth models, such as splines. Parametric approaches are particularly well suited to higher-level tasks such as object recognition. In the context of estimation, the number of parameters and their relationship to the shape are not usually considered as random variables. Therefore, parametric models restrict the solution to the space of shapes that are spanned by the associated parameters.
The alternative is a nonparametric model, which, for the purposes of this paper, refers to those representations in which the position of any point on the surface is controlled directly and is independent (to within a finite resolution) from the positions of other points on the surface. According to this definition, surface meshes, volumes, and level sets are examples of nonparametric shape representations. Nonparametric models typically have many more free parameters (e.g. each surface point, their number, and their configuration) and they represent a much broader class of shapes. However, nonparametric models impose other limitations such as finite resolution and, in the case of implicit models, closed boundaries. Nevertheless, the literature has shown that nonparametric models are preferred when reconstructing surfaces of complex objects or scenes with arbitrary topology and very little a-priori knowledge about shape [2, 3, 4, 5]. This paper introduces a novel formulation for computing expected errors of nonparametric surface estimates using point-wise Cramer-Rao bounds.
The rest of this report is organized as follows. Chapter 2 discusses related work, Chapter 3 summarizes the maximum likelihood nonparametric surface estimation process, and Chapter 4 derives a CRB for nonparametric surface estimators and gives results for synthetic data. Chapter 5 presents results for real data. Chapter 6 summarizes the contributions of this paper and discusses possibilities for future research directions.
Chapter 2 Related Work
The CRB states the minimum achievable error for an estimator, and therefore provides fundamental limits on the performance of any estimation process. The expression for the CRB is independent of the specific form of the estimator; it depends only on the statistics of the input measurements and the bias of the estimator. Moreover, for asymptotically efficient estimators, such as the maximum likelihood estimator (MLE), the CRB is a tight lower bound, i.e. for MLEs the CRB is achievable. Thus, the CRB quantifies the expected error of the output of an estimation process in the absence of ground truth. In the context of surface reconstruction, it provides a well-founded, systematic mechanism for computing the error of a reconstructed surface. Researchers have extensively used CRBs for problems where the estimator is relatively simple, such as scalar or vector quantities. For instance, parameter estimation to determine the location, size, and orientation of a target has been studied using CRB analysis [6].
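As a point of reference for the classical scalar case that this chapter builds on, the short Python sketch below (not part of the original report) checks numerically that for N independent Gaussian measurements of an unknown mean with known variance, the CRB on any unbiased estimator is the single-measurement variance divided by N, and that the sample mean, which is the MLE here, attains it. The noise level, sample size, and trial count are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.5      # std. dev. of a single measurement (assumed known)
n = 50           # measurements per trial
trials = 20000   # Monte-Carlo repetitions
true_mean = 2.0

# The MLE of the mean of Gaussian data is the sample mean.
samples = rng.normal(true_mean, sigma, size=(trials, n))
mle = samples.mean(axis=1)

crb = sigma**2 / n          # Cramer-Rao bound for an unbiased estimator
empirical_var = mle.var()   # observed variance of the MLE over the trials

print(f"CRB           : {crb:.6f}")
print(f"empirical var : {empirical_var:.6f}")  # approximately equal: the MLE attains the bound
```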
More recently, several authors have derived CRB expressions for parametric shape estimators. Hero et al. [7] compute the CRB for B-spline parameters of star-shapes estimated from magnetic resonance imagery. Ye et al. [8] compute the CRB for more general parametric shape estimators. Confidence intervals for shape estimators can be computed using CRBs [9], which provides an important computational advantage over using a Monte-Carlo simulation [10]. However, these results apply only to parametric shape estimators. The goal of this paper is to fill a gap in 3D surface reconstruction by deriving the CRB for nonparametric shape estimators and expressing the error in terms of a statistical model of a scanning laser rangefinder.
Chapter 3 Maximum Likelihood Surface Reconstruction
This chapter describes a particular formulation for a nonparametric MLE surface estimator. The results in this paper establish a bound that applies to any nonparametric surface estimator. However, these results provide a tight bound for MLE estimators, and the formulation for the MLE estimator introduces some basic concepts that are important for the CRB.
We begin by describing a mathematical model of a range image. A rangefinder is a device that measures distances to the closest point on an object along a particular line of sight. A range scanner produces a 2D array (image) of range measurements, through a scanning mechanism that aims the line of sight accordingly; see Figure 3.1. Therefore each element or pixel of a range image consists of two things: a line of sight and a range measurement, which together describe a 3D point. We introduce notation for a single range image and for a collection of range images taken from different scanner locations. The object or scene also requires a precise specification: we define it as a compact subset of 3D space, and the surface is the "skin" that covers this solid. The range measurements are random variables, but if we know the sensor model, we can compute the probability of a particular set of range images conditional on the scene; this is the likelihood. The MLE estimator is defined as the surface that maximizes this likelihood (3.1). Any estimator that maximizes the likelihood is asymptotically efficient and unbiased; that is, as the number of measurements goes to infinity, the estimator is correct on the average and is as good as any other unbiased estimator.
Figure 3.1: A rangefinder produces a dense range map of a scene.
Whitaker [5] shows that the maximum likelihood estimator of such a collection of range images can be computed as the set of zero crossings of a scalar function (3.2). Curless [3] uses a similar implicit formulation to reconstruct surfaces from multiple range scans. For the MLE, the formulation is given by (3.3), where the quantities involved are the distance from the i'th scanner location to the point and the derivative of the logarithm of the pdf for the range measurement error model. The model assumes that the range measurements within a single scan are sufficiently close and are suitable for interpolation. Notice that if the sensor model is Gaussian, this derivative is linear. Certain classes of range measurements, such as ladar, have been shown to have noise characteristics that can be described as Gaussian with outliers [11].
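To make this kind of estimator concrete, the following Python sketch illustrates an implicit-surface estimator of the flavor described above, but it is a schematic toy rather than the report's exact formulation: the scene is a unit circle in 2D, the scanner positions, noise level, and truncation band are invented for illustration, and residuals are truncated to a band around the measured surface (in the spirit of the interpolation assumption and of Curless-style windowing) so that only scanners that actually see a query point contribute.

```python
import numpy as np

rng = np.random.default_rng(1)
scanners = np.array([[3.0, 0.0], [0.0, 3.0], [-3.0, 0.0], [0.0, -3.0]])
sigma = 0.02   # assumed Gaussian range noise
band = 0.15    # only residuals this close to the measured surface contribute

def measured_range(c, direction):
    """Noisy range from scanner c to the unit circle along 'direction'."""
    b = 2.0 * np.dot(c, direction)
    disc = b * b - 4.0 * (np.dot(c, c) - 1.0)
    if disc < 0.0:
        return np.inf                        # ray misses the object
    t = (-b - np.sqrt(disc)) / 2.0           # nearest intersection
    return t + rng.normal(0.0, sigma)

def phi(x):
    """Sum of truncated range residuals; roughly zero on the estimated surface.
    (A real reconstruction would fix the measurements once; here they are
    re-sampled on every call purely for compactness.)"""
    total = 0.0
    for c in scanners:
        d = np.linalg.norm(x - c)            # distance from scanner to query point
        r = measured_range(c, (x - c) / d)   # measurement along that line of sight
        if np.isfinite(r) and abs(r - d) < band:
            total += r - d                   # Gaussian model -> linear residual
    return total

# phi changes sign across the true surface (radius 1):
print(phi(np.array([1.08, 0.0])), phi(np.array([0.92, 0.0])))  # positive, then negative
```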
Chapter 4 Cramer-Rao Error Bounds
The only description of the surface offered by nonparametric estimators, (3.2), is the set of points in 3D that lie on the surface. Therefore we formulate the error as a separate bound for each surface point. Errors on points are directional, but without any correspondence between the estimate and the true surface, the only important aspect of the error is how far each point is from the nearest point on the actual surface. Given a point, we can compute the CRB on this distance, where the distance is the shortest Euclidean distance between the point and S, the actual surface, and the expectation is taken over all possible measurements. The local error bound gives us a map of errors over the entire surface estimate. This is a more useful and general result than a global error bound.
Let the point be visible to some number of scanners (range images). Each of the scanners has one line of sight associated with the point; this is the vector from the scanner location to the point. For the i'th scanner, consider the range measurement taken along that line of sight. In principle, this measurement is a function only of the true distance from the scanner to the surface along that line of sight. Using the Cramer-Rao error bound formula for unbiased estimators [12], we find (4.1).
Figure 4.1: (a) The relationship between perturbations of the surface and its true distance from the scanner. (b) The 2D geometry of line of sight error.
The geometric relationship between the surface perturbation and the change in the true distance from the scanner to the surface, see Figure 4.1(a), dictates (4.3). Substituting this result into (4.1) yields (4.4). Two sources of uncertainty enter this expression: the angular error in aiming the line of sight, and the error in the distance measurement along the actual line of sight. The uncertainty in the line of sight can be used to describe several sources of error. First, the scanner measures range along a discrete grid of lines of sight, and therefore it introduces a sampling error. Moreover, given an intended line of sight on the discrete grid, there is an error in aiming the rangefinder. We will refer to this discrepancy between the intended and the actual lines of sight as pointing error. Finally, when estimating surfaces from multiple range images, error is introduced by imperfections in the registration of the different range images to each other. For most range scanners, such as ladar, the pointing error is small compared to the error in the distance measurement. Hence, it is common to assume a perfectly aimed line of sight, to simplify the formulation of the conditional pdf. In this case, the measurement depends only on the true distance from the i'th scanner to the point along the line of sight. We can assume a Gaussian distribution for the noise in the distance measurement [5], and therefore obtain (4.5). Using results for Gaussian pdfs from [12], we find (4.6). Substituting this result into (4.4), we get (4.7). This result states that if any of the scanners has a line of sight that is perpendicular to the normal vector at the point, the error bound for that point is zero.
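The geometry behind this statement can be checked numerically. Under the perfectly-aimed, Gaussian-noise assumption, perturbing a surface point along its normal by a small amount changes the true range along a line of sight by that amount divided by the cosine of the angle between the normal and the line of sight, so each visible scanner contributes Fisher information proportional to 1/(sigma^2 cos^2 theta), and the point-wise bound is the reciprocal of their sum. The Python sketch below applies this reading of (4.7) to the six-scanner sphere experiment; the scanner distance and sigma are made-up values, and the formula is our reconstruction of the described behaviour, not a quotation of the report's equation.

```python
import numpy as np

sigma = 0.1                                          # assumed range-noise std. dev.
scanners = 3.0 * np.vstack([np.eye(3), -np.eye(3)])  # six scanners on the cardinal axes

def incomplete_crb(point):
    """Point-wise bound on a unit sphere, ignoring pointing error (sketch of (4.7))."""
    normal = point / np.linalg.norm(point)   # outward normal of the unit sphere
    info = 0.0
    for c in scanners:
        to_scanner = c - point
        if np.dot(normal, to_scanner) <= 0.0:
            continue                          # point not visible from this scanner
        v = -to_scanner / np.linalg.norm(to_scanner)   # line of sight (scanner -> point)
        cos_theta = abs(np.dot(normal, v))
        info += 1.0 / (sigma**2 * cos_theta**2)        # Fisher information from this scan
    return np.inf if info == 0.0 else 1.0 / info

print(incomplete_crb(np.array([0.0, 0.0, 1.0])))               # pole: largest bound (= sigma**2)
print(incomplete_crb(np.array([0.0, 1.0, 1.0]) / np.sqrt(2)))  # seen by two scanners: smaller
print(incomplete_crb(np.array([0.9404, 0.0, 0.34])))           # near a silhouette: bound -> 0
```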
Figure 4.2 demonstrates this result with a sphere. We compute the CRB for estimating a surface from six noisy range images of a sphere with unit radius taken along the six cardinal directions. Figure 4.2 shows the CRB as a colormap on the surface; the units are the radius of the sphere. The scanners are located on the axes, along the purple regions on the estimate. As predicted by (4.7), these are the regions of highest expected error. The red regions, where the CRB is 0, form six circles on the sphere. These circles are the silhouettes of the sphere as seen from the scanner locations. Therefore, according to this incomplete CRB derivation, it should be possible to determine the location of any desired point exactly by repositioning the scanner. This counter-intuitive result is due to ignoring the angular error in the line of sight. In practice, this error is non-zero, and we cannot determine any point on an object error-free. We derive a complete conditional pdf and CRB in the rest of this section.
Figure 4.2: [Color] (a) The incomplete CRB shown as a colormap on the sphere, and (b) the color map for the CRB, ranging from 0 to 0.015. The radius of the sphere is 1 unit.
4.1 Error bound in 2D
We can derive an accurate conditional pdf for the range measurement if we take the pointing error in the line of sight into account. Let us first examine the simpler 2D case, where the surface is a curve. The vector from the i'th scanner to the point on the surface estimate is now the intended line of sight. Figure 4.1(b) illustrates the actual line of sight (random variable A) and the angle it makes with the intended line of sight. We assume that the pdf for this angle is a Gaussian with zero mean (there is no constant offset error in aiming the scanner) and a fixed standard deviation, (4.8). Given this actual line of sight, we assume a Gaussian distribution for the distance measurement (random variable B), (4.9), whose mean is the actual distance to the surface along the actual line of sight. Random variables A and B are independent; therefore, their joint probability is the product of (4.8) and (4.9). Integrating this joint probability over the domain of the pointing angle, we compute the marginal distribution (4.10). To evaluate this probability, we still need to determine the expression for the actual distance in (4.9). Without loss of generality, define the scanner location to be the origin of the coordinate frame and the intended line of sight to be the y-axis; see Figure 4.1(b). Using the equation for the tangent line (4.11), the distance can be found in closed form, via the expressions leading to (4.15) and (4.16). Equation (4.15) is in the form of a convolution of two Gaussians. Consequently, probability theory states that the result is the Gaussian pdf (4.17). For the purposes of differentiating this pdf with respect to the surface perturbation, we ignore the dependence of the combined variance on it. Hence, the derivation of (4.3) also applies here. Using results for Gaussian pdfs from [12], we find (4.18). Finally, we use these results in (4.1) to obtain the complete error bound, (4.19).
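The qualitative content of this section, that marginalizing a Gaussian pointing error against Gaussian range noise produces a range distribution that is again nearly Gaussian but wider whenever the surface is tilted relative to the intended line of sight, can be checked with a small Monte-Carlo experiment. The sketch below is illustrative only: the distance, tilt angle, and noise levels are invented, and the first-order variance formula in the last line is our own small-angle approximation, not an equation from the report.

```python
import numpy as np

rng = np.random.default_rng(2)

D = 5.0                        # true distance along the intended line of sight
alpha = np.radians(40)         # tilt of the locally flat surface w.r.t. the x-axis
sigma_theta = np.radians(0.3)  # pointing-error std. dev. (assumed)
sigma_r = 0.01                 # range-noise std. dev. (assumed)
n = 200_000

theta = rng.normal(0.0, sigma_theta, n)            # actual minus intended direction
true_dist = D / (np.cos(theta) - np.sin(theta) * np.tan(alpha))
ranges = true_dist + rng.normal(0.0, sigma_r, n)   # measured ranges

print("mean range        :", ranges.mean())        # close to D
print("std of ranges     :", ranges.std())         # larger than sigma_r
print("range noise alone :", sigma_r)
# First-order prediction: std ~ sqrt(sigma_r**2 + (D*tan(alpha)*sigma_theta)**2)
print("first-order model :", np.hypot(sigma_r, D * np.tan(alpha) * sigma_theta))
```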
Figure 4.3: (a) The 3D geometry of line of sight error, (b) the u-v-z coordinate frame.
The two angles of deviation for the line of sight can be defined as rotations of the intended line of sight around any pair of orthogonal vectors in the plane perpendicular to the z-axis. Without loss of generality, we choose the angles of deviation to be rotations around the u and v axes, respectively, as shown in Figure 4.3(b). These two random angles are independent and identically distributed, with the same pdf as in the 2D case. The intended line of sight is the z-axis of the u-v-z coordinate frame. The actual lines of sight can be expressed in the u-v-z coordinate frame as (4.21), where we have used the small-angle approximations. We can also express the surface normal in the same coordinate frame; its components follow by definition. We can use the equation for the tangent plane, which is the same as equation (4.11) for the tangent in 2D, to find the distance.
Chapter 5 Results and Discussion
Figure 5.1 shows the results of the experiment from Figure 4.2 with the complete CRB formulation (4.19), which includes angular error. As expected, the CRB does not go to zero along occlusion boundaries. This is in contrast to the incomplete CRB formulation, which predicts a zero error bound for the silhouette of a sphere.
As an alternative to the CRB results in this paper, consider the method of computing error measures on simple scalar estimators, which averages all measurements. For such an estimator the error is the single-measurement variance divided by the number of measurements. A naive application of this scheme to surface reconstruction would produce an error measure that depends only on the number of scanners that are visible from any particular surface point. Unlike the CRB derived in this paper, this trivial result does not take the sensor-model geometry into account and is not correct. In Figure 5.1, the CRB is highest (purple-blue) in regions seen by a single scanner, and lowest (yellow) in regions seen by three scanners. However, the CRB varies significantly within these regions, which cannot be predicted by the trivial approach that discounts the sensor-model geometry.
Next, we compute an actual estimate using a level-set surface representation [5]. Figure 5.1(b) shows a close-up view of the CRB colormapped onto this actual estimate. If we consider the roughness of the estimated surface as a subjective indicator of error, we observe that the actual estimation errors are approximately proportional to the error predicted by the CRB. In other words, the estimate is indeed more noisy in blue-purple regions of the CRB compared to the yellow regions.
Finally, we demonstrate the importance of error bounds in a real surface reconstruction problem.
Figure 5.1: [Color] (a) The CRB colormapped on the sphere, (b) a close-up view of the CRB shown on an instance of the estimator, and (c) the color map for the CRB, ranging from 0 to 0.015. The radius of the sphere is 1 unit.
Figure 5.2: [Color] MLE reconstruction from (a) 4 range images, (b) 12 range images, and (c) the CRB colorbar. The units of the CRB are millimeters (mm). The diameter of the barrel in the scene is approximately 500 mm. The black regions have infinite CRB; these are the points not seen by any scanner.
Figure 5.2 illustrates the CRBs computed for reconstructions of an office scene.
Twelve range images were taken and registered with the methods described in [14]. Then, using a level-set representation, we reconstruct a surface model [5]. In the first reconstruction, we use only 4 out of the 12 range images. The occlusion shadows of the barrel and the chair are observed as the black regions on the reconstructed surface in Figure 5.2(a). Very high CRB values (purple) are also observed at various locations including the top of the desk and on the bookshelves, due to the occlusions of objects placed on them. Unlike the occlusion shadows of the chair and the barrel, these artifacts are not immediately observable from the reconstructed surface. Hence, the CRB image brings out useful information that can be used to choose further scanning locations. In the second reconstruction, we use all 12 range images. Overall, the average CRB is lower as expected and there are far fewer occluded regions. However, notice that certain parts of the desk and bookshelves still have infinite CRB values (black), indicating that these parts are occluded in all 12 range images. This result can be used to add another range image from a scanner location that can see these parts. Alternatively, it can inform users (or some subsequent processing) not to trust the surface estimate in these locations.
Chapter 6 Conclusion
This paper shows the derivation of a systematic error measure for nonparametric surface reconstruction that uses the Cramer-Rao bound. The CRB is a tight lower error bound for unbiased estimators such as the maximum likelihood. However, there are some limitations in this formulation. We have assumed no knowledge of surface shape other than that given by the measurements. In practice, however, shape reconstruction often includes some a-priori knowledge about surface shape, such as smoothness. The inclusion of such priors corresponds to a maximum a-posteriori estimation process. The current formulation still gives meaningful results: it tells us to what extent a particular estimate is warranted by the data. That is, it gives us some idea of the relative weighting of the data and the prior at each point on the surface. Future work will include a study of how to incorporate priors and estimator bias into these error bounds.
Acknowledgements
This work is supported by the Office of Naval Research under grant #N00014-01-10033 and the National Science Foundation under grant #CCR0092065.
Bibliography
[1] L. Nyland, A. Lastra, D. McAllister, V. Popescu, and C. McCue, "Capturing, Processing and Rendering Real-World Scenes", Videometrics and Optical Methods for 3D Shape Measurement, Electronic Imaging 2001.
[2] G. Turk and M. Levoy, "Zippered polygon meshes from range images", Proc. SIGGRAPH '94, pp. 311-318, 1994.
[3] B. Curless and M. Levoy, "A volumetric method for building complex models from range images", Proc. SIGGRAPH '96, pp. 303-312, 1996.
[4] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle, "Surface Reconstruction from Unorganized Points", Computer Graphics, 26(2), pp. 71-78, 1992.
[5] R. T. Whitaker, "A Level-Set Approach to 3D Reconstruction From Range Data", IJCV, 29(3), pp. 203-231, 1998.
[6] D. J. Rossi and A. S. Willsky, "Reconstruction from projections based on detection and estimation of objects - Part I: Performance Analysis", IEEE Trans. Acoustic Speech and Signal Processing, pp. 886-897, 1984.
[7] A. O. Hero, R. Piramuthu, J. A. Fessler, and S. R. Titus, "Minimax emission computed tomography using high resolution anatomical side information and B-spline models", IEEE Trans. Information Theory, pp. 920-938, 1999.
[8] J. C. Ye, Y. Bresler and P. Moulin, "Cramer-Rao bounds for 2D target shape estimation in nonlinear inverse scattering problems with applications to passive radar", IEEE Trans. Antennas and Propagation, pp. 771-783, 2001.
[9] J. C. Ye, Y. Bresler and P. Moulin, "Asymptotic Global Confidence Regions in Parametric Shape Estimation Problems", IEEE Trans. Information Theory, 46(5), pp. 1881-1895, 2000.
[10] K. M. Hanson, G. S. Cunningham, and R. J. McKee, "Uncertainty assessment for reconstructions based on deformable geometry", Int. J. Imaging Systems & Technology, pp. 506-512, 1997.
[11] J. Gregor and R. Whitaker, "Reconstructing Indoor Scene Models From Sets of Noisy Range Images", Graphical Models, 63(5), pp. 304-332, 2002.
[12] N. E. Nahi, "Estimation Theory and Applications", John Wiley & Sons Inc., 1969.
[13] S. Osher and J. Sethian, "Fronts Propagating with Curvature-Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations", J. Comp. Physics, 79, pp. 12-49, 1988.
[14] R. Whitaker and J. Gregor, "A Maximum Likelihood Surface Estimator For Dense Range Data", IEEE Trans. Pattern Analysis and Machine Intelligence, 24(10), 2002.
arXiv:astro-ph/0701490v1 17 Jan 2007
TP-DUT/2007-1
Reconstruction of a Deceleration Parameter from the Latest Type Ia Supernovae Gold Dataset
Lixin Xu*, Chengwu Zhang, Baorong Chang, and Hongya Liu
School of Physics & Optoelectronic Technology, Dalian University of Technology, Dalian, 116024, P. R. China
In this paper, a parameterized deceleration parameter q(z) = 1/2 - a/(1+z)^b is reconstructed from the latest type Ia supernovae gold dataset. It is found that the transition redshift from decelerated expansion to accelerated expansion is z_T = 0.35 (+0.14, -0.07) at the 1-sigma confidence level for this parameterized deceleration parameter. The best-fit values of the parameters with 1-sigma errors are a = 1.56 (+0.99, -0.55) and b = 3.82 (+3.70, -2.27).
PACS numbers: 98.80.-k, 98.80.Es. Keywords: Cosmology; dark energy.
I. INTRODUCTION
Recent observations of high-redshift Type Ia Supernovae indicate that our universe is undergoing accelerated expansion, which is one of the biggest challenges in present cosmological research [1, 2, 3, 4, 5, 6]. Meanwhile, this suggestion is strongly confirmed by the observations from WMAP [7, 8, 9, 10] and the Large Scale Structure survey [11]. To understand the late-time accelerated expansion of the universe, a large class of models has been proposed by assuming the existence of an extra energy component with negative pressure that dominates at late times and pushes the universe into accelerated expansion. In principle, a natural candidate for dark energy could be a small cosmological constant Lambda, which has the constant equation of state (EOS) w_Lambda = -1. However, there exist serious theoretical problems: the fine tuning and coincidence problems. To overcome the coincidence problem, dynamic dark energy models have been proposed, such as quintessence [12], phantom [13], quintom [14], k-essence [15], Chaplygin gas [16], holographic dark energy [17], etc., as alternative candidates. Another approach to studying dark energy is an almost model-independent one, namely the parameterized equation of state of dark energy, which is implemented by specifying the form of the EOS of dark energy directly, such as w(z) = w_0 + w_1 z [18] or w(z) = w_0 + w_1 z/(1+z) [19, 20]. In this way, constraints on the present EOS and its variation dw/dz can be obtained, and some dark energy models are ruled out [23]. Also, the data favor a quintom-like dark energy, i.e. one whose EOS crosses the cosmological constant boundary. In all, it is an effective method to rule out dark energy models.
As is known, the universe is now dominated by dark energy and is undergoing accelerated expansion, whereas in the past the universe was dominated by dark matter and underwent a decelerated epoch. Inspired by this, a parameterized deceleration parameter can be introduced in an almost model-independent way by specifying a concrete form of the deceleration parameter that is positive in the past and becomes negative recently [22, 24, 25]. Moreover, it is interesting and important to know the transition redshift z_T from decelerated expansion to accelerated expansion. This is the main point to be explored in this paper. The structure of this paper is as follows. In section II, a parameterized deceleration parameter is constrained by the latest 182 SNe Ia data points compiled by Riess [23]. Section III is the conclusion.
II. RECONSTRUCTION OF THE DECELERATION PARAMETER
We consider a flat FRW cosmological model containing dark matter and dark energy with the metric
ds^2 = -dt^2 + a^2(t) dx^2.   (1)
The Friedmann equation of the flat universe is written as
H^2 = (8 pi G / 3) (rho_m + rho_de),   (2)
and the deceleration parameter
q = - (a'' a) / (a')^2 = - a'' / (a H^2)   (4)
gives
H' = -(1+q) H^2,   (5)
where a prime on a denotes a derivative with respect to cosmic time and H' is the time derivative of the Hubble parameter. By using the relation a_0/a = 1+z, the relation between H and q, i.e. Eq. (5), can be written in its integral form
H(z) = H_0 exp{ Int_0^z [1 + q(u)] d ln(1+u) },   (6)
where the subscript "0" denotes the current values of the variables.
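As a numerical cross-check of Eq. (6), the following Python sketch (not part of the paper) integrates 1 + q(u) against d ln(1+u) for the parameterization q(z) = 1/2 - a/(1+z)^b introduced below as Eq. (7), and compares the result with the closed form given below as Eq. (8). The values of a and b are the best-fit numbers quoted in the abstract, and H0 is normalized to 1.

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.56, 3.82   # best-fit values quoted in the abstract

def q(z):
    return 0.5 - a / (1.0 + z)**b

def H_numeric(z, H0=1.0):
    # Eq. (6): H(z) = H0 * exp( int_0^z [1 + q(u)] d ln(1+u) )
    integrand = lambda u: (1.0 + q(u)) / (1.0 + u)
    val, _ = quad(integrand, 0.0, z)
    return H0 * np.exp(val)

def H_closed(z, H0=1.0):
    # Eq. (8): the same integral carried out analytically
    return H0 * (1.0 + z)**1.5 * np.exp(a * ((1.0 + z)**(-b) - 1.0) / b)

for z in (0.35, 1.0, 1.8):
    print(z, H_numeric(z), H_closed(z))   # the two agree to numerical precision

# The transition redshift solves q(z_T) = 0, i.e. (1 + z_T)^b = 2a:
print("z_T =", (2.0 * a)**(1.0 / b) - 1.0)
```

For the quoted best-fit values this gives z_T of roughly 0.35, consistent with the abstract.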
If the function q(z) is given, the evolution of the Hubble parameter is obtained. In this paper, we consider a parameterized deceleration parameter [22],
q(z) = 1/2 - a/(1+z)^b,   (7)
where a and b are constants which can be determined from the current observational constraints. From Eq. (7), it can be seen that in the limit z -> infinity the deceleration parameter q -> 1/2, which is its value in the dark-matter dominated epoch. The current value of the deceleration parameter is determined by q_0 = 1/2 - a. With the form (7) of the deceleration parameter, the Hubble parameter is written as
H(z) = H_0 (1+z)^{3/2} exp{ a [ (1+z)^{-b} - 1 ] / b }.   (8)
From this explicit expression for the Hubble parameter, it can be seen that this mechanism can also be regarded as a parametrization of the Hubble parameter. Now we can constrain the model by the supernovae observations.
We will use the latest released supernovae datasets to constrain the parameterized deceleration parameter, Eq. (7). The Gold dataset contains 182 SNe Ia [23], obtained by discarding all SNe Ia with z < 0.0233 and all SNe Ia with quality = 'Silver'. These 182 data points are used to constrain our model. The constraint from SNe Ia can be obtained by fitting the distance modulus mu(z),
mu_th(z) = 5 log_10( D_L(z) ) + M-bar,   (9)
where D_L(z) is the Hubble-free luminosity distance H_0 d_L(z), with
d_L(z) = (1+z) Int_0^z dz' / H(z'),   (10)
M-bar = M + 5 log_10( c H_0^{-1} / Mpc ) + 25 = M - 5 log_10 h + 42.38,   (11)
where M is the absolute magnitude of the object (SNe Ia). With the SNe Ia datasets, the best-fit values of the parameters in dark energy models are determined by minimizing
chi^2_SNeIa(p_s) = Sum_{i=1}^{N} [ mu_obs(z_i) - mu_th(z_i) ]^2 / sigma_i^2,   (12)
where sigma_i is the uncertainty of the i'th data point. (Table: the best-fit chi^2 with 180 degrees of freedom, and the parameters a, b, and z_T.) To propagate the fit errors to derived quantities, the standard first-order error formula
sigma_y^2 = Sum_i ( dy/dx_i )^2 |_{x = x-bar} Cov(x_i, x_i) + 2 Sum_i Sum_{j>i} ( dy/dx_i )( dy/dx_j ) |_{x = x-bar} Cov(x_i, x_j)   (13)
is used extensively (see Ref. [26] for instance). For the Ansatz Eq. (7), we obtain in this way the errors of the parameterized deceleration parameter. The evolution of the deceleration parameter q(z) with 1-sigma errors is plotted in Fig. 2.
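A schematic sketch of this fitting pipeline, Eqs. (8)-(12), is given below; it is illustrative only. The redshifts, distance moduli, and uncertainties are synthetic placeholders rather than the Gold sample (in a real fit they would be read from the Riess et al. compilation), and the constant mu0 that absorbs H0 and the absolute magnitude is treated here as a third free parameter.

```python
import numpy as np
from scipy.integrate import quad

def E(z, a, b):
    """Dimensionless Hubble rate H(z)/H0 from Eq. (8)."""
    return (1.0 + z)**1.5 * np.exp(a * ((1.0 + z)**(-b) - 1.0) / b)

def mu_th(z, a, b, mu0=43.0):
    """Distance modulus, Eqs. (9)-(11); mu0 absorbs H0 and the absolute magnitude."""
    integral, _ = quad(lambda zp: 1.0 / E(zp, a, b), 0.0, z)
    d_L = (1.0 + z) * integral              # Hubble-free luminosity distance, Eq. (10)
    return 5.0 * np.log10(d_L) + mu0

def chi2(params, z_obs, mu_obs, sigma_mu):
    """Eq. (12): chi-square over the supernova sample."""
    a, b, mu0 = params
    model = np.array([mu_th(z, a, b, mu0) for z in z_obs])
    return np.sum(((mu_obs - model) / sigma_mu) ** 2)

# Placeholder "data": three synthetic points generated from the best-fit model.
z_obs = np.array([0.1, 0.5, 1.0])
mu_obs = np.array([mu_th(z, 1.56, 3.82) for z in z_obs])
sigma_mu = np.array([0.2, 0.2, 0.25])

print(chi2((1.56, 3.82, 43.0), z_obs, mu_obs, sigma_mu))  # zero for the generating model
print(chi2((1.0, 2.0, 43.0), z_obs, mu_obs, sigma_mu))    # larger for other parameter values
```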
III. CONCLUSION
In this paper, in an almost model-independent way, we have used a parameterized deceleration parameter to obtain the transition time, or redshift z_T, from decelerated expansion to accelerated expansion. It is found that the best-fit transition redshift is about z_T = 0.35 (+0.14, -0.07) with 1-sigma error for this parameterization, which is compatible with the result of Ref. [23]. Although the transition redshift can also be derived from a given equation of state of dark energy or from a concrete dark energy model, such derivations are strongly model dependent. We therefore advocate this almost model-independent way to test and rule out some of the existing dark energy models.
FIG. 2: The evolution of the deceleration parameter with respect to the redshift z. The central solid line is plotted with the best-fit values, and the shaded region denotes the 1-sigma region.
Acknowledgments
L. Xu is supported by DUT (3005-893321) and NSF (10647110). H. Liu is supported by NSF (10573003) and NBRP (2003CB716300) of P. R. China.
[1] A. G. Riess, et al., Observational evidence from supernovae for an accelerating universe and a cosmological constant, 1998 Astron. J. 116 1009, astro-ph/9805201.
[2] S. Perlmutter, et al., Measurements of Omega and Lambda from 42 high-redshift supernovae, 1999 Astrophys. J. 517 565, astro-ph/9812133.
[3] J. L. Tonry, et al., Cosmological Results from High-z Supernovae, 2003 Astrophys. J. 594 1, astro-ph/0305008.
[4] R. A. Knop, et al., New Constraints on Omega_M, Omega_Lambda, and w from an Independent Set of Eleven High-Redshift Supernovae Observed with HST, astro-ph/0309368.
[5] B. J. Barris, et al., 23 High Redshift Supernovae from the IfA Deep Survey: Doubling the SN Sample at z > 0.7, 2004 Astrophys. J. 602 571, astro-ph/0310843.
[6] A. G. Riess, et al., Type Ia Supernova Discoveries at z > 1 From the Hubble Space Telescope: Evidence for Past Deceleration and Constraints on Dark Energy Evolution, astro-ph/0402512.
[7] P. de Bernardis, et al., A Flat Universe from High-Resolution Maps of the Cosmic Microwave Background Radiation, 2000 Nature 404 955, astro-ph/0004404.
[8] S. Hanany, et al., MAXIMA-1: A Measurement of the Cosmic Microwave Background Anisotropy on angular scales of 10 arcminutes to 5 degrees, 2000 Astrophys. J. 545 L5, astro-ph/0005123.
[9] D. N. Spergel et al., First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Determination of Cosmological Parameters, 2003 Astrophys. J. Supp. 148 175, astro-ph/0302209.
[10] D. N. Spergel et al., 2006, astro-ph/0603449.
[11] M. Tegmark et al., Phys. Rev. D 69 (2004) 103501, astro-ph/0310723; M. Tegmark et al., Astrophys. J. 606 (2004) 702, astro-ph/0310725.
[12] I. Zlatev, L. Wang, and P. J. Steinhardt, Quintessence, Cosmic Coincidence, and the Cosmological Constant, 1999 Phys. Rev. Lett. 82 896, astro-ph/9807002; P. J. Steinhardt, L. Wang, I. Zlatev, Cosmological Tracking Solutions, 1999 Phys. Rev. D 59 123504, astro-ph/9812313; M. S. Turner, Making Sense Of The New Cosmology, 2002 Int. J. Mod. Phys. A 17S1 180, astro-ph/0202008; V. Sahni, The Cosmological Constant Problem and Quintessence, 2002 Class. Quant. Grav. 19 3435, astro-ph/0202076.
[13] R. R. Caldwell, M. Kamionkowski, N. N. Weinberg, Phantom Energy: Dark Energy with w < -1 Causes a Cosmic Doomsday, 2003 Phys. Rev. Lett. 91 071301, astro-ph/0302506; R. R. Caldwell, A Phantom Menace? Cosmological consequences of a dark energy component with super-negative equation of state, 2002 Phys. Lett. B 545 23, astro-ph/9908168; P. Singh, M. Sami, N. Dadhich, Cosmological dynamics of a phantom field, 2003 Phys. Rev. D6*******, hep-th/0305110; J. G. Hao, X. Z. Li, Attractor Solution of Phantom Field, 2003 Phys. Rev. D6*******, gr-qc/0302100.
[14] Feng B et al., 2005 Phys. Lett. B 607 (1-2) 35.
[15] C. Armendariz-Picon, T. Damour, V. Mukhanov, k-Inflation, 1999 Physics Letters B 458 209; M. Malquarti, E. J. Copeland, A. R. Liddle, M. Trodden, A new view of k-essence, 2003 Phys. Rev. D6*******; T. Chiba, Tracking k-essence, 2002 Phys. Rev. D6*******, astro-ph/0206298.
[16] A. Y. Kamenshchik, U. Moschella, and V. Pasquier, Phys. Lett. B 511 (2001) 265, gr-qc/0103004; N. Bilic, G. B. Tupper, and R. D. Viollier, Phys. Lett. B 535 (2002) 17, astro-ph/0111325; M. C. Bento, O. Bertolami, and A. A. Sen, Phys. Rev. D 66 (2002) 043507, gr-qc/0202064.
[17] M. Li, Phys. Lett. B 603 (2004) 1, hep-th/0403127; K. Ke and M. Li, Phys. Lett. B 606 (2005) 173, hep-th/0407056; Y. Gong, Phys. Rev. D 70 (2004) 064029, hep-th/0404030; Y. S. Myung, Phys. Lett. B 610 (2005) 18, hep-th/0412224; Q. G. Huang and M. Li, JCAP 0408 (2004) 013, astro-ph/0404229; Q. G. Huang, M. Li, JCAP 0503 (2005) 001, hep-th/0410095; Q. G. Huang and Y. Gong, JCAP 0408 (2004) 006, astro-ph/0403590; Y. Gong, B. Wang and Y. Z. Zhang, Phys. Rev. D 72 (2005) 043510, hep-th/0412218; Z. Chang, F.-Q. Wu, and X. Zhang, astro-ph/0509531.
[18] A. R. Cooray and D. Huterer, Astrophys. J. 513 L95 (1999).
[19] M. Chevallier, D. Polarski, Int. J. Mod. Phys. D 10 213 (2001), gr-qc/0009008.
[20] E. V. Linder, Phys. Rev. Lett. 90 091301 (2003).
[21] B. F. Gerke and G. Efstathiou, Mon. Not. Roy. Astron. Soc. 335 33 (2002).
[22] L. Xu, H. Liu and Y. Ping, Reconstruction of Five-dimensional Bounce Cosmological Models From Deceleration Factor, Int. Jour. Theor. Phys. 45, 869-876 (2006), astro-ph/0601471.
[23] A. G. Riess et al., astro-ph/0611572.
[24] N. Banerjee, S. Das, Acceleration of the universe with a simple trigonometric potential, astro-ph/0505121.
[25] Y. Gong, A. Wang, Reconstruction of the deceleration parameter and the equation of state of dark energy, astro-ph/0612196.
[26] U. Alam, V. Sahni, T. D. Saini and A. A. Starobinsky, astro-ph/0406672; H. Wei, N. N. Tang, S. N. Zhang, Reconstruction of Hessence Dark Energy and the Latest Type Ia Supernovae Gold Dataset, astro-ph/0612746.