Face recognition paper appendix in English and Chinese (original text and translation). The original text is from Thomas David Heseltine BSc. Hons., Department of Computer Science, The University of York, for the qualification of PhD, September 2005: "Face Recognition: Two-Dimensional and Three-Dimensional Techniques".

4 Two-Dimensional Face Recognition

4.1 Feature Localization

Before discussing the methods of comparing two facial images we now take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review (section 3.1.1).

The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template-based method. A training set of manually pre-aligned face images is taken, and each image is cropped to an area around both eyes. The average image is calculated and used as a template.

Figure 4-1 - The average eyes. Used as a template for eye detection.

Both eyes are included in a single template, rather than individually searching for each eye in turn, because the characteristic symmetry of the eyes on either side of the nose provides a useful feature that helps distinguish the eyes from false positives that may be picked up in the background. This method is, however, highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason is that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye sockets, but the area of skin below the eyes helps to distinguish the eyes from eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference is taken with the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using a smaller template of the individual left and right eyes then refines each eye position.

This basic template-based method of eye localisation, although providing fairly precise localisations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localisation is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localisations we compute the average distance from the eye template (Figure 4-2, top).
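Before continuing with the weighting scheme, the following is a minimal sketch, not taken from the thesis, of the basic sliding-window template search described above; the use of numpy, the greyscale array inputs and the function name are illustrative assumptions only.

import numpy as np

def locate_eyes(image, template):
    # Slide the average-eyes template over a greyscale image and return the
    # top-left corner of the window with the lowest sum of absolute
    # differences (SAD), i.e. the best match.  Both inputs are 2-D numpy
    # arrays, with the image assumed larger than the template.
    img = image.astype(float)
    tmpl = template.astype(float)
    th, tw = tmpl.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(img.shape[0] - th + 1):
        for x in range(img.shape[1] - tw + 1):
            sad = np.abs(img[y:y + th, x:x + tw] - tmpl).sum()
            if sad < best_score:
                best_score, best_pos = sad, (y, x)
    return best_pos, best_score

Running the same routine again with smaller left-eye and right-eye templates inside the returned region would refine the individual eye positions, as the text describes.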
Note that this average difference image (Figure 4-2, top) is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eye, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 - Distance to the eye template for successful detections (top), indicating variance due to noise, and for failed detections (bottom), showing credible variance due to mis-detected features.

In the lower image (Figure 4-2, bottom), we have taken the set of failed localisations (images of the forehead, nose, cheeks, background etc. falsely detected by the localisation routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasise the difference of the pupil regions for these failed matches and minimise the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.

4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio [29]), involving the direct comparison of pixel intensity values taken from facial images. We use the term 'Direct Correlation' to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not infer that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson's correlation, and it can be considered a scale and translation sensitive form of image correlation), as this is consistent with the contrast made between image space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centres are located at two specified pixel coordinates and the image cropped to remove any background information. These images are stored as greyscale bitmaps of 65 by 82 pixels and, prior to recognition, converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each such vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and, again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q and the gallery image g), we get an indication of similarity.
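As a minimal illustration of this direct correlation measure (not from the thesis), the following sketch flattens two aligned 65 by 82 greyscale images into 5330-element vectors and computes the Euclidean distance between them; numpy and the function names are assumptions for illustration.

import numpy as np

def face_vector(image):
    # Flatten a 65x82 greyscale face image (2-D array) into a
    # 5330-element intensity vector, as described in the text.
    return image.astype(float).ravel()

def compare_faces(query_image, gallery_image):
    # Direct correlation score: Euclidean distance between the two
    # image vectors.  Lower scores indicate more similar faces.
    q = face_vector(query_image)
    g = face_vector(gallery_image)
    return float(np.linalg.norm(q - g))

Here numpy.linalg.norm computes the L2 norm, matching the Euclidean metric chosen above.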
A threshold is then applied to make the final verification decision:

d = ||q - g||;  if d <= threshold then accept, otherwise reject.    (Equ. 4-1)

4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (i.e. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentation in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use the Fisher Linear Discriminant to analyse individual subspace components in section 7 and the identification test for the final evaluations described in section 8.

The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or a rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps a swipe card, proximity chip or PIN number. This number is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image) and compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision.

The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real-world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition, which may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of 'difficult' images than would be expected in the perceived operating conditions, and hence higher error rates in the results produced. Below we provide the algorithm for executing the verification test.
The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm: CompareFaces(FaceA, FaceB). This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score, the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores.

Every image is compared with every other image, no image is compared with itself and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a similarity score, the ground truth is used to determine if the images are of the same person or of different people. In practical tests this information is often encapsulated as part of the image file name (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people, and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or a false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from the same people that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from different people that were classified as acceptances.

For IndexA = 0 to Length(TestSet)
    For IndexB = IndexA + 1 to Length(TestSet)
        Score = CompareFaces(TestSet[IndexA], TestSet[IndexB])
        If IndexA and IndexB are the same person
            Append Score to AcceptScoresList
        Else
            Append Score to RejectScoresList

For Threshold = Minimum Score to Maximum Score:
    FalseAcceptCount = 0
    FalseRejectCount = 0
    For each Score in RejectScoresList
        If Score <= Threshold
            Increase FalseAcceptCount
    For each Score in AcceptScoresList
        If Score > Threshold
            Increase FalseRejectCount
    FalseAcceptRate = FalseAcceptCount / Length(RejectScoresList)
    FalseRejectRate = FalseRejectCount / Length(AcceptScoresList)
    Add plot to error curve at (FalseRejectRate, FalseAcceptRate)

These two error rates express the inadequacies of the system when operating at a specific threshold value. Ideally, both these figures should be zero, but in reality reducing either the FAR or the FRR (by altering the threshold value) will inevitably result in increasing the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. The application of each threshold value produces an additional (FAR, FRR) pair, which when plotted on a graph produces the error rate curve shown below.

Figure 4-5 - Example error rate curve produced by the verification test.

The equal error rate (EER) can be seen as the point at which the FAR is equal to the FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system and allows for easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real-world application. It is unlikely that any real system would use a threshold value such that the percentage of false acceptances were equal to the percentage of false rejections.
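A minimal, self-contained sketch of the verification test above, assuming a compare function such as compare_faces from the earlier sketch and a list of (image, person_id) pairs containing both same-person and different-person pairs; none of these names come from the thesis.

def verification_test(test_set, compare):
    # test_set: list of (image, person_id) pairs.
    # compare: function returning a dissimilarity score for two images.
    # Returns a list of (false_reject_rate, false_accept_rate) points,
    # one per candidate threshold, following the pseudocode above.
    accept_scores, reject_scores = [], []
    for a in range(len(test_set)):
        for b in range(a + 1, len(test_set)):
            score = compare(test_set[a][0], test_set[b][0])
            if test_set[a][1] == test_set[b][1]:
                accept_scores.append(score)   # same person
            else:
                reject_scores.append(score)   # different people
    curve = []
    for threshold in sorted(set(accept_scores + reject_scores)):
        far = sum(s <= threshold for s in reject_scores) / len(reject_scores)
        frr = sum(s > threshold for s in accept_scores) / len(accept_scores)
        curve.append((frr, far))
    return curve

Sweeping the threshold over the observed scores reproduces the (FRR, FAR) pairs that form the error rate curve of Figure 4-5.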
Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections: unwilling to tolerate intruders at the cost of inconvenient access denials. Surveillance systems, on the other hand, would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.

There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves also used in such experiments. Both graphs are simply two visualisations of the same results, in that the ROC format uses the true acceptance rate (TAR), where TAR = 1.0 - FRR, in place of the FRR, effectively flipping the graph vertically. Another visualisation of the verification test results is to display both the FRR and FAR as functions of the threshold value. This presentation format provides a reference to determine the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.

Figure 4-6 - Example error rate curves as functions of the score threshold.

The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations, hence a drop of 1% EER represents an additional 2588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to 0.28%.

4.2.2 Results

As a simple experiment to test the direct correlation method, we apply the technique described above to a test set of 720 images of 60 different people, taken from the AR Face Database [39]. Every image is compared with every other image in the test set to produce a likeness score, providing 258,840 verification operations from which to calculate false acceptance rates and false rejection rates. The error curve produced is shown in Figure 4-7.

Figure 4-7 - Error rate curve produced by the direct correlation method using no image preprocessing.

We see that an EER of 25.1% is produced, meaning that at the EER threshold approximately one quarter of all verification operations carried out resulted in an incorrect classification. There are a number of well-known reasons for this poor level of accuracy. Tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically. Images are moved far apart in image space due to these capture conditions, despite being of the same person's face. The distance between images of different people becomes smaller than the area of face space covered by images of the same person, and hence false acceptances and false rejections occur frequently. Other disadvantages include the large amount of storage necessary for holding many face images and the intensive processing required for each comparison, making this method unsuitable for applications applied to a large database.
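As a small illustrative follow-on (not from the thesis), the EER can be read off the curve produced by the earlier verification_test sketch as the point where the two error rates are closest to equal:

def equal_error_rate(curve):
    # curve: list of (frr, far) pairs produced by verification_test.
    # Returns the approximate EER as the midpoint of the pair whose
    # FRR and FAR are closest to one another (a simple approximation;
    # interpolating between neighbouring thresholds would be more precise).
    frr, far = min(curve, key=lambda p: abs(p[0] - p[1]))
    return (frr + far) / 2.0

This mirrors reading the intersection point of the FAR and FRR curves in Figure 4-6.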
In section 4.3 we explore the eigenface method, which attempts to address some of these issues.
The Beginning of Model Checking: A Personal Perspective

E. Allen Emerson
Department of Computer Sciences and Computer Engineering Research Center, The University of Texas at Austin, Austin TX 78712, USA
(This work was supported in part by National Science Foundation grants CCR-009-8141 & CCR-020-5483 and funding from Fujitsu Labs of America. email: emerson@ URL: /~emerson/)

Abstract. Model checking provides an automated method for verifying concurrent systems. Correctness specifications are given in temporal logic. The method hinges on an efficient and flexible graph-theoretic reachability algorithm. At the time of its introduction in the early 1980's, the prevailing paradigm for verification was a manual one of proof-theoretic reasoning using formal axioms and inference rules oriented towards sequential programs. The need to encompass concurrent programs, the desire to avoid the difficulties with manual deductive proofs, and the small model theorem for temporal logic motivated the development of model checking.

Keywords: model checking, model-theoretic, synthesis, history, origins

1 Introduction

It has long been known that computer software programs, computer hardware designs, and computer systems in general exhibit errors. Working programmers may devote more than half of their time to testing and debugging in order to increase reliability. A great deal of research effort has been and is devoted to developing improved testing methods. Testing successfully identifies many significant errors. Yet serious errors still afflict many computer systems, including systems that are safety critical, mission critical, or economically vital. The US National Institute of Standards and Technology has estimated that programming errors cost the US economy $60B annually [Ni02].

Given the incomplete coverage of testing, alternative approaches have been sought. The most promising approach depends on the fact that programs, and more generally computer systems, may be viewed as mathematical objects with behavior that is in principle well-determined. This makes it possible to specify using mathematical logic what constitutes the intended (correct) behavior. Then one can try to give a formal proof or otherwise establish that the program meets its specification. This line of study has been active for about four decades now. It is often referred to as formal methods.

The verification problem is: Given program M and specification h, determine whether or not the behavior of M meets the specification h. Formulated in terms of Turing Machines, the verification problem was considered by Turing [Tu36].
Given a Turing Machine M and the specification h that it should eventually halt (say on blank input tape), one has the halting problem, which is algorithmically unsolvable. In a later paper [Tu49] Turing argued for the need to give a (manual) proof of termination using ordinals, thereby presaging work by Floyd [Fl67] and others.

The model checking problem is an instance of the verification problem. Model checking provides an automated method for verifying concurrent (nominally) finite state systems that uses an efficient and flexible graph search to determine whether or not the ongoing behavior described by a temporal property holds of the system's state graph. The method is algorithmic and often efficient because the system is finite state, despite reasoning about infinite behavior. If the answer is yes then the system meets its specification. If the answer is no then the system violates its specification; in practice, the model checker can usually produce a counterexample for debugging purposes.

At this point it should be emphasized that the verification problem and the model checking problem are mathematical problems. The specification is formulated in mathematical logic. The verification problem is distinct from the pleasantness problem [Di89], which concerns having a specification capturing a system that is truly needed and wanted. The pleasantness problem is inherently pre-formal. Nonetheless, it has been found that carefully writing a formal specification (which may be the conjunction of many sub-specifications) is an excellent way to illuminate the murk associated with the pleasantness problem.

At the time of its introduction in the early 1980's, the prevailing paradigm for verification was a manual one of proof-theoretic reasoning using formal axioms and inference rules oriented towards sequential programs. The need to encompass concurrent programs, and the desire to avoid the difficulties with manual deductive proofs, motivated the development of model checking.

In my experience, constructing proofs was sufficiently difficult that it did seem there ought to be an easier alternative. The alternative was suggested by temporal logic. Temporal logic possessed a nice combination of expressiveness and decidability. It could naturally capture a variety of correctness properties, yet was decidable on account of the "Small" Finite Model Theorem, which ensured that any satisfiable formula was true in some finite model that was small. It should be stressed that the Small Finite Model Theorem concerns the satisfiability problem of propositional temporal logic, i.e., truth in some state graph.
This ultimately led to model checking, i.e., truth in a given state graph.

The origin and development of model checking will be described below. Despite being hampered by state explosion, over the past 25 years model checking has had a substantive impact on program verification efforts. Formal verification has progressed from discussions of how to manually prove programs correct to the routine, algorithmic, model-theoretic verification of many programs.

The remainder of the paper is organized as follows. Historical background is discussed in section 2, largely related to verification in the Floyd-Hoare paradigm; protocol verification is also considered. Section 3 describes temporal logic. A very general type of temporal logic, the mu-calculus, that defines correctness in terms of fixpoint expressions is described in section 4. The origin of model checking is described in section 5, along with some relevant personal influences on me. A discussion of model checking today is given in section 6. Some concluding remarks are made in section 7.

2 Background of Model Checking

At the time of the introduction of model checking in the early 1980's, axiomatic verification was the prevailing verification paradigm. The orientation of this paradigm was manual proofs of correctness for (deterministic) sequential programs, that nominally started with their input and terminated with their output. The work of Floyd [Fl67] established basic principles for proving partial correctness, a type of safety property, as well as termination and total correctness, forms of liveness properties. Hoare [Ho69] proposed an axiomatic basis for verification of partial correctness using axioms and inference rules in a formal deductive system. An important advantage of Hoare's approach is that it was compositional, so that the proof of a program was obtained from the proofs of its constituent subprograms.

The Floyd-Hoare framework was a tremendous success intellectually. It engendered great interest among researchers. Relevant notions from logic such as soundness and (relative) completeness as well as compositionality were investigated. Proof systems were proposed for new programming languages and constructs. Examples of proofs of correctness were given for small programs.

However, this framework turned out to be of limited use in practice. It did not scale up to "industrial strength" programs, despite its merits. Problems start with the approach being one of manual proof construction. These are formal proofs that can involve the manipulations of extremely long logical formulae. This can be inordinately tedious and error-prone work for a human. In practice, it may be wholly infeasible. Even if strict formal reasoning were used throughout, the plethora of technical detail could be overwhelming. By analogy, consider the task of a human adding 100,000 decimal numbers of 1,000 digits each. This is rudimentary in principle, but likely impossible in practice for any human to perform reliably. Similarly, the manual verification of 100,000 or 10,000 or even 1,000 line programs by hand is not feasible. Transcription errors alone would be prohibitive. Furthermore, substantial ingenuity may also be required on the part of the human to devise suitable assertions for loop invariants.

One can attempt to partially automate the process of proof construction using an interactive theorem prover. This can relieve much of the clerical burden. However, human ingenuity is still required for invariants and various lemmas.
Theorem provers may also require an expert operator to be used effectively.

Moreover, the proof-theoretic framework is one-sided. It focuses on providing a way to (syntactically) prove correct programs that are genuinely (semantically) correct. If one falters or fails in the laborious process of constructing a proof of a program, what then? Perhaps the program is really correct but one has not been clever enough to prove it so. On the other hand, if the program is really incorrect, the proof systems do not cater for proving incorrectness. Since in practice programs contain bugs in the overwhelming majority of cases, the inability to identify errors is a serious drawback of the proof-theoretic approach. It seemed there ought to be a better way. It would be suggested by temporal logic as discussed below.

Remark. We mention that the term verification is sometimes used in a specific sense meaning to establish correctness, while the term refutation (or falsification) is used meaning to detect an error. More generally, verification refers to the two-sided process of determining whether the system is correct or erroneous.

Lastly, we should also mention in this section the important and useful area of protocol verification. Network protocols are commonly finite state. This makes it possible to do simple graph reachability analysis to determine if a bad state is accessible (cf. [vB78], [Su78]). What was lacking here was a flexible and expressive means to specify a richer class of properties.

3 Temporal Logic

Modal and temporal logics provided key inspiration for model checking. Originally developed by philosophers, modal logic deals with different modalities of truth, distinguishing between P being true in the present circumstances, possibly P holding under some circumstances, and necessarily P holding under all circumstances. When the circumstances are points in time, we have a modal tense logic or temporal logic. Basic temporal modalities include sometimes P and always P.

Several writers including Prior [Pr67] and Burstall [Bu74] suggested that temporal logic might be useful in reasoning about computer programs. For instance, Prior suggested that it could be used to describe the "workings of a digital computer". But it was the seminal paper of Pnueli [Pn77] that made the critical suggestion of using temporal logic for reasoning about ongoing concurrent programs, which are often characterized as reactive systems. Reactive systems typically exhibit ideally nonterminating behavior so that they do not conform to the Floyd-Hoare paradigm. They are also typically non-deterministic, so that their non-repeatable behavior was not amenable to testing.
Their semantics can be given as infinite sequences of computation states (paths) or as computation trees. Examples of reactive systems include microprocessors, operating systems, banking networks, communication protocols, on-board avionics systems, automotive electronics, and many modern medical devices.

Pnueli used a temporal logic with basic temporal operators F (sometimes) and G (always); augmented with X (next-time) and U (until), this is today known as LTL (Linear Time Logic). Besides the basic temporal operators applied to propositional arguments, LTL permitted formulae to be built up by forming nestings and boolean combinations of subformulae. For example, G¬(C1 ∧ C2) expresses mutual exclusion of critical sections C1 and C2; formula G(T1 ⇒ (T1 U C1)) specifies that if process 1 is in its trying region it remains there until it eventually enters its critical section.

The advantages of such a logic include a high degree of expressiveness, permitting the ready capture of a wide range of correctness properties of concurrent programs, and a great deal of flexibility. Pnueli focussed on a proof-theoretic approach, giving a proof in a deductive system for temporal logic of a small example program. Pnueli does sketch a decision procedure for truth over finite state graphs. However, the complexity would be nonelementary, growing faster than any fixed composition of exponential functions, as it entails a reduction to S1S, the monadic Second order theory of 1 Successor (or SOLLO; see below). In his second paper [Pn79] on temporal logic the focus is again on the proof-theoretic approach.

I would claim that temporal logic has been a crucial factor in the success of model checking. We have one logical framework with a few basic temporal operators permitting the expression of limitless specifications. The connection with natural language is often significant as well. Temporal logic made it possible, by and large, to express the correctness properties that needed to be expressed. Without that ability, there would be no reason to use model checking.

Alternative temporal formalisms in some cases may be used as they can be more expressive or succinct than temporal logic. But historically it was temporal logic that was the driving force. These alternative temporal formalisms include: (finite state) automata (on infinite strings), which accept infinite inputs by infinitely often entering a designated set of automaton states [Bu62]. An expressively equivalent but less succinct formalism is that of ω-regular expressions; for example, ab*c^ω denotes strings of the form: one a, 0 or more b's, and infinitely many copies of c; and a property not expressible in LTL, (true P)^ω, ensuring that at every even moment P holds. FOLLO (First Order Language of Linear Order) allows quantification over individual times, for example ∀i≥0 Q(i); and SOLLO (Second Order Language of Linear Order) also allows quantification over sets of times corresponding to monadic predicates, such as ∃Q(Q(0) ∧ ∀i≥0 (Q(i) ⇒ Q(i+1))).1 These alternatives are sometimes used for reasons of familiarity, expressiveness or succinctness.
LTL is expressively equivalent to FOLLO, but FOLLO can be nonelementarily more succinct. This succinctness is generally found to offer no significant practical advantage. Moreover, model checking is intractably (nonelementarily) hard for FOLLO. Similarly, SOLLO is expressively equivalent to ω-regular expressions but nonelementarily more succinct. See [Em90] for further discussion.

1 Technically, the latter abbreviates ∃Q(Q(0) ∧ ∀i,j≥0 ((i<j ∧ ¬∃k(i<k<j)) ⇒ (Q(i) ⇒ Q(j)))).

Temporal logic comes in two broad styles. A linear time LTL assertion h is interpreted with respect to a single path. When interpreted over a program there is an implicit universal quantification over all paths of the program. An assertion of a branching time logic is interpreted over computation trees. The universal A (for all futures) and existential E (for some future) path quantifiers are important in this context. We can distinguish between AF P (along all futures, P eventually holds and is thus inevitable) and EF P (along some future, P eventually holds and is thus possible).

One widely used branching time logic is CTL (Computation Tree Logic) (cf. [CE81]). Its basic temporal modalities are A (for all futures) or E (for some future) followed by one of F (sometime), G (always), X (next-time), and U (until); compound formulae are built up from nestings and propositional combinations of CTL subformulae. CTL derives from [EC80]. There we defined the precursor branching time logic CTF, which has path quantifiers ∀fullpath and ∃fullpath, and is very similar to CTL. In CTF we could write ∀fullpath ∃state P as well as ∃fullpath ∃state P. These would be rendered in CTL as AF P and EF P, respectively. The streamlined notation was derived from [BMP81]. We also defined a modal mu-calculus FPF, and then showed how to translate CTF into FPF. The heart of the translation was characterizing the temporal modalities such as AF P and EF P as fixpoints. Once we had the fixpoint characterizations of these temporal operators, we were close to having model checking.

CTL and LTL are of incomparable expressive power. CTL can assert the existence of behaviors, e.g., AG EF start asserts that it is always possible to re-initialize a circuit. LTL can assert certain more complex behaviors along a computation, such as GF en ⇒ F ex, relating to fairness. (It turns out this formula is not expressible in CTL, but it is in "FairCTL" [EL87].) The branching time logic CTL* [EH86] provides a uniform framework that subsumes both LTL and CTL, but at the higher cost of deciding satisfiability. There has been an ongoing debate as to whether linear time logic or branching time logic is better for program reasoning (cf. [La80], [EH86], [Va01]).

Remark. The formal semantics of temporal logic formulae are defined with respect to a (Kripke) structure M = (S, S0, R, L), where S is a set of states, S0 comprises the initial states, R ⊆ S × S is a total binary relation, and L is a labelling of states with the atomic facts (propositions) true there. An LTL formula h such as F P is defined over a path x = t0, t1, t2, ... through M by the rule M, x |= F P iff ∃i≥0 P ∈ L(ti). Similarly a CTL formula f such as EG P holds of a state t0, denoted M, t0 |= EG P, iff there exists a path x = t0, t1, t2, ... in M such that ∀i≥0 P ∈ L(ti). For LTL h, we define M |= h iff for all paths x starting in S0, M, x |= h. For CTL formula f we define M |= f iff for each s ∈ S0, M, s |= f. A structure is also known as a state graph or state transition graph or transition system. See [Em90] for details.

4 The Mu-calculus

The mu-calculus may be viewed as a particular but very general temporal logic.
Some formulations go back to the work of de Bakker and Scott [deBS69]; we deal specifically with the (propositional or) modal mu-calculus (cf. [EC80], [Ko83]). The mu-calculus provides operators for defining correctness properties using recursive definitions plus least fixpoint and greatest fixpoint operators. Least fixpoints correspond to well-founded or terminating recursion, and are used to capture liveness or progress properties asserting that something does happen. Greatest fixpoints permit infinite recursion. They can be used to capture safety or invariance properties. The mu-calculus is very expressive and very flexible. It has been referred to as a "Theory of Everything".

The formulae of the mu-calculus are built up from atomic proposition constants P, Q, ..., atomic proposition variables Y, Z, ..., propositional connectives ∨, ∧, ¬, and the least fixpoint operator µ as well as the greatest fixpoint operator ν. Each fixpoint formula such as µZ.τ(Z) should be syntactically monotone, meaning Z occurs under an even number of negations, and similarly for ν.

The mu-calculus is interpreted with respect to a structure M = (S, R, L). The power set of S, 2^S, may be viewed as the complete lattice (2^S, S, ∅, ⊆, ∪, ∩). Intuitively, each (closed) formula may be identified with the set of states of S where it is true. Thus false, which corresponds to the empty set, is the bottom element; true, which corresponds to S, is the top element; and implication (∀s ∈ S [P(s) ⇒ Q(s)]), which corresponds to simple set-theoretic containment (P ⊆ Q), provides the partial ordering on the lattice. An open formula τ(Z) defines a mapping from 2^S → 2^S whose value varies as Z varies. A given τ: 2^S → 2^S is monotone provided that P ⊆ Q implies τ(P) ⊆ τ(Q).

Tarski-Knaster Theorem. (cf. [Ta55], [Kn28]) Let τ: 2^S → 2^S be a monotone functional. Then
(a) µY.τ(Y) = ∩{Y : τ(Y) = Y} = ∩{Y : τ(Y) ⊆ Y},
(b) νY.τ(Y) = ∪{Y : τ(Y) = Y} = ∪{Y : τ(Y) ⊇ Y},
(c) µY.τ(Y) = ∪_i τ^i(false), where i ranges over all ordinals of cardinality at most that of the state space S, so that when S is finite i ranges over [0:|S|], and
(d) νY.τ(Y) = ∩_i τ^i(true), where i ranges over all ordinals of cardinality at most that of the state space S, so that when S is finite i ranges over [0:|S|].

Consider the CTL property AF P. Note that it is a fixed point or fixpoint of the functional τ(Z) = P ∨ AX Z. That is, as the value of the input Z varies, the value of the output τ(Z) varies, and we have AF P = τ(AF P) = P ∨ AX AF P.
It can be shown that AF P is the least fixpoint of τ(Z), meaning the set of states associated with AF P is a subset of the set of states associated with Z, for any fixpoint Z = τ(Z). This might be denoted µZ. Z = τ(Z). More succinctly, we normally write just µZ.τ(Z). In this case we have AF P = µZ. P ∨ AX Z.

We can get some intuition for the mu-calculus by noting the following fixpoint characterizations for CTL properties:

EF P = µZ. P ∨ EX Z
AG P = νZ. P ∧ AX Z
AF P = µZ. P ∨ AX Z
EG P = νZ. P ∧ EX Z
A(P U Q) = µZ. Q ∨ (P ∧ AX Z)
E(P U Q) = µZ. Q ∨ (P ∧ EX Z)

For all these properties, as we see, the fixpoint characterizations are simple and plausible. It is not too difficult to give rigorous proofs of their correctness (cf. [EC80], [EL86]). We emphasize that the mu-calculus is a rich and powerful formalism; its formulae are really representations of alternating finite state automata on infinite trees [EJ91]. Since even such basic automata as deterministic finite state automata on finite strings can form quite complex "cans of worms", we should not be so surprised that it is possible to write down highly inscrutable mu-calculus formulae for which there is no readily apparent intuition regarding their intended meaning. The mu-calculus has also been referred to as the "assembly language of program logics", reflecting both its comprehensiveness and potentially intricate syntax. On the other hand, many mu-calculus characterizations of correctness properties are elegant due to its simple underlying mathematical organization.

In [EL86] we introduced the idea of model checking for the mu-calculus instead of testing satisfiability. We catered for efficient model checking in fragments of the mu-calculus. This provides a basis for practical (symbolic) model checking algorithms. We gave an algorithm essentially of complexity n^d, where d is the alternation depth reflecting the number of significantly nested least and greatest fixpoint operators. We showed that common logics such as CTL, LTL, and CTL* were of low alternation depth, d = 1 or d = 2. We also provided succinct fixpoint characterizations for various natural fair scheduling criteria. A symbolic fair cycle detection method, known as the "Emerson-Lei" algorithm, is comprised of a simple fixpoint characterization plus the Tarski-Knaster theorem. It is widely used in practice even though it has worst-case quadratic cost.
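To make the fixpoint characterizations listed above concrete, here is a minimal sketch (in Python, not from the paper) of computing EF P and AF P on an explicit finite state graph by iterating from the empty set, as licensed by clause (c) of the Tarski-Knaster theorem. The data representation, a dict of successor lists and a set of P-states, is an illustrative assumption.

def ex(states, successors, target):
    # EX target: states with at least one successor in target.
    return {s for s in states if any(t in target for t in successors[s])}

def ax(states, successors, target):
    # AX target: states all of whose successors lie in target.
    # (Successor relations are assumed total, as in the paper's structures.)
    return {s for s in states if all(t in target for t in successors[s])}

def least_fixpoint(tau):
    # Iterate tau starting from the empty set until stabilization.
    z = set()
    while True:
        nxt = tau(z)
        if nxt == z:
            return z
        z = nxt

def ef(states, successors, p_states):
    # EF P = muZ. P or EX Z
    return least_fixpoint(lambda z: p_states | ex(states, successors, z))

def af(states, successors, p_states):
    # AF P = muZ. P or AX Z
    return least_fixpoint(lambda z: p_states | ax(states, successors, z))

# Tiny example: states 0-2, edges 0->1, 1->2, 2->2; P holds only in state 2.
states = {0, 1, 2}
successors = {0: [1], 1: [2], 2: [2]}
print(ef(states, successors, {2}))   # {0, 1, 2}: state 2 is reachable from every state
print(af(states, successors, {2}))   # {0, 1, 2}: every path eventually reaches state 2

Composite formulae would then be handled by recursive descent over subformulae, with each temporal operator evaluated by such a fixpoint computation, in the spirit of the approach described in the next section.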
Empirically, the Emerson-Lei algorithm usually outperforms alternatives.

5 The Origin of Model Checking

There were several influences in my personal background that facilitated the development of model checking. In 1975 Zohar Manna gave a talk at the University of Texas on fixpoints and the Tarski-Knaster Theorem. I was familiar with Dijkstra's book [Di76] extending the Floyd-Hoare framework with wlp, the weakest liberal precondition for partial correctness, and wp, the weakest precondition for total correctness. It turns out that wlp and wp may be viewed as modal operators, for which Dijkstra implicitly gave fixpoint characterizations, although Dijkstra did not favor this viewpoint. Basu and Yeh [BY75] at Texas gave fixpoint characterizations of weakest preconditions for while loops. Ed Clarke [Cl79] gave similar fixpoint characterizations for both wp and wlp for a variety of control structures.

I will now describe how model checking originated at Harvard University. In prior work [EC80] we gave fixpoint characterizations for the main modalities of a logic that was essentially CTL. These would ultimately provide the first key ingredient of model checking.

Incidentally, [EC80] is a paper that could very well not have appeared. Somehow the courier service delivering the hard copies of the submission to Amsterdam for the program chair at CWI (Dutch for "Center for Mathematics and Computer Science") sent the package in bill-the-recipient mode. Fortunately, CWI was gracious and accepted the package. All that remained to undo this small misfortune was to get an overseas bank draft to reimburse them.

The next work, entitled "Design and Synthesis of Synchronization Skeletons using Branching Time Logic", was devoted to program synthesis and model checking. I suggested to Ed Clarke that we present the paper, which would be known as [CE81], at the IBM Logics of Programs workshop, since he had an invitation to participate.

Both the idea and the term model checking were introduced by Clarke and Emerson in [CE81]. Intuitively, this is a method to establish that a given program meets a given specification where:
– The program defines a finite state graph M.
– M is searched for elaborate patterns to determine if the specification f holds.
– Pattern specification is flexible.
– The method is efficient in the sizes of M and, ideally, f.
– The method is algorithmic.
– The method is practical.

The conception of model checking was inspired by program synthesis. I was interested in verification, but struck by the difficulties associated with manual proof-theoretic verification as noted above. It seemed that it might be possible to avoid verification altogether and mechanically synthesize a correct program directly from its CTL specification. The idea was to exploit the small model property possessed by certain decidable temporal logics: any satisfiable formula must have a "small" finite model of size that is a function of the formula size.
The synthesis method would be sound: if the input specification was satisfiable, it built a finite global state graph that was a model of the specification, from which individual processes could be extracted. The synthesis method should also be complete: if the specification was unsatisfiable, it should say so.

Initially, it seemed to me technically problematic to develop a sound and complete synthesis method for CTL. However, it could always be ensured that an alleged synthesis method was at least sound. This was clear because, given any finite model M and CTL specification f, one can algorithmically check that M is a genuine model of f by evaluating (verifying) the basic temporal modalities over M based on the Tarski-Knaster theorem. This was the second key ingredient of model checking. Composite temporal formulae comprised of nested subformulae and boolean combinations of subformulae could be verified by recursive descent. Thus, fixpoint characterizations, the Tarski-Knaster theorem, and recursion yielded model checking.

In this way we obtained the model checking framework. A model checker could be quite useful in practice, given the prevalence of finite state concurrent systems. The temporal logic CTL had the flexibility and expressiveness to capture many important correctness properties. In addition, the CTL model checking algorithm was of reasonable efficiency, polynomial in the structure and specification sizes. Incidentally, in later years we sometimes referred to temporal logic model checking.

The crucial roles of abstraction, synchronization skeletons, and finite state spaces were discussed in [CE81]:

The synchronization skeleton is an abstraction where detail irrelevant to synchronization is suppressed. Most solutions to synchronization problems are in fact given as synchronization skeletons.

Because synchronization skeletons are in general finite state ... propositional temporal logic can be used to specify their properties.

The finite model property ensures that any program whose synchronization properties can be expressed in propositional temporal logic can be realized by a finite state machine.

Conclusions of [CE81] included the following prognostications, which seem to have been on target:

[Program Synthesis] may in the long run be quite practical. Much additional research will be needed, however, to make it feasible in practice. ... We believe that practical [model checking] tools could be developed in the near future.

To sum up, [CE81] made several contributions. It introduced model checking, giving an algorithm of quadratic complexity O(|f||M|^2). It introduced the logic CTL. It gave an algorithmic method for concurrent program synthesis (that was both sound and complete). It argued that most concurrent systems can be abstracted to finite state synchronization skeletons. It described a method for efficiently model checking basic fairness using strongly connected components.
An NP-hardness result was established for checking certain assertions in a richer logic than CTL. A prototype (and non-robust) model checking tool, BMoC, was developed, primarily by a Harvard undergraduate, to permit verification of synchronization protocols.

A later paper [CES86] improved the complexity of CTL model checking to linear O(|f||M|). It showed how to efficiently model check relative to unconditional and weak fairness. The EMC model checking tool was described, and a version of the alternating bit protocol verified.

A general framework for efficiently model checking fairness properties was given in [EL87], along with a reduction showing that CTL* model checking could be done as efficiently as LTL model checking.

Independently, a similar method was developed in France by Sifakis and his student [QS82]. Programs were interpreted over transition systems.
Global Environmental Change 16 (2006) 268–281

Vulnerability

W. Neil Adger
Tyndall Centre for Climate Change Research, School of Environmental Sciences, University of East Anglia, Norwich NR4 7TJ, UK

Received 8 May 2005; received in revised form 13 February 2006; accepted 15 February 2006

Abstract

This paper reviews research traditions of vulnerability to environmental change and the challenges for present vulnerability research in integrating with the domains of resilience and adaptation. Vulnerability is the state of susceptibility to harm from exposure to stresses associated with environmental and social change and from the absence of capacity to adapt. Antecedent traditions include theories of vulnerability as entitlement failure and theories of hazard. Each of these areas has contributed to present formulations of vulnerability to environmental change as a characteristic of social-ecological systems linked to resilience. Research on vulnerability to the impacts of climate change spans all the antecedent and successor traditions. The challenges for vulnerability research are to develop robust and credible measures, to incorporate diverse methods that include perceptions of risk and vulnerability, and to incorporate governance research on the mechanisms that mediate vulnerability and promote adaptive action and resilience. These challenges are common to the domains of vulnerability, adaptation and resilience and form common ground for consilience and integration.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Vulnerability; Disasters; Food insecurity; Hazards; Social-ecological systems; Surprise; Governance; Adaptation; Resilience

1. Introduction

The purpose of this article is to review existing knowledge on analytical approaches to vulnerability to environmental change in order to propose synergies between research on vulnerability and on resilience of social-ecological systems. The concept of vulnerability has been a powerful analytical tool for describing states of susceptibility to harm, powerlessness, and marginality of both physical and social systems, and for guiding normative analysis of actions to enhance well-being through reduction of risk. In this article, I argue that emerging insights into the resilience of social-ecological systems complement and can significantly add to a converging research agenda on the challenges faced by human–environment interactions under stresses caused by global environmental and social change.

I review the precursors and the present emphases of vulnerability research. I argue that, following decades of vulnerability assessment that distinguished between process and outcome, much exciting current research emphasizes multiple stressors and multiple pathways of vulnerability. This current research can potentially contribute to emerging resilience science through methods and conceptualization of the stresses and processes that lead to threshold changes, particularly those involved in the social and institutional dynamics of social-ecological systems.

Part of the potential convergence and learning across vulnerability and resilience research comes from a consistent focus on social-ecological systems. The concept of a social-ecological system reflects the idea that human action and social structures are integral to nature and hence any distinction between social and natural systems is arbitrary. Clearly natural systems refer to biological and biophysical processes while social systems are made up of rules and institutions that mediate human use of resources as well as systems of knowledge and ethics
that interpret natural systems from a human perspective (Berkes and Folke, 1998). In the context of these social-ecological systems, resilience refers to the magnitude of disturbance that can be absorbed before a system changes to a radically different state, as well as the capacity to self-organise and the capacity for adaptation to emerging circumstances (e.g. Carpenter et al., 2001; Berkes et al., 2003; Folke, 2006). Vulnerability, by contrast, is usually portrayed in negative terms as the susceptibility to be harmed. The central idea of the often-cited IPCC definition (McCarthy et al., 2001) is that vulnerability is the degree to which a system is susceptible to and is unable to cope with adverse effects (of climate change). In all formulations, the key parameters of vulnerability are the stress to which a system is exposed, its sensitivity, and its adaptive capacity. Thus, vulnerability research and resilience research have common elements of interest: the shocks and stresses experienced by the social-ecological system, the response of the system, and the capacity for adaptive action. The points of convergence are more numerous and more fundamental than the points of divergence.

The different formulations of research needs, research methods, and normative implications of resilience and vulnerability research stem from, I believe, the formulation of the objectives of study (or the system) in each case. As Berkes and Folke (1998, p. 9) point out, 'there is no single universally accepted way of formulating the linkages between human and natural systems'. Other areas of research in the human–environment interaction (such as common property, ecological economics or adaptive management) conceptualize social-ecological linkages in different ways. The common property resource tradition, for example, stresses the importance of social, political and economic organizations in social-ecological systems, with institutions as mediating factors that govern the relationship between social systems and the ecosystems on which they depend (Dolšak and Ostrom, 2003). Ecological economics, by contrast, links social and natural systems through analysis of the interactions and substitutability of natural capital with other forms of capital (human, social and physical) (e.g. the 'containing and sustaining ecosystem' idea of Daly and Farley, 2004). Adaptive management, by contrast, deals with the unpredictable interactions between humans and ecosystems that evolve together; it is the science of explaining how social and natural systems learn through experimentation (Berkes and Folke, 1998). All of these other traditions (and both vulnerability and resilience research in effect) seek to elaborate the nature of social-ecological systems while using theories with explanatory power for particular dimensions of human–environment interactions.

Evolving insights into the vulnerability of social-ecological systems show that vulnerability is influenced by the build-up or erosion of the elements of social-ecological resilience. These are the ability to absorb the shocks, the autonomy of self-organisation and the ability to adapt both in advance of and in reaction to shocks. The impacts of, and recovery from, the Asian tsunami of 2004, or the ability of small islands to cope with weather-related extremes, for example, demonstrate how discrete events in nature expose underlying vulnerability and push systems into new domains where resilience may be reduced (Adger et al., 2005b).
In a world of global change, such discrete events are becoming more common. Indeed, risk and perturbation in many ways define and constitute the landscape of decision-making for social-ecological systems.

I proceed by examining the traditions within vulnerability research, including the fields of disasters research (delineated into human ecology, hazards, and the 'Pressure and Release' model) and research on entitlements. This discussion is complementary to other reviews that discern trends and strategies for useful and analytically powerful vulnerability research. Eakin and Luers (2006), Bankoff et al. (2004), Pelling (2003), Füssel and Klein (2006), Cutter (2003), Ionescu et al. (2005) and Kasperson et al. (2005), for example, present significant reviews of the evolution and present application of vulnerability tools and methods across resource management, social change and urbanization and climate change. These build on earlier elaborations by Liverman (1990), Dow (1992), Ribot et al. (1996), and others (see the paper by Janssen et al. (2006) for an evaluation of the seminal articles). Elements of disasters and entitlements theories have contributed to current use of vulnerability in the analysis of social-ecological systems and in sustainable livelihoods research. Livelihoods research remains, I argue, firmly rooted in social systems rather than integrative of risks across social-ecological systems. All these traditions and approaches are found in applications of vulnerability in the context of climate change. The remaining sections of the paper examine methodological developments and challenges to human dimensions research, particularly on measurement of vulnerability, dealing with perceptions of risk, and issues of governance. The paper demonstrates that these challenges are common to the fields of vulnerability, adaptation and resilience and hence point to common ground for learning between presently disparate traditions and communities.

2. Evolution of approaches to vulnerability

2.1. Antecedents: hazards and entitlements

A number of traditions and disciplines, from economics and anthropology to psychology and engineering, use the term vulnerability. It is only in the area of human–environment relationships that vulnerability has common, though contested, meaning. Human geography and human ecology have, in particular, theorized vulnerability to environmental change. Both of these disciplines have made contributions to present understanding of social-ecological systems, while related insights into entitlements grounded vulnerability analysis in theories of social change and decision-making. In this section, I argue that all these disciplinary traditions continue to contribute to emerging methods and concepts around social-ecological systems and their inherent and dynamic vulnerability.

While there are differences in approaches, there are many commonalities in vulnerability research in the environmental arena. First, it is widely noted that vulnerability to environmental change does not exist in isolation from the wider political economy of resource use. Vulnerability is driven by inadvertent or deliberate human action that reinforces self-interest and the distribution of power in addition to interacting with physical and ecological systems. Second, there are common terms across theoretical approaches: vulnerability is most often conceptualized as being constituted by components that include exposure and sensitivity to perturbations or external stresses, and the capacity to adapt.
adapt.Exposure is the nature and degree to which a system experiences environmental or socio-political stress.The characteristics of these stresses include their magnitude,frequency,duration and areal extent of the hazard (Burton et al.,1993).Sensitivity is the degree to which a system is modified or affected by perturbations.1Adaptive capacity is the ability of a system to evolve in order to accommodate environmental hazards or policy change and to expand the range of variability with which it can cope.There are,I believe,two relevant existing theories that relate to human use of environmental resources and to environmental risks:the vulnerability and related resilience research on social-ecological systems and the separate literature on vulnerability of livelihoods to poverty.Fig.1is an attempt to portray the overlap in ideas and those ideas,which are distinct from each other and is based on my reading of this literature.2Two major research traditions in vulnerability acted as seedbeds for ideas that eventually translated into current research on vulnerability of social and physical systems in an integrated manner.These two antecedents are the analysis of vulnerability as lack of entitlements and the analysis of vulnerability to natural hazards.These are depicted in the upper part of Fig.1,with the hazards tradition delineated into three overlapping areas of human ecology (or political ecology),natural hazards,and the so-called ‘Pressure and Release’model that spans the space between hazards and political ecology approaches.Other reviews of vulnerability have come to different conclusions on intellectual traditions.Cutter (1996)and Cutter et al.(2003),for example,classify research into first,vulnerability as exposure (conditions that make people or places vulnerable to hazard),second,vulnerability as social condition (measure of resilience to hazards),and third,‘the integration of potential exposures and societal resiliencewith a specific focus on places or regions (Cutter et al.,2003,p.243).O’Brien et al.(2005)identify similar trends in ‘vulnerability as outcome’and ‘contextual vulnerability’as two opposing research foci and traditions,relating to debates within the climate change area (see also Kelly and Adger,2000).These distinctions between outcome and processes of vulnerability are also important,though not captured in Fig.1,which portrays more of the disciplinary divide between those endeavours which largely ignore physical and biological systems (entitlements and liveli-hoods)and those that try to integrate social and ecological systems.The impetus for research on entitlements in livelihoods has been the need to explain food insecurity,civil strife and social upheaval.Research on the social impacts of natural hazards came from explaining commonalities between apparently different types of natural disasters and their impacts on society.But clearly these phenomena (of entitlement failure leading to famine and natural hazards)themselves are not independent of each other.While some famines can be triggered by extreme climate events,such as drought or flood,for example,vulnerability researchers have increasingly shown that famines and food insecurity are much more often caused by disease,war or other factors (Sen,1981;Swift,1989;Bohle et al.,1994;Blaikie et al.,1994).Entitlements-based explanations of vulnerability focussed almost exclusively on the social realm of institu-tions,well-being and on class,social status and gender as important variables while vulnerability research on 
natural hazards developed an integral knowledge of environmental risks with human response drawing on geographical and psychological perspectives in addition to social parameters of risk.Vulnerability to food insecurity is explained,through so-called entitlement theory,as a set of linked economic and institutional factors.Entitlements are the actual or potential resources available to individuals based on their own production,assets or reciprocal arrangements.Food insecurity is therefore a consequence of human activity,which can be prevented by modified behaviour and by political interventions.Vulnerability is the result of processes in which humans actively engage and which they can almost always prevent.The theory of entitlements as an explanation for famine causes was developed in the early 1980s (Sen,1981,1984)and displaced prior notions that shortfalls in food production through drought,flood,or pest,were the principal cause of famine.It focused instead on the effective demand for food,and the social and economic means of obtaining it.Entitlements are sources of welfare or income that are realized or are latent.They are ‘the set of alternative commodity bundles that a person can command in a society using the totality of rights and opportunities that he or she faces’(Sen,1984,p.497).Essentially,vulnerability of livelihoods to shocks occurs when people have insufficient real income and wealth,and when there is a breakdown in other previously held endowments.1The generic meaning of sensitivity is applied in the climate change field where McCarthy et al.(2001)in the IPCC report of 2001defines sensitivity and illustrates the generic meaning with reference to climate change risks thus:‘the degree to which a system is affected,either adversely or beneficially,by climate-related stimuli.The effect may be direct (e.g.,a change in crop yield in response to a change in the mean,range,or variability of temperature)or indirect (e.g.,damages caused by an increase in the frequency of coastal flooding due to sea level rise)’.2The observations leading to Fig.1are confirmed to an extent by the findings of Janssen et al.(2006)on the importance of Sen (1981)as a seminal reference across many areas of vulnerability research and the non-inclusion of the present livelihood vulnerability literature.W.N.Adger /Global Environmental Change 16(2006)268–281270The advantage of the entitlements approach to famine is that it can be used to explain situations where populations have been vulnerable to famine even where there are no absolute shortages of food or obvious environmental drivers at work.Famines and other crises occur when entitlements fail.While the entitlements approach to analysing vulner-ability to famine often underplayed ecological or physical risk,it succeeded in highlighting social differentiation in cause and outcome of vulnerability.The second research tradition(upper right in Fig.1)on natural hazards,by contrast has since its inception attempted to incorporate physical science,engineering and social science to explain linkages between system elements.The physical elements of exposure,probability and impacts of hazards,both seemingly natural and unnatural, are the basis for this tradition.Burton et al.(1978and 1993)summarized and synthesized decades of research and practice onflood management,geo-hazards and major technological hazards,deriving lessons on individual perceptions of risk,through to international collective action.They demonstrated that virtually all types of natural hazard and all 
social and political upheaval have vastly different impacts on different groups in society.For many natural hazards the vulnerability of human popula-tions is based on where they reside,their use of the natural resources,and the resources they have to cope.The human ecology tradition(sometimes labelled the political ecology stream—Cutter,1996)within analysis of vulnerability to hazards(upper right in Fig.1)argued that the discourse of hazard management,because of a perceived dominance of engineering approaches,failed to engage with the political and structural causes of vulner-ability within society.Human ecologists attempted to explain why the poor and marginalized have been most at risk from natural hazards(Hewitt,1983;Watts,1983), what Hewitt(1997)termed‘the human ecology of endangerment’.Poorer households tend to live in riskier areas in urban settlements,putting them at risk from flooding,disease and other chronic stresses.Women are differentially at risk from many elements of environmental hazards,including,for example,the burden of work in recovery of home and livelihood after an event(Fordham, 2003).Flooding in low-lying coastal areas associated with monsoon climates or hurricane impacts,for example,are seasonal and usually short lived,yet can have significant unexpected impacts for vulnerable sections of society. Burton et al.(1993),from a mainstream hazards tradition,argued that hazards are essentially mediated by institutional structures,and that increased economic activity does not necessarily reduce vulnerability to impacts of hazards in general.As with food insecurity,vulnerability to natural hazards has often been explained by technical and institutional factors.By contrast the human ecology approach emphasizes the role of economic development in adapting to changing exogenous risk and hence differences in class structure,governance,and economic dependency in the differential impacts of hazards(Hewitt,1983).Much of the world’s‘vulnerability as experienced’comes from perceptions of insecurity.Insecurity at its most basic level is not only a lack of security of food supply and availability and of economic well-being,but also freedom from strife and conflict.Hewitt(1994,1997)argues that violence and the‘disasters of war’have been pervasive sources of danger for societies,accounting for up to half of all reported famines in the past century.While war andresilience of social-ecologicalDirect flow of ideasIndirect flow of ideasFig.1.Traditions in vulnerability research and their evolution.W.N.Adger/Global Environmental Change16(2006)268–281271civil strife often exacerbate natural hazards,the perceptions of vulnerability associated with them are fundamentally different in that food insecurity,displacement and violence to create vulnerabilities are deliberate acts perpetrated towards political ends(Hewitt,1994,1997).In Fig.1,I portray these two traditions of hazards research as being successfully bridged by Blaikie and colleagues(1994)in their‘Pressure and Release’model of hazards.They proposed that physical or biological hazards represent one pressure and characteristic of vulnerability and that a further pressure comes from the cumulative progression of vulnerability,from root causes through to local geography and social differentiation.These two pressures culminate in the disasters that result from the additive pressures of hazard and vulnerability(Blaikie et al.,1994).The analysis captured the essence of vulner-ability from the physical hazards tradition while also identifying 
the proximate and underlying causes of vulner-ability within a human ecology framework.The analysis was also comprehensive in seeking to explain physical and biological hazards(though deliberately omitting technolo-gical hazards).Impacts associated with geological hazards often occur without much effective warning and with a speed of onset of only a few minutes.By contrast,the HIV/ AIDS epidemic is a long wave disaster with a slow onset but catastrophic impact(Barnett and Blaikie,1994; Stabinski et al.,2003).Blaikie et al.(1994)also prescribed actions and principles for recovery and mitigation of disasters that focussed explicitly on reducing vulnerability.The pressure and release model is portrayed in Fig.1as successfully synthesizing social and physical vulnerability.In being comprehensive and in giving equal weight to‘hazard’and ‘vulnerability’as pressures,the analysis fails to provide a systematic view of the mechanisms and processes of vulnerability.Operationalising the pressure and release model necessarily involves typologies of causes and categorical data on hazards types,limiting the analysis in terms of quantifiable or predictive relationships.In Fig.1,a separate stream on sustainable livelihoods and vulnerability to poverty is shown as a successor to vulnerability as entitlement failure.This research tradition, largely within development economics,tends not to consider integrative social-ecological systems and hence, but nevertheless complements the hazards-based ap-proaches in Fig.1through conceptualization and measure-ment of the links between risk and well-being at the individual level(Alwang et al.,2001;Adger and Winkels, 2006).A sustainable livelihood refers to the well-being of a person or household(Ellis,2000)and comprises the capabilities,assets and activities that lead to well-being (Chambers and Conway,1992;Allison and Ellis,2001). 
Vulnerability in this context refers to the susceptibility to circumstances of not being able to sustain a livelihood: the concepts are most often applied in the context of development assistance and poverty alleviation. While livelihoods are conceptualized as flowing from capital assets that include ecosystem services (natural capital), the physical and ecological dynamics of risk remain largely unaccounted for in this area of research. The principal focus is on consumption of poor households as a manifestation of vulnerability (Dercon, 2004). Given the importance of this tradition and the contribution that researchers in this field make to methods (see section below), it seems that cross-fertilization of development economics with vulnerability, adaptation and resilience research would yield new insights.

2.2. Successors and current research frontiers

The upper part of Fig. 1 and the discussions here portray a somewhat linear relationship between antecedent and successor traditions of vulnerability research. This is, of course, a caricature, given the influence of particular researchers across traditions and the overlap and cross-fertilization of ideas and methods. Nevertheless, from its origins in disasters and entitlement theories, there is a newly emerging synthesis of systems-oriented research attempting, through advances in methods, to understand vulnerability in a holistic manner in natural and social systems. Portraying vulnerability as a property of a social-ecological system, and seeking to elaborate the mechanisms and processes in a coupled manner, represents a conceptual advance in analysis (Turner et al., 2003a). Rather than focusing on multiple outcomes from a single physical stress, the approach proposed by Turner and colleagues (2003a) seeks to analyse the elements of vulnerability (its exposure, sensitivity and resilience) of a bounded system at a particular spatial scale. It also seeks to quantify and make explicit both the links to other scales and the impact of action to cope and respond on other elements of the system (such as the degree of exposure of ecological components or communities). The interdisciplinary and integrative nature of the framework is part of a wider effort to identify science that supports goals of sustainability (e.g. Kates et al., 2001) and is mirrored in other system-oriented vulnerability research such as that developed at the Potsdam Institute for Climate Impact Research (Schröter et al., 2005; Ionescu et al., 2005).
Integrative frameworks focused on interaction between properties of social-ecological systems have built on pioneering work, for example by Liverman (1990), that crucially developed robust methods for vulnerability assessment. In her work on vulnerability to drought in Mexico, Liverman (1990) argued for integrative approaches based on comparative quantitative assessment of the drivers of vulnerability. She showed that irrigation and land tenure have the greatest impact on the incidence of vulnerability to drought, making collectively owned ejido land more susceptible. Thus, using diverse sources of quantitative data, this study showed the places, the people and the drivers within the social-ecological system that led to vulnerability.

Following in that tradition, Luers and colleagues (2003) utilize the Turner et al. (2003a) framework to also examine vulnerability of social-ecological systems in an agricultural region of Mexico and demonstrate innovations in methods associated with this approach. In recognizing many of the constraints, they make a case for measuring the vulnerability of specific variables: they argue that vulnerability assessment should shift away from quantifying critical areas or vulnerable places towards measures that can be applied at any scale. They argue for assessing the vulnerability of the most important variables in the causal chain of vulnerability to specific sets of stressors. They develop generic metrics that attempt to assess the relationship between a wide range of stressors and the outcome variables of concern (Luers et al., 2003). In their most general form:

Vulnerability = (sensitivity to stress / state relative to threshold) × Prob. of exposure to stress.

The parameter under scrutiny here could be a physical or social parameter. In the case of Luers et al. (2003), they investigate the vulnerability of farming systems in an irrigated area of Mexico through examining agricultural yields. But the same generalized equation could examine disease prevalence, mortality in human populations, or income of households, all of which are legitimate potential measures within vulnerability analysis.

But other research presently argues that the key to understanding vulnerability lies in the interaction between social dynamics within a social-ecological system and that these dynamics are important for resilience. For example, livelihood specialization and diversity have been shown to be important elements in vulnerability to drought in Kenya and Tanzania (Eriksen et al., 2005). While these variables can be measured directly, it is the social capital and social relations that translate these parameters into vulnerability of place. Eriksen et al.
(2005)show that women are excluded from particular high-value activities:hence vulnerability is reproduced within certain parts of social systems through deep structural elements.Similarly,Eakin(2005)shows for Mexican farmers that diversity is key to avoiding vulnerability and that investment in commercial high-yielding irrigated agriculture can exacerbate vulnerability compared to a farming strategy based on maize(that is in effect more sensitive to drought).It is the multi-level interactions between system components(livelihoods, social structures and agricultural policy)that determine system vulnerability.Hence vulnerability assessment incorporates a significant range of parameters in building quantitative and qualita-tive pictures of the processes and outcomes of vulner-ability.These relate to ideas of resilience by identifying key elements of the system that represent adaptive capacity (often social capital and other assets—Pelling and High,2005;Adger,2003)and the impact of extreme event thresholds on creating vulnerabilities within systems.2.3.Traditions exemplified in vulnerability to climate changeResearch on vulnerability applied to the issue of climate change impacts and risk demonstrates the full range of research traditions while contributing in a significant way to the development of newly emerging systems vulner-ability analysis.Vulnerability research in climate change has,in some ways,a unique distinction of being a widely accepted and used term and an integral part of its scientific agenda.Climate change represents a classic multi-scale global change problem in that it is characterized by infinitely diverse actors,multiple stressors and multiple time scales.The existing evidence suggests that climate change impacts will substantially increase burdens on those populations that are already vulnerable to climate ex-tremes,and bear the brunt of projected(and increasingly observed)changes that are attributable to global climate change.The2003European heatwave and even the impacts of recent Atlantic hurricanes demonstrate essential ele-ments of system vulnerability(Poumade re et al.,2005; Stott et al.,2004;Kovats et al.,2005;O’Brien,2006) Groups that are already marginalized bear a dispropor-tionate burden of climate impacts,both in the developed countries and in the developing world.The science of climate change relies on insights from multiple disciplines and is founded on multiple epistemol-ogies.Climate change is,in addition,unusually focused on consensus(Oreskes,2004)because of the nature of evidence and interaction of science with a highly contested legal instrument,the UN Framework Convention on Climate Change.Within climate change,therefore,the reports of the Intergovernmental Panel on Climate Change (IPCC)have become an authoritative source that sets agendas and acts as a legitimizing device for research.It is therefore worth examining primary research on vulner-ability to climate change and its interpretation within the reports of the IPCC.The full range of concepts and approaches highlighted in Fig.1are used within vulnerability assessments of climate change.O’Brien et al.(2005)argues that this diversity of approaches confuses policy-makers in this arena—research is often not explicit about whether it portrays vulnerability as an outcome or vulnerability as a context in which climate risks are dealt with and adapted to.The IPCC defines vulnerability within the latest assessment report (McCarthy et al.,2001)as‘the degree to which a system is susceptible to,or unable to 
cope with, adverse effects of climate change, including climate variability and extremes. Vulnerability is a function of the character, magnitude, and rate of climate variation to which a system is exposed, its sensitivity, and its adaptive capacity'. Vulnerability to climate change in this context is therefore defined as a characteristic of a system and as a
The ability to view issues from multiple perspectives

English:

The ability to perceive issues from multiple perspectives is crucial in navigating the complexities of our interconnected world. It involves adopting a flexible mindset that acknowledges diverse viewpoints, cultural backgrounds, and contextual factors. By cultivating this skill, individuals can transcend narrow-mindedness and develop a more holistic understanding of various situations. Multidimensional thinking enables one to anticipate potential challenges, identify innovative solutions, and foster effective collaboration across diverse teams. Moreover, it promotes empathy and fosters constructive dialogue, as individuals strive to comprehend others' viewpoints and find common ground amidst differences. In a rapidly changing global landscape, the capacity to see beyond one's own vantage point is essential for informed decision-making and sustainable problem-solving. Embracing a multidimensional perspective empowers individuals to navigate ambiguity with confidence, adapt to evolving circumstances, and contribute positively to societal progress.
arXiv:physics/0302066v1 [physics.gen-ph] 19 Feb 2003

Detection of the Neutrino Fluxes from Several Sources

D.L. Khokhlov
Sumy State University
(Dated: February 2, 2008)

The detection of neutrinos moving from opposite directions is considered. The states of the particle of the detector interacting with the neutrinos are connected by the P-transformation. Hence only a half of the neutrinos contributes to the superposition of the neutrino states. Taking into account the effect of the opposite neutrino directions, the total neutrino flux from several sources is in the range 0.5–1 of that without the effect. The neutrino flux from nuclear reactors measured in the KamLAND experiment is 0.611 ± 0.085(stat) ± 0.041(syst) of the expected flux. Calculations for the conditions of the KamLAND experiment yield a neutrino flux, taking into account the effect of the opposite neutrino directions, of 0.555 of that without the effect, which may account for the neutrino flux observed in the KamLAND experiment.

PACS numbers: 03.65.-w, 28.50.Hw

Let two equal neutrino fluxes move towards the neutrino detector from opposite directions. The first neutrino flux moves from the x direction, and the second neutrino flux moves from the -x direction. The probability of the weak interaction of the particle of the detector with a single neutrino is given by

w = |\langle \psi_1 \psi_2 | g | \psi_\nu \psi_p \rangle|^2    (1)

where g is the weak coupling, \psi_p is the state of the particle of the detector, \psi_\nu is the state of the neutrino, and \psi_1, \psi_2 are the states of the particles arising in the interaction. The probability given by Eq. (1) is too small to register the event in a single interaction of the particle of the detector with the neutrino. The probability of the event is a sum of the probabilities of a number of interactions.

To define the probability of interaction of the particle of the detector with two neutrinos moving from opposite directions, one should form the superposition of the particle and neutrino states. The state of the particle interacting with the neutrino moving from the x direction and the state of the particle interacting with the neutrino moving from the -x direction are connected by the P-transformation

|\psi_p(-x)\rangle = P |\psi_p(x)\rangle.    (2)

Then the superposition of the particle and neutrino states should be given by

|\psi\rangle = \frac{1}{2} |\psi_\nu\rangle [ |\psi_p(x)\rangle + |\psi_p(-x)\rangle ] = \frac{1}{2} |\psi_\nu\rangle [ |\psi_p(x)\rangle + P|\psi_p(x)\rangle ] = \frac{1}{2} [ |\psi_\nu\rangle + P|\psi_\nu\rangle ] |\psi_p(x)\rangle.    (3)

… where one should sum the fluxes F_ix, F_iy as vectors. One can see that the neutrino flux given by Eq. (9) lies within the range 0.5–1 of the neutrino flux given by Eq. (5).

In the KamLAND experiment [1], the flux of \bar{\nu}_e's from distant nuclear reactors is measured. The ratio of the number of observed events to the expected number of events is reported [2] to be 0.611 ± 0.085(stat) ± 0.041(syst). Using the data on the location and thermal power of the reactors [3], one can estimate the neutrino flux from the reactors, which is approximately proportional to the thermal power flux P_th/4\pi L^2, where P_th is the reactor thermal power and L is the distance from the detector to the reactor. Hence one can estimate the effect of the opposite neutrino directions through the ratio of the neutrino flux calculated with Eq. (9) and that calculated with Eq. (5). More than 99% of the neutrino flux measured in the KamLAND experiment comes from Japanese and Korean nuclear reactors, so in the first approximation one can calculate the neutrino flux only from Japanese and Korean reactors. Calculations for the conditions of the KamLAND experiment yield the ratio of the neutrino flux taking into account the effect of the opposite neutrino directions to that without this effect, 0.555. Thus the effect of the opposite neutrino directions may account for the neutrino flux observed in the KamLAND experiment. Note that the value 0.555 is obtained with the use of the specification data on the reactor thermal power. To obtain the real value one should use the operation data on the reactor thermal power for the period of measurement.

[1] http://www.awa.tohoku.ac.jp/KamLAND/
[2] K. Eguchi et al., KamLAND Collaboration, submitted to Phys. Rev. Lett., hep-ex/0212021.
[3] /
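To make the flux-weighting step above concrete, the following Python sketch computes per-reactor flux weights P_th/4πL² and the two ingredients the text says enter the comparison: the scalar sum of the fluxes and their sum as vectors over the source directions. The reactor list, the bearing angles and the final `ratio` expression are all placeholders and assumptions on my part; in particular, the paper's Eq. (9) is not reproduced in this excerpt, so the combination used for `ratio` is only an illustrative reading chosen to stay within the quoted 0.5–1 range, not the author's formula.

```python
import math

# Hypothetical reactor records: (thermal power in GW, distance to detector in km,
# bearing of the source as seen from the detector, in degrees).
# Placeholder values, not the actual KamLAND reactor catalogue.
reactors = [
    (3.4, 160.0, 40.0),
    (2.8, 180.0, 215.0),
    (1.6, 345.0, 300.0),
]

def flux_weight(p_th_gw, distance_km):
    """Relative flux contribution, proportional to P_th / (4 * pi * L^2)."""
    return p_th_gw / (4.0 * math.pi * distance_km ** 2)

# Scalar sum of fluxes (no directional effect) and vector sum over source directions.
scalar_sum = 0.0
fx = fy = 0.0
for p_th, dist, bearing_deg in reactors:
    f = flux_weight(p_th, dist)
    scalar_sum += f
    theta = math.radians(bearing_deg)
    fx += f * math.cos(theta)
    fy += f * math.sin(theta)

vector_sum = math.hypot(fx, fy)

# Assumed stand-in for the Eq. (9) / Eq. (5) ratio: sources from opposite
# directions partially cancel, so the effective flux lies between half of and
# all of the scalar sum.  Illustration only; the paper's Eq. (9) is not shown here.
ratio = 0.5 * (scalar_sum + vector_sum) / scalar_sum

print(f"scalar flux sum: {scalar_sum:.3e}")
print(f"vector flux sum: {vector_sum:.3e}")
print(f"assumed directional-effect ratio: {ratio:.3f}  (always in [0.5, 1.0])")
```

Treating each source direction as a unit vector in the detector's horizontal plane is the only geometric choice made here; with real reactor coordinates the bearings would be computed from the site locations rather than listed by hand.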
Unit 1 Schooling

●Vocabulary (passage 1)
1. striking  2. slender, impeccable  3. discernible  4. sloppy  5. sagacity  6. arrogance  7. vow  8. homonym  9. glistening  10. fix the blame on

●Vocabulary (passage 2)
ABCAB DADDC

●Translation
1. I once had just such a strict teacher, an orchestra conductor. When someone played a wrong note, he would curse the player as an idiot; when someone played out of tune, he would stop conducting and roar. He was Jerry Kupchynsky, a Ukrainian immigrant.
2. Conventional wisdom holds that teachers should guide students through knowledge rather than simply cram it into their heads. Projects and group learning are the favoured means of study, while traditional methods such as lecturing and memorization are derided as "drill and kill", opposed and dismissed as sure ways to eat away at the creativity and motivation of the younger generation.
3. Rote memorization is now being offered as one explanation for why children from Indian families (whose powers of memory draw endless praise) trounce their rivals in the national spelling bee.
4. Of course, we also worry that failure will leave children emotionally scarred and undermine their self-esteem.
5. Researchers once believed that the most effective teachers would lead students through the material by means of group learning and discussion.
Unit 2 Music

●Vocabulary (passage 1)
1. molecular  2. hyperactive  3. integrated  4. retention  5. condense  6. clerical  7. alert  8. aesthetically  9. compelling  10. undeniably

●Vocabulary (passage 2)
1-10 BDABC DABDC

●Translation
1. A new study debunks a notion cherished by some Americans, namely that music can make children more intelligent.
2. On this point, only a few dozen studies have examined the rumoured intellectual benefits of studying music, and only five of them used randomized controlled trials, the study design that isolates cause and effect.
3. Although the effect lasted only about 15 minutes, it was touted as a sustained rise in intelligence, and in 1998 Georgia governor Zell Miller pledged to set aside more than $1,000,000 a year to provide every child in the state with a recording of classical music.
Network impacts of a road capacity reduction:Empirical analysisand model predictionsDavid Watling a ,⇑,David Milne a ,Stephen Clark baInstitute for Transport Studies,University of Leeds,Woodhouse Lane,Leeds LS29JT,UK b Leeds City Council,Leonardo Building,2Rossington Street,Leeds LS28HD,UKa r t i c l e i n f o Article history:Received 24May 2010Received in revised form 15July 2011Accepted 7September 2011Keywords:Traffic assignment Network models Equilibrium Route choice Day-to-day variabilitya b s t r a c tIn spite of their widespread use in policy design and evaluation,relatively little evidencehas been reported on how well traffic equilibrium models predict real network impacts.Here we present what we believe to be the first paper that together analyses the explicitimpacts on observed route choice of an actual network intervention and compares thiswith the before-and-after predictions of a network equilibrium model.The analysis isbased on the findings of an empirical study of the travel time and route choice impactsof a road capacity reduction.Time-stamped,partial licence plates were recorded across aseries of locations,over a period of days both with and without the capacity reduction,and the data were ‘matched’between locations using special-purpose statistical methods.Hypothesis tests were used to identify statistically significant changes in travel times androute choice,between the periods of days with and without the capacity reduction.A trafficnetwork equilibrium model was then independently applied to the same scenarios,and itspredictions compared with the empirical findings.From a comparison of route choice pat-terns,a particularly influential spatial effect was revealed of the parameter specifying therelative values of distance and travel time assumed in the generalised cost equations.When this parameter was ‘fitted’to the data without the capacity reduction,the networkmodel broadly predicted the route choice impacts of the capacity reduction,but with othervalues it was seen to perform poorly.The paper concludes by discussing the wider practicaland research implications of the study’s findings.Ó2011Elsevier Ltd.All rights reserved.1.IntroductionIt is well known that altering the localised characteristics of a road network,such as a planned change in road capacity,will tend to have both direct and indirect effects.The direct effects are imparted on the road itself,in terms of how it can deal with a given demand flow entering the link,with an impact on travel times to traverse the link at a given demand flow level.The indirect effects arise due to drivers changing their travel decisions,such as choice of route,in response to the altered travel times.There are many practical circumstances in which it is desirable to forecast these direct and indirect impacts in the context of a systematic change in road capacity.For example,in the case of proposed road widening or junction improvements,there is typically a need to justify econom-ically the required investment in terms of the benefits that will likely accrue.There are also several examples in which it is relevant to examine the impacts of road capacity reduction .For example,if one proposes to reallocate road space between alternative modes,such as increased bus and cycle lane provision or a pedestrianisation scheme,then typically a range of alternative designs exist which may differ in their ability to accommodate efficiently the new traffic and routing patterns.0965-8564/$-see front matter Ó2011Elsevier Ltd.All rights 
reserved.doi:10.1016/j.tra.2011.09.010⇑Corresponding author.Tel.:+441133436612;fax:+441133435334.E-mail address:d.p.watling@ (D.Watling).168 D.Watling et al./Transportation Research Part A46(2012)167–189Through mathematical modelling,the alternative designs may be tested in a simulated environment and the most efficient selected for implementation.Even after a particular design is selected,mathematical models may be used to adjust signal timings to optimise the use of the transport system.Road capacity may also be affected periodically by maintenance to essential services(e.g.water,electricity)or to the road itself,and often this can lead to restricted access over a period of days and weeks.In such cases,planning authorities may use modelling to devise suitable diversionary advice for drivers,and to plan any temporary changes to traffic signals or priorities.Berdica(2002)and Taylor et al.(2006)suggest more of a pro-ac-tive approach,proposing that models should be used to test networks for potential vulnerability,before any reduction mate-rialises,identifying links which if reduced in capacity over an extended period1would have a substantial impact on system performance.There are therefore practical requirements for a suitable network model of travel time and route choice impacts of capac-ity changes.The dominant method that has emerged for this purpose over the last decades is clearly the network equilibrium approach,as proposed by Beckmann et al.(1956)and developed in several directions since.The basis of using this approach is the proposition of what are believed to be‘rational’models of behaviour and other system components(e.g.link perfor-mance functions),with site-specific data used to tailor such models to particular case studies.Cross-sectional forecasts of network performance at specific road capacity states may then be made,such that at the time of any‘snapshot’forecast, drivers’route choices are in some kind of individually-optimum state.In this state,drivers cannot improve their route selec-tion by a unilateral change of route,at the snapshot travel time levels.The accepted practice is to‘validate’such models on a case-by-case basis,by ensuring that the model—when supplied with a particular set of parameters,input network data and input origin–destination demand data—reproduces current mea-sured mean link trafficflows and mean journey times,on a sample of links,to some degree of accuracy(see for example,the practical guidelines in TMIP(1997)and Highways Agency(2002)).This kind of aggregate level,cross-sectional validation to existing conditions persists across a range of network modelling paradigms,ranging from static and dynamic equilibrium (Florian and Nguyen,1976;Leonard and Tough,1979;Stephenson and Teply,1984;Matzoros et al.,1987;Janson et al., 1986;Janson,1991)to micro-simulation approaches(Laird et al.,1999;Ben-Akiva et al.,2000;Keenan,2005).While such an approach is plausible,it leaves many questions unanswered,and we would particularly highlight two: 1.The process of calibration and validation of a network equilibrium model may typically occur in a cycle.That is to say,having initially calibrated a model using the base data sources,if the subsequent validation reveals substantial discrep-ancies in some part of the network,it is then natural to adjust the model parameters(including perhaps even the OD matrix elements)until the model outputs better reflect the validation data.2In this process,then,we allow the adjustment of potentially a large number of network parameters 
and input data in order to replicate the validation data,yet these data themselves are highly aggregate,existing only at the link level.To be clear here,we are talking about a level of coarseness even greater than that in aggregate choice models,since we cannot even infer from link-level data the aggregate shares on alternative routes or OD movements.The question that arises is then:how many different combinations of parameters and input data values might lead to a similar link-level validation,and even if we knew the answer to this question,how might we choose between these alternative combinations?In practice,this issue is typically neglected,meaning that the‘valida-tion’is a rather weak test of the model.2.Since the data are cross-sectional in time(i.e.the aim is to reproduce current base conditions in equilibrium),then in spiteof the large efforts required in data collection,no empirical evidence is routinely collected regarding the model’s main purpose,namely its ability to predict changes in behaviour and network performance under changes to the network/ demand.This issue is exacerbated by the aggregation concerns in point1:the‘ambiguity’in choosing appropriate param-eter values to satisfy the aggregate,link-level,base validation strengthens the need to independently verify that,with the selected parameter values,the model responds reliably to changes.Although such problems–offitting equilibrium models to cross-sectional data–have long been recognised by practitioners and academics(see,e.g.,Goodwin,1998), the approach described above remains the state-of-practice.Having identified these two problems,how might we go about addressing them?One approach to thefirst problem would be to return to the underlying formulation of the network model,and instead require a model definition that permits analysis by statistical inference techniques(see for example,Nakayama et al.,2009).In this way,we may potentially exploit more information in the variability of the link-level data,with well-defined notions(such as maximum likelihood)allowing a systematic basis for selection between alternative parameter value combinations.However,this approach is still using rather limited data and it is natural not just to question the model but also the data that we use to calibrate and validate it.Yet this is not altogether straightforward to resolve.As Mahmassani and Jou(2000) remarked:‘A major difficulty...is obtaining observations of actual trip-maker behaviour,at the desired level of richness, simultaneously with measurements of prevailing conditions’.For this reason,several authors have turned to simulated gaming environments and/or stated preference techniques to elicit information on drivers’route choice behaviour(e.g. 1Clearly,more sporadic and less predictable reductions in capacity may also occur,such as in the case of breakdowns and accidents,and environmental factors such as severe weather,floods or landslides(see for example,Iida,1999),but the responses to such cases are outside the scope of the present paper. 
2Some authors have suggested more systematic,bi-level type optimization processes for thisfitting process(e.g.Xu et al.,2004),but this has no material effect on the essential points above.D.Watling et al./Transportation Research Part A46(2012)167–189169 Mahmassani and Herman,1990;Iida et al.,1992;Khattak et al.,1993;Vaughn et al.,1995;Wardman et al.,1997;Jou,2001; Chen et al.,2001).This provides potentially rich information for calibrating complex behavioural models,but has the obvious limitation that it is based on imagined rather than real route choice situations.Aside from its common focus on hypothetical decision situations,this latter body of work also signifies a subtle change of emphasis in the treatment of the overall network calibration problem.Rather than viewing the network equilibrium calibra-tion process as a whole,the focus is on particular components of the model;in the cases above,the focus is on that compo-nent concerned with how drivers make route decisions.If we are prepared to make such a component-wise analysis,then certainly there exists abundant empirical evidence in the literature,with a history across a number of decades of research into issues such as the factors affecting drivers’route choice(e.g.Wachs,1967;Huchingson et al.,1977;Abu-Eisheh and Mannering,1987;Duffell and Kalombaris,1988;Antonisse et al.,1989;Bekhor et al.,2002;Liu et al.,2004),the nature of travel time variability(e.g.Smeed and Jeffcoate,1971;Montgomery and May,1987;May et al.,1989;McLeod et al., 1993),and the factors affecting trafficflow variability(Bonsall et al.,1984;Huff and Hanson,1986;Ribeiro,1994;Rakha and Van Aerde,1995;Fox et al.,1998).While these works provide useful evidence for the network equilibrium calibration problem,they do not provide a frame-work in which we can judge the overall‘fit’of a particular network model in the light of uncertainty,ambient variation and systematic changes in network attributes,be they related to the OD demand,the route choice process,travel times or the network data.Moreover,such data does nothing to address the second point made above,namely the question of how to validate the model forecasts under systematic changes to its inputs.The studies of Mannering et al.(1994)and Emmerink et al.(1996)are distinctive in this context in that they address some of the empirical concerns expressed in the context of travel information impacts,but their work stops at the stage of the empirical analysis,without a link being made to net-work prediction models.The focus of the present paper therefore is both to present thefindings of an empirical study and to link this empirical evidence to network forecasting models.More recently,Zhu et al.(2010)analysed several sources of data for evidence of the traffic and behavioural impacts of the I-35W bridge collapse in Minneapolis.Most pertinent to the present paper is their location-specific analysis of linkflows at 24locations;by computing the root mean square difference inflows between successive weeks,and comparing the trend for 2006with that for2007(the latter with the bridge collapse),they observed an apparent transient impact of the bridge col-lapse.They also showed there was no statistically-significant evidence of a difference in the pattern offlows in the period September–November2007(a period starting6weeks after the bridge collapse),when compared with the corresponding period in2006.They suggested that this was indicative of the length of a‘re-equilibration process’in a conceptual sense, though did not explicitly 
compare their empiricalfindings with those of a network equilibrium model.The structure of the remainder of the paper is as follows.In Section2we describe the process of selecting the real-life problem to analyse,together with the details and rationale behind the survey design.Following this,Section3describes the statistical techniques used to extract information on travel times and routing patterns from the survey data.Statistical inference is then considered in Section4,with the aim of detecting statistically significant explanatory factors.In Section5 comparisons are made between the observed network data and those predicted by a network equilibrium model.Finally,in Section6the conclusions of the study are highlighted,and recommendations made for both practice and future research.2.Experimental designThe ultimate objective of the study was to compare actual data with the output of a traffic network equilibrium model, specifically in terms of how well the equilibrium model was able to correctly forecast the impact of a systematic change ap-plied to the network.While a wealth of surveillance data on linkflows and travel times is routinely collected by many local and national agencies,we did not believe that such data would be sufficiently informative for our purposes.The reason is that while such data can often be disaggregated down to small time step resolutions,the data remains aggregate in terms of what it informs about driver response,since it does not provide the opportunity to explicitly trace vehicles(even in aggre-gate form)across more than one location.This has the effect that observed differences in linkflows might be attributed to many potential causes:it is especially difficult to separate out,say,ambient daily variation in the trip demand matrix from systematic changes in route choice,since both may give rise to similar impacts on observed linkflow patterns across re-corded sites.While methods do exist for reconstructing OD and network route patterns from observed link data(e.g.Yang et al.,1994),these are typically based on the premise of a valid network equilibrium model:in this case then,the data would not be able to give independent information on the validity of the network equilibrium approach.For these reasons it was decided to design and implement a purpose-built survey.However,it would not be efficient to extensively monitor a network in order to wait for something to happen,and therefore we required advance notification of some planned intervention.For this reason we chose to study the impact of urban maintenance work affecting the roads,which UK local government authorities organise on an annual basis as part of their‘Local Transport Plan’.The city council of York,a historic city in the north of England,agreed to inform us of their plans and to assist in the subsequent data collection exercise.Based on the interventions planned by York CC,the list of candidate studies was narrowed by considering factors such as its propensity to induce significant re-routing and its impact on the peak periods.Effectively the motivation here was to identify interventions that were likely to have a large impact on delays,since route choice impacts would then likely be more significant and more easily distinguished from ambient variability.This was notably at odds with the objectives of York CC,170 D.Watling et al./Transportation Research Part A46(2012)167–189in that they wished to minimise disruption,and so where possible York CC planned interventions to take place at times of day and 
of the year where impacts were minimised;therefore our own requirement greatly reduced the candidate set of studies to monitor.A further consideration in study selection was its timing in the year for scheduling before/after surveys so to avoid confounding effects of known significant‘seasonal’demand changes,e.g.the impact of the change between school semesters and holidays.A further consideration was York’s role as a major tourist attraction,which is also known to have a seasonal trend.However,the impact on car traffic is relatively small due to the strong promotion of public trans-port and restrictions on car travel and parking in the historic centre.We felt that we further mitigated such impacts by sub-sequently choosing to survey in the morning peak,at a time before most tourist attractions are open.Aside from the question of which intervention to survey was the issue of what data to collect.Within the resources of the project,we considered several options.We rejected stated preference survey methods as,although they provide a link to personal/socio-economic drivers,we wanted to compare actual behaviour with a network model;if the stated preference data conflicted with the network model,it would not be clear which we should question most.For revealed preference data, options considered included(i)self-completion diaries(Mahmassani and Jou,2000),(ii)automatic tracking through GPS(Jan et al.,2000;Quiroga et al.,2000;Taylor et al.,2000),and(iii)licence plate surveys(Schaefer,1988).Regarding self-comple-tion surveys,from our own interview experiments with self-completion questionnaires it was evident that travellersfind it relatively difficult to recall and describe complex choice options such as a route through an urban network,giving the po-tential for significant errors to be introduced.The automatic tracking option was believed to be the most attractive in this respect,in its potential to accurately map a given individual’s journey,but the negative side would be the potential sample size,as we would need to purchase/hire and distribute the devices;even with a large budget,it is not straightforward to identify in advance the target users,nor to guarantee their cooperation.Licence plate surveys,it was believed,offered the potential for compromise between sample size and data resolution: while we could not track routes to the same resolution as GPS,by judicious location of surveyors we had the opportunity to track vehicles across more than one location,thus providing route-like information.With time-stamped licence plates, the matched data would also provide journey time information.The negative side of this approach is the well-known poten-tial for significant recording errors if large sample rates are required.Our aim was to avoid this by recording only partial licence plates,and employing statistical methods to remove the impact of‘spurious matches’,i.e.where two different vehi-cles with the same partial licence plate occur at different locations.Moreover,extensive simulation experiments(Watling,1994)had previously shown that these latter statistical methods were effective in recovering the underlying movements and travel times,even if only a relatively small part of the licence plate were recorded,in spite of giving a large potential for spurious matching.We believed that such an approach reduced the opportunity for recorder error to such a level to suggest that a100%sample rate of vehicles passing may be feasible.This was tested in a pilot study conducted by the project team,with 
dictaphones used to record a100%sample of time-stamped, partial licence plates.Independent,duplicate observers were employed at the same location to compare error rates;the same study was also conducted with full licence plates.The study indicated that100%surveys with dictaphones would be feasible in moderate trafficflow,but only if partial licence plate data were used in order to control observation errors; for higherflow rates or to obtain full number plate data,video surveys should be considered.Other important practical les-sons learned from the pilot included the need for clarity in terms of vehicle types to survey(e.g.whether to include motor-cycles and taxis),and of the phonetic alphabet used by surveyors to avoid transcription ambiguities.Based on the twin considerations above of planned interventions and survey approach,several candidate studies were identified.For a candidate study,detailed design issues involved identifying:likely affected movements and alternative routes(using local knowledge of York CC,together with an existing network model of the city),in order to determine the number and location of survey sites;feasible viewpoints,based on site visits;the timing of surveys,e.g.visibility issues in the dark,winter evening peak period;the peak duration from automatic trafficflow data;and specific survey days,in view of public/school holidays.Our budget led us to survey the majority of licence plate sites manually(partial plates by audio-tape or,in lowflows,pen and paper),with video surveys limited to a small number of high-flow sites.From this combination of techniques,100%sampling rate was feasible at each site.Surveys took place in the morning peak due both to visibility considerations and to minimise conflicts with tourist/special event traffic.From automatic traffic count data it was decided to survey the period7:45–9:15as the main morning peak period.This design process led to the identification of two studies:2.1.Lendal Bridge study(Fig.1)Lendal Bridge,a critical part of York’s inner ring road,was scheduled to be closed for maintenance from September2000 for a duration of several weeks.To avoid school holidays,the‘before’surveys were scheduled for June and early September.It was decided to focus on investigating a significant southwest-to-northeast movement of traffic,the river providing a natural barrier which suggested surveying the six river crossing points(C,J,H,K,L,M in Fig.1).In total,13locations were identified for survey,in an attempt to capture traffic on both sides of the river as well as a crossing.2.2.Fishergate study(Fig.2)The partial closure(capacity reduction)of the street known as Fishergate,again part of York’s inner ring road,was scheduled for July2001to allow repairs to a collapsed sewer.Survey locations were chosen in order to intercept clockwiseFig.1.Intervention and survey locations for Lendal Bridge study.around the inner ring road,this being the direction of the partial closure.A particular aim wasFulford Road(site E in Fig.2),the main radial affected,with F and K monitoring local diversion I,J to capture wider-area diversion.studies,the plan was to survey the selected locations in the morning peak over a period of approximately covering the three periods before,during and after the intervention,with the days selected so holidays or special events.Fig.2.Intervention and survey locations for Fishergate study.In the Lendal Bridge study,while the‘before’surveys proceeded as planned,the bridge’s actualfirst day of closure on Sep-tember11th2000also 
marked the beginning of the UK fuel protests(BBC,2000a;Lyons and Chaterjee,2002).Trafficflows were considerably affected by the scarcity of fuel,with congestion extremely low in thefirst week of closure,to the extent that any changes could not be attributed to the bridge closure;neither had our design anticipated how to survey the impacts of the fuel shortages.We thus re-arranged our surveys to monitor more closely the planned re-opening of the bridge.Unfor-tunately these surveys were hampered by a second unanticipated event,namely the wettest autumn in the UK for270years and the highest level offlooding in York since records began(BBC,2000b).Theflooding closed much of the centre of York to road traffic,including our study area,as the roads were impassable,and therefore we abandoned the planned‘after’surveys. As a result of these events,the useable data we had(not affected by the fuel protests orflooding)consisted offive‘before’days and one‘during’day.In the Fishergate study,fortunately no extreme events occurred,allowing six‘before’and seven‘during’days to be sur-veyed,together with one additional day in the‘during’period when the works were temporarily removed.However,the works over-ran into the long summer school holidays,when it is well-known that there is a substantial seasonal effect of much lowerflows and congestion levels.We did not believe it possible to meaningfully isolate the impact of the link fully re-opening while controlling for such an effect,and so our plans for‘after re-opening’surveys were abandoned.3.Estimation of vehicle movements and travel timesThe data resulting from the surveys described in Section2is in the form of(for each day and each study)a set of time-stamped,partial licence plates,observed at a number of locations across the network.Since the data include only partial plates,they cannot simply be matched across observation points to yield reliable estimates of vehicle movements,since there is ambiguity in whether the same partial plate observed at different locations was truly caused by the same vehicle. 
Indeed, since the observed system is 'open' — in the sense that not all points of entry, exit, generation and attraction are monitored — the question is not just which of several potential matches to accept, but also whether there is any match at all. That is to say, an apparent match between data at two observation points could be caused by two separate vehicles that passed no other observation point. The first stage of analysis therefore applied a series of specially-designed statistical techniques to reconstruct the vehicle movements and point-to-point travel time distributions from the observed data, allowing for all such ambiguities in the data. Although the detailed derivations of each method are not given here, since they may be found in the references provided, it is necessary to understand some of the characteristics of each method in order to interpret the results subsequently provided. Furthermore, since some of the basic techniques required modification relative to the published descriptions, then in order to explain these adaptations it is necessary to understand some of the theoretical basis.

3.1. Graphical method for estimating point-to-point travel time distributions

The preliminary technique applied to each data set was the graphical method described in Watling and Maher (1988). This method is derived for analysing partial registration plate data for unidirectional movement between a pair of observation stations (referred to as an 'origin' and a 'destination'). Thus in the data study here, it must be independently applied to given pairs of observation stations, without regard for the interdependencies between observation station pairs. On the other hand, it makes no assumption that the system is 'closed'; there may be vehicles that pass the origin that do not pass the destination, and vice versa. While limited in considering only two-point surveys, the attraction of the graphical technique is that it is a non-parametric method, with no assumptions made about the arrival time distributions at the observation points (they may be non-uniform in particular), and no assumptions made about the journey time probability density. It is therefore very suitable as a first means of investigative analysis for such data.

The method begins by forming all pairs of possible matches in the data, of which some will be genuine matches (the pair of observations were due to a single vehicle) and the remainder spurious matches. Thus, for example, if there are three origin observations and two destination observations of a particular partial registration number, then six possible matches may be formed, of which clearly no more than two can be genuine (and possibly only one or zero are genuine). A scatter plot may then be drawn for each possible match of the observation time at the origin versus that at the destination. The characteristic pattern of such a plot is as that shown in Fig. 4a, with a dense 'line' of points (which will primarily be the genuine matches) superimposed upon a scatter of points over the whole region (which will primarily be the spurious matches). If we were to assume uniform arrival rates at the observation stations, then the spurious matches would be uniformly distributed over this plot; however, we shall avoid making such a restrictive assumption. The method begins by making a coarse estimate of the total number of genuine matches across the whole of this plot. As part of this analysis we then assume knowledge of, for any randomly selected vehicle, the probabilities:

h_k = \Pr(\text{vehicle is of the } k\text{th type of partial registration plate}), \quad k = 1, 2, \ldots, m, \qquad \text{where } \sum_{k=1}^{m} h_k = 1
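To make the matching step concrete, the sketch below enumerates every candidate origin-destination pair sharing a partial plate, which is the raw material for the scatter plot described above. It is a minimal illustration rather than the authors' implementation; the (timestamp, partial_plate) record layout and the function name are assumptions for the example only.

```python
# Minimal sketch, not the authors' implementation: enumerate every candidate
# origin-destination match of a partial plate. The (timestamp, partial_plate)
# record layout is an assumption for illustration only.
from collections import defaultdict

def candidate_matches(origin_obs, destination_obs):
    """Return (t_origin, t_destination) pairs for all observation pairs that
    share the same partial plate; genuine and spurious matches are mixed."""
    dest_by_plate = defaultdict(list)
    for t, plate in destination_obs:
        dest_by_plate[plate].append(t)
    pairs = []
    for t, plate in origin_obs:
        for t_dest in dest_by_plate.get(plate, []):
            pairs.append((t, t_dest))
    return pairs

# Example: three origin and two destination sightings of one partial plate
# yield six candidate matches, of which at most two can be genuine.
origin = [(10.0, "AB1"), (55.0, "AB1"), (90.0, "AB1")]
destination = [(130.0, "AB1"), (170.0, "AB1")]
print(len(candidate_matches(origin, destination)))  # 6
```

Plotting the resulting pairs (origin time against destination time) reproduces the pattern described for Fig. 4a: a dense band of predominantly genuine matches superimposed on a diffuse scatter of spurious ones.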
History of infrared detectorsA.ROGALSKI*Institute of Applied Physics, Military University of Technology, 2 Kaliskiego Str.,00–908 Warsaw, PolandThis paper overviews the history of infrared detector materials starting with Herschel’s experiment with thermometer on February11th,1800.Infrared detectors are in general used to detect,image,and measure patterns of the thermal heat radia−tion which all objects emit.At the beginning,their development was connected with thermal detectors,such as ther−mocouples and bolometers,which are still used today and which are generally sensitive to all infrared wavelengths and op−erate at room temperature.The second kind of detectors,called the photon detectors,was mainly developed during the20th Century to improve sensitivity and response time.These detectors have been extensively developed since the1940’s.Lead sulphide(PbS)was the first practical IR detector with sensitivity to infrared wavelengths up to~3μm.After World War II infrared detector technology development was and continues to be primarily driven by military applications.Discovery of variable band gap HgCdTe ternary alloy by Lawson and co−workers in1959opened a new area in IR detector technology and has provided an unprecedented degree of freedom in infrared detector design.Many of these advances were transferred to IR astronomy from Departments of Defence ter on civilian applications of infrared technology are frequently called“dual−use technology applications.”One should point out the growing utilisation of IR technologies in the civilian sphere based on the use of new materials and technologies,as well as the noticeable price decrease in these high cost tech−nologies.In the last four decades different types of detectors are combined with electronic readouts to make detector focal plane arrays(FPAs).Development in FPA technology has revolutionized infrared imaging.Progress in integrated circuit design and fabrication techniques has resulted in continued rapid growth in the size and performance of these solid state arrays.Keywords:thermal and photon detectors, lead salt detectors, HgCdTe detectors, microbolometers, focal plane arrays.Contents1.Introduction2.Historical perspective3.Classification of infrared detectors3.1.Photon detectors3.2.Thermal detectors4.Post−War activity5.HgCdTe era6.Alternative material systems6.1.InSb and InGaAs6.2.GaAs/AlGaAs quantum well superlattices6.3.InAs/GaInSb strained layer superlattices6.4.Hg−based alternatives to HgCdTe7.New revolution in thermal detectors8.Focal plane arrays – revolution in imaging systems8.1.Cooled FPAs8.2.Uncooled FPAs8.3.Readiness level of LWIR detector technologies9.SummaryReferences 1.IntroductionLooking back over the past1000years we notice that infra−red radiation(IR)itself was unknown until212years ago when Herschel’s experiment with thermometer and prism was first reported.Frederick William Herschel(1738–1822) was born in Hanover,Germany but emigrated to Britain at age19,where he became well known as both a musician and an astronomer.Herschel became most famous for the discovery of Uranus in1781(the first new planet found since antiquity)in addition to two of its major moons,Tita−nia and Oberon.He also discovered two moons of Saturn and infrared radiation.Herschel is also known for the twenty−four symphonies that he composed.W.Herschel made another milestone discovery–discov−ery of infrared light on February11th,1800.He studied the spectrum of sunlight with a prism[see Fig.1in Ref.1],mea−suring temperature of each colour.The detector 
consisted of liquid in a glass thermometer with a specially blackened bulb to absorb radiation.Herschel built a crude monochromator that used a thermometer as a detector,so that he could mea−sure the distribution of energy in sunlight and found that the highest temperature was just beyond the red,what we now call the infrared(‘below the red’,from the Latin‘infra’–be−OPTO−ELECTRONICS REVIEW20(3),279–308DOI: 10.2478/s11772−012−0037−7*e−mail: rogan@.pllow)–see Fig.1(b)[2].In April 1800he reported it to the Royal Society as dark heat (Ref.1,pp.288–290):Here the thermometer No.1rose 7degrees,in 10minu−tes,by an exposure to the full red coloured rays.I drew back the stand,till the centre of the ball of No.1was just at the vanishing of the red colour,so that half its ball was within,and half without,the visible rays of theAnd here the thermometerin 16minutes,degrees,when its centre was inch out of the raysof the sun.as had a rising of 9de−grees,and here the difference is almost too trifling to suppose,that latter situation of the thermometer was much beyond the maximum of the heating power;while,at the same time,the experiment sufficiently indi−cates,that the place inquired after need not be looked for at a greater distance.Making further experiments on what Herschel called the ‘calorific rays’that existed beyond the red part of the spec−trum,he found that they were reflected,refracted,absorbed and transmitted just like visible light [1,3,4].The early history of IR was reviewed about 50years ago in three well−known monographs [5–7].Many historical information can be also found in four papers published by Barr [3,4,8,9]and in more recently published monograph [10].Table 1summarises the historical development of infrared physics and technology [11,12].2.Historical perspectiveFor thirty years following Herschel’s discovery,very little progress was made beyond establishing that the infrared ra−diation obeyed the simplest laws of optics.Slow progress inthe study of infrared was caused by the lack of sensitive and accurate detectors –the experimenters were handicapped by the ordinary thermometer.However,towards the second de−cade of the 19th century,Thomas Johann Seebeck began to examine the junction behaviour of electrically conductive materials.In 1821he discovered that a small electric current will flow in a closed circuit of two dissimilar metallic con−ductors,when their junctions are kept at different tempera−tures [13].During that time,most physicists thought that ra−diant heat and light were different phenomena,and the dis−covery of Seebeck indirectly contributed to a revival of the debate on the nature of heat.Due to small output vol−tage of Seebeck’s junctions,some μV/K,the measurement of very small temperature differences were prevented.In 1829L.Nobili made the first thermocouple and improved electrical thermometer based on the thermoelectric effect discovered by Seebeck in 1826.Four years later,M.Melloni introduced the idea of connecting several bismuth−copper thermocouples in series,generating a higher and,therefore,measurable output voltage.It was at least 40times more sensitive than the best thermometer available and could de−tect the heat from a person at a distance of 30ft [8].The out−put voltage of such a thermopile structure linearly increases with the number of connected thermocouples.An example of thermopile’s prototype invented by Nobili is shown in Fig.2(a).It consists of twelve large bismuth and antimony elements.The elements were placed upright in a brass ring secured to an 
adjustable support,and were screened by a wooden disk with a 15−mm central aperture.Incomplete version of the Nobili−Melloni thermopile originally fitted with the brass cone−shaped tubes to collect ra−diant heat is shown in Fig.2(b).This instrument was much more sensi−tive than the thermometers previously used and became the most widely used detector of IR radiation for the next half century.The third member of the trio,Langley’s bolometer appea−red in 1880[7].Samuel Pierpont Langley (1834–1906)used two thin ribbons of platinum foil connected so as to form two arms of a Wheatstone bridge (see Fig.3)[15].This instrument enabled him to study solar irradiance far into its infrared region and to measure theintensityof solar radia−tion at various wavelengths [9,16,17].The bolometer’s sen−History of infrared detectorsFig.1.Herschel’s first experiment:A,B –the small stand,1,2,3–the thermometers upon it,C,D –the prism at the window,E –the spec−trum thrown upon the table,so as to bring the last quarter of an inch of the read colour upon the stand (after Ref.1).InsideSir FrederickWilliam Herschel (1738–1822)measures infrared light from the sun– artist’s impression (after Ref. 2).Fig.2.The Nobili−Meloni thermopiles:(a)thermopile’s prototype invented by Nobili (ca.1829),(b)incomplete version of the Nobili−−Melloni thermopile (ca.1831).Museo Galileo –Institute and Museum of the History of Science,Piazza dei Giudici 1,50122Florence, Italy (after Ref. 14).Table 1. Milestones in the development of infrared physics and technology (up−dated after Refs. 11 and 12)Year Event1800Discovery of the existence of thermal radiation in the invisible beyond the red by W. HERSCHEL1821Discovery of the thermoelectric effects using an antimony−copper pair by T.J. SEEBECK1830Thermal element for thermal radiation measurement by L. NOBILI1833Thermopile consisting of 10 in−line Sb−Bi thermal pairs by L. NOBILI and M. MELLONI1834Discovery of the PELTIER effect on a current−fed pair of two different conductors by J.C. PELTIER1835Formulation of the hypothesis that light and electromagnetic radiation are of the same nature by A.M. AMPERE1839Solar absorption spectrum of the atmosphere and the role of water vapour by M. MELLONI1840Discovery of the three atmospheric windows by J. HERSCHEL (son of W. HERSCHEL)1857Harmonization of the three thermoelectric effects (SEEBECK, PELTIER, THOMSON) by W. THOMSON (Lord KELVIN)1859Relationship between absorption and emission by G. KIRCHHOFF1864Theory of electromagnetic radiation by J.C. MAXWELL1873Discovery of photoconductive effect in selenium by W. SMITH1876Discovery of photovoltaic effect in selenium (photopiles) by W.G. ADAMS and A.E. DAY1879Empirical relationship between radiation intensity and temperature of a blackbody by J. STEFAN1880Study of absorption characteristics of the atmosphere through a Pt bolometer resistance by S.P. LANGLEY1883Study of transmission characteristics of IR−transparent materials by M. MELLONI1884Thermodynamic derivation of the STEFAN law by L. BOLTZMANN1887Observation of photoelectric effect in the ultraviolet by H. HERTZ1890J. ELSTER and H. GEITEL constructed a photoemissive detector consisted of an alkali−metal cathode1894, 1900Derivation of the wavelength relation of blackbody radiation by J.W. RAYEIGH and W. WIEN1900Discovery of quantum properties of light by M. PLANCK1903Temperature measurements of stars and planets using IR radiometry and spectrometry by W.W. COBLENTZ1905 A. EINSTEIN established the theory of photoelectricity1911R. 
ROSLING made the first television image tube on the principle of cathode ray tubes constructed by F. Braun in 18971914Application of bolometers for the remote exploration of people and aircrafts ( a man at 200 m and a plane at 1000 m)1917T.W. CASE developed the first infrared photoconductor from substance composed of thallium and sulphur1923W. SCHOTTKY established the theory of dry rectifiers1925V.K. ZWORYKIN made a television image tube (kinescope) then between 1925 and 1933, the first electronic camera with the aid of converter tube (iconoscope)1928Proposal of the idea of the electro−optical converter (including the multistage one) by G. HOLST, J.H. DE BOER, M.C. TEVES, and C.F. VEENEMANS1929L.R. KOHLER made a converter tube with a photocathode (Ag/O/Cs) sensitive in the near infrared1930IR direction finders based on PbS quantum detectors in the wavelength range 1.5–3.0 μm for military applications (GUDDEN, GÖRLICH and KUTSCHER), increased range in World War II to 30 km for ships and 7 km for tanks (3–5 μm)1934First IR image converter1939Development of the first IR display unit in the United States (Sniperscope, Snooperscope)1941R.S. OHL observed the photovoltaic effect shown by a p−n junction in a silicon1942G. EASTMAN (Kodak) offered the first film sensitive to the infrared1947Pneumatically acting, high−detectivity radiation detector by M.J.E. GOLAY1954First imaging cameras based on thermopiles (exposure time of 20 min per image) and on bolometers (4 min)1955Mass production start of IR seeker heads for IR guided rockets in the US (PbS and PbTe detectors, later InSb detectors for Sidewinder rockets)1957Discovery of HgCdTe ternary alloy as infrared detector material by W.D. LAWSON, S. NELSON, and A.S. YOUNG1961Discovery of extrinsic Ge:Hg and its application (linear array) in the first LWIR FLIR systems1965Mass production start of IR cameras for civil applications in Sweden (single−element sensors with optomechanical scanner: AGA Thermografiesystem 660)1970Discovery of charge−couple device (CCD) by W.S. BOYLE and G.E. SMITH1970Production start of IR sensor arrays (monolithic Si−arrays: R.A. SOREF 1968; IR−CCD: 1970; SCHOTTKY diode arrays: F.D.SHEPHERD and A.C. YANG 1973; IR−CMOS: 1980; SPRITE: T. ELIOTT 1981)1975Lunch of national programmes for making spatially high resolution observation systems in the infrared from multielement detectors integrated in a mini cooler (so−called first generation systems): common module (CM) in the United States, thermal imaging commonmodule (TICM) in Great Britain, syteme modulaire termique (SMT) in France1975First In bump hybrid infrared focal plane array1977Discovery of the broken−gap type−II InAs/GaSb superlattices by G.A. SAI−HALASZ, R. TSU, and L. ESAKI1980Development and production of second generation systems [cameras fitted with hybrid HgCdTe(InSb)/Si(readout) FPAs].First demonstration of two−colour back−to−back SWIR GaInAsP detector by J.C. CAMPBELL, A.G. DENTAI, T.P. LEE,and C.A. BURRUS1985Development and mass production of cameras fitted with Schottky diode FPAs (platinum silicide)1990Development and production of quantum well infrared photoconductor (QWIP) hybrid second generation systems1995Production start of IR cameras with uncooled FPAs (focal plane arrays; microbolometer−based and pyroelectric)2000Development and production of third generation infrared systemssitivity was much greater than that of contemporary thermo−piles which were little improved since their use by Melloni. 
Langley continued to develop his bolometer for the next20 years(400times more sensitive than his first efforts).His latest bolometer could detect the heat from a cow at a dis−tance of quarter of mile [9].From the above information results that at the beginning the development of the IR detectors was connected with ther−mal detectors.The first photon effect,photoconductive ef−fect,was discovered by Smith in1873when he experimented with selenium as an insulator for submarine cables[18].This discovery provided a fertile field of investigation for several decades,though most of the efforts were of doubtful quality. By1927,over1500articles and100patents were listed on photosensitive selenium[19].It should be mentioned that the literature of the early1900’s shows increasing interest in the application of infrared as solution to numerous problems[7].A special contribution of William Coblenz(1873–1962)to infrared radiometry and spectroscopy is marked by huge bib−liography containing hundreds of scientific publications, talks,and abstracts to his credit[20,21].In1915,W.Cob−lentz at the US National Bureau of Standards develops ther−mopile detectors,which he uses to measure the infrared radi−ation from110stars.However,the low sensitivity of early in−frared instruments prevented the detection of other near−IR sources.Work in infrared astronomy remained at a low level until breakthroughs in the development of new,sensitive infrared detectors were achieved in the late1950’s.The principle of photoemission was first demonstrated in1887when Hertz discovered that negatively charged par−ticles were emitted from a conductor if it was irradiated with ultraviolet[22].Further studies revealed that this effect could be produced with visible radiation using an alkali metal electrode [23].Rectifying properties of semiconductor−metal contact were discovered by Ferdinand Braun in1874[24],when he probed a naturally−occurring lead sulphide(galena)crystal with the point of a thin metal wire and noted that current flowed freely in one direction only.Next,Jagadis Chandra Bose demonstrated the use of galena−metal point contact to detect millimetre electromagnetic waves.In1901he filed a U.S patent for a point−contact semiconductor rectifier for detecting radio signals[25].This type of contact called cat’s whisker detector(sometimes also as crystal detector)played serious role in the initial phase of radio development.How−ever,this contact was not used in a radiation detector for the next several decades.Although crystal rectifiers allowed to fabricate simple radio sets,however,by the mid−1920s the predictable performance of vacuum−tubes replaced them in most radio applications.The period between World Wars I and II is marked by the development of photon detectors and image converters and by emergence of infrared spectroscopy as one of the key analytical techniques available to chemists.The image con−verter,developed on the eve of World War II,was of tre−mendous interest to the military because it enabled man to see in the dark.The first IR photoconductor was developed by Theodore W.Case in1917[26].He discovered that a substance com−posed of thallium and sulphur(Tl2S)exhibited photocon−ductivity.Supported by the US Army between1917and 1918,Case adapted these relatively unreliable detectors for use as sensors in an infrared signalling device[27].The pro−totype signalling system,consisting of a60−inch diameter searchlight as the source of radiation and a thallous sulphide detector at the focus of a24−inch diameter paraboloid 
mir−ror,sent messages18miles through what was described as ‘smoky atmosphere’in1917.However,instability of resis−tance in the presence of light or polarizing voltage,loss of responsivity due to over−exposure to light,high noise,slug−gish response and lack of reproducibility seemed to be inhe−rent weaknesses.Work was discontinued in1918;commu−nication by the detection of infrared radiation appeared dis−tinctly ter Case found that the addition of oxygen greatly enhanced the response [28].The idea of the electro−optical converter,including the multistage one,was proposed by Holst et al.in1928[29]. The first attempt to make the converter was not successful.A working tube consisted of a photocathode in close proxi−mity to a fluorescent screen was made by the authors in 1934 in Philips firm.In about1930,the appearance of the Cs−O−Ag photo−tube,with stable characteristics,to great extent discouraged further development of photoconductive cells until about 1940.The Cs−O−Ag photocathode(also called S−1)elabo−History of infrared detectorsFig.3.Longley’s bolometer(a)composed of two sets of thin plati−num strips(b),a Wheatstone bridge,a battery,and a galvanometer measuring electrical current (after Ref. 15 and 16).rated by Koller and Campbell[30]had a quantum efficiency two orders of magnitude above anything previously studied, and consequently a new era in photoemissive devices was inaugurated[31].In the same year,the Japanese scientists S. Asao and M.Suzuki reported a method for enhancing the sensitivity of silver in the S−1photocathode[32].Consisted of a layer of caesium on oxidized silver,S−1is sensitive with useful response in the near infrared,out to approxi−mately1.2μm,and the visible and ultraviolet region,down to0.3μm.Probably the most significant IR development in the United States during1930’s was the Radio Corporation of America(RCA)IR image tube.During World War II, near−IR(NIR)cathodes were coupled to visible phosphors to provide a NIR image converter.With the establishment of the National Defence Research Committee,the develop−ment of this tube was accelerated.In1942,the tube went into production as the RCA1P25image converter(see Fig.4).This was one of the tubes used during World War II as a part of the”Snooperscope”and”Sniperscope,”which were used for night observation with infrared sources of illumination.Since then various photocathodes have been developed including bialkali photocathodes for the visible region,multialkali photocathodes with high sensitivity ex−tending to the infrared region and alkali halide photocatho−des intended for ultraviolet detection.The early concepts of image intensification were not basically different from those today.However,the early devices suffered from two major deficiencies:poor photo−cathodes and poor ter development of both cathode and coupling technologies changed the image in−tensifier into much more useful device.The concept of image intensification by cascading stages was suggested independently by number of workers.In Great Britain,the work was directed toward proximity focused tubes,while in the United State and in Germany–to electrostatically focused tubes.A history of night vision imaging devices is given by Biberman and Sendall in monograph Electro−Opti−cal Imaging:System Performance and Modelling,SPIE Press,2000[10].The Biberman’s monograph describes the basic trends of infrared optoelectronics development in the USA,Great Britain,France,and Germany.Seven years later Ponomarenko and Filachev completed this monograph writ−ing the book 
Infrared Techniques and Electro−Optics in Russia:A History1946−2006,SPIE Press,about achieve−ments of IR techniques and electrooptics in the former USSR and Russia [33].In the early1930’s,interest in improved detectors began in Germany[27,34,35].In1933,Edgar W.Kutzscher at the University of Berlin,discovered that lead sulphide(from natural galena found in Sardinia)was photoconductive and had response to about3μm.B.Gudden at the University of Prague used evaporation techniques to develop sensitive PbS films.Work directed by Kutzscher,initially at the Uni−versity of Berlin and later at the Electroacustic Company in Kiel,dealt primarily with the chemical deposition approach to film formation.This work ultimately lead to the fabrica−tion of the most sensitive German detectors.These works were,of course,done under great secrecy and the results were not generally known until after1945.Lead sulphide photoconductors were brought to the manufacturing stage of development in Germany in about1943.Lead sulphide was the first practical infrared detector deployed in a variety of applications during the war.The most notable was the Kiel IV,an airborne IR system that had excellent range and which was produced at Carl Zeiss in Jena under the direction of Werner K. Weihe [6].In1941,Robert J.Cashman improved the technology of thallous sulphide detectors,which led to successful produc−tion[36,37].Cashman,after success with thallous sulphide detectors,concentrated his efforts on lead sulphide detec−tors,which were first produced in the United States at Northwestern University in1944.After World War II Cash−man found that other semiconductors of the lead salt family (PbSe and PbTe)showed promise as infrared detectors[38]. The early detector cells manufactured by Cashman are shown in Fig. 5.Fig.4.The original1P25image converter tube developed by the RCA(a).This device measures115×38mm overall and has7pins.It opera−tion is indicated by the schematic drawing (b).After1945,the wide−ranging German trajectory of research was essentially the direction continued in the USA, Great Britain and Soviet Union under military sponsorship after the war[27,39].Kutzscher’s facilities were captured by the Russians,thus providing the basis for early Soviet detector development.From1946,detector technology was rapidly disseminated to firms such as Mullard Ltd.in Southampton,UK,as part of war reparations,and some−times was accompanied by the valuable tacit knowledge of technical experts.E.W.Kutzscher,for example,was flown to Britain from Kiel after the war,and subsequently had an important influence on American developments when he joined Lockheed Aircraft Co.in Burbank,California as a research scientist.Although the fabrication methods developed for lead salt photoconductors was usually not completely under−stood,their properties are well established and reproducibi−lity could only be achieved after following well−tried reci−pes.Unlike most other semiconductor IR detectors,lead salt photoconductive materials are used in the form of polycrys−talline films approximately1μm thick and with individual crystallites ranging in size from approximately0.1–1.0μm. They are usually prepared by chemical deposition using empirical recipes,which generally yields better uniformity of response and more stable results than the evaporative methods.In order to obtain high−performance detectors, lead chalcogenide films need to be sensitized by oxidation. 
The oxidation may be carried out by using additives in the deposition bath,by post−deposition heat treatment in the presence of oxygen,or by chemical oxidation of the film. The effect of the oxidant is to introduce sensitizing centres and additional states into the bandgap and thereby increase the lifetime of the photoexcited holes in the p−type material.3.Classification of infrared detectorsObserving a history of the development of the IR detector technology after World War II,many materials have been investigated.A simple theorem,after Norton[40],can be stated:”All physical phenomena in the range of about0.1–1 eV will be proposed for IR detectors”.Among these effects are:thermoelectric power(thermocouples),change in elec−trical conductivity(bolometers),gas expansion(Golay cell), pyroelectricity(pyroelectric detectors),photon drag,Jose−phson effect(Josephson junctions,SQUIDs),internal emis−sion(PtSi Schottky barriers),fundamental absorption(in−trinsic photodetectors),impurity absorption(extrinsic pho−todetectors),low dimensional solids[superlattice(SL), quantum well(QW)and quantum dot(QD)detectors], different type of phase transitions, etc.Figure6gives approximate dates of significant develop−ment efforts for the materials mentioned.The years during World War II saw the origins of modern IR detector tech−nology.Recent success in applying infrared technology to remote sensing problems has been made possible by the successful development of high−performance infrared de−tectors over the last six decades.Photon IR technology com−bined with semiconductor material science,photolithogra−phy technology developed for integrated circuits,and the impetus of Cold War military preparedness have propelled extraordinary advances in IR capabilities within a short time period during the last century [41].The majority of optical detectors can be classified in two broad categories:photon detectors(also called quantum detectors) and thermal detectors.3.1.Photon detectorsIn photon detectors the radiation is absorbed within the material by interaction with electrons either bound to lattice atoms or to impurity atoms or with free electrons.The observed electrical output signal results from the changed electronic energy distribution.The photon detectors show a selective wavelength dependence of response per unit incident radiation power(see Fig.8).They exhibit both a good signal−to−noise performance and a very fast res−ponse.But to achieve this,the photon IR detectors require cryogenic cooling.This is necessary to prevent the thermalHistory of infrared detectorsFig.5.Cashman’s detector cells:(a)Tl2S cell(ca.1943):a grid of two intermeshing comb−line sets of conducting paths were first pro−vided and next the T2S was evaporated over the grid structure;(b) PbS cell(ca.1945)the PbS layer was evaporated on the wall of the tube on which electrical leads had been drawn with aquadag(afterRef. 38).。
Definition of events in the field of computer vision (bilingual Chinese-English version)

English document: The field of computer vision is concerned with enabling computers to interpret and understand visual information from the world around us. It encompasses a wide range of technologies and techniques, including image recognition, image processing, and video analysis. One aspect of computer vision that is gaining increasing attention is the ability to detect and recognize events from visual data.

Event detection in computer vision involves the identification of specific occurrences or changes in a scene that are relevant or interesting. This could include recognizing actions such as walking, running, or jumping, or identifying events like a person entering or leaving an area. Event detection can be used in a variety of applications, including surveillance, robotics, and autonomous vehicles.

In order to detect events in computer vision, researchers use various methods, including machine learning and deep learning algorithms. These algorithms are trained on large datasets of labeled images or videos to learn to recognize specific events. Once trained, these algorithms can be used to automatically detect events in new, unseen data.

Challenges in event detection include the variability of events and the complexity of real-world scenes. Events can occur under a wide variety of conditions, and the same event can look different depending on the lighting, viewpoint, or other factors. Additionally, scenes can be cluttered and contain multiple objects and events that need to be distinguished. Despite these challenges, research in event detection in computer vision is making progress and has the potential to enable a wide range of useful applications.

Chinese document (translated): The field of computer vision is concerned with enabling computers to interpret and understand the visual information of the world around us.
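As a toy illustration of the "train a classifier, then detect events in unseen data" idea described above, the sketch below flags event spans wherever the mean per-frame classifier score over a sliding window exceeds a threshold. It is a hypothetical sketch, not code from any specific library: the per-frame score array, window length and threshold are all assumptions.

```python
# Hypothetical sketch: flag event spans from per-frame classifier scores by
# sliding a fixed-size window and thresholding its mean score.
import numpy as np

def detect_event_spans(scores, window=16, threshold=0.8):
    """scores: per-frame probabilities for one event class (e.g. 'person entering').
    Returns (start_frame, end_frame) spans whose windowed mean exceeds the threshold."""
    scores = np.asarray(scores, dtype=float)
    hits = [i for i in range(len(scores) - window + 1)
            if scores[i:i + window].mean() >= threshold]
    spans = []
    for i in hits:
        if spans and i <= spans[-1][1]:          # window overlaps the previous span
            spans[-1] = (spans[-1][0], i + window - 1)
        else:
            spans.append((i, i + window - 1))
    return spans
```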
The Detection of Orientation Continuity and Discontinuity by Cat V1NeuronsTao Xu1,Ling Wang1,Xue-Mei Song2*,Chao-Yi Li1,2*1Key Laboratory for Neuroinformation of Ministry of Education,University of Electronic Science and Technology of China,Chengdu,China,2Shanghai Institutes of Biological Sciences,Chinese Academy of Sciences,Shanghai,ChinaAbstractThe orientation tuning properties of the non-classical receptive field(nCRF or‘‘surround’’)relative to that of the classical receptive field(CRF or‘‘center’’)were tested for119neurons in the cat primary visual cortex(V1).The stimuli were concentric sinusoidal gratings generated on a computer screen with the center grating presented at an optimal orientation to stimulate the CRF and the surround grating with variable orientations stimulating the nCRF.Based on the presence or absence of surround suppression,measured by the suppression index at the optimal orientation of the cells,we subdivided the neurons into two categories:surround-suppressive(SS)cells and surround-non-suppressive(SN)cells.When stimulated with an optimally oriented grating centered at CRF,the SS cells showed increasing surround suppression when the stimulus grating was expanded beyond the boundary of the CRF,whereas for the SN cells,expanding the stimulus grating beyond the CRF caused no suppression of the center response.For the SS cells,strength of surround suppression was dependent on the relative orientation between CRF and nCRF:an iso-orientation grating over center and surround at the optimal orientation evoked strongest suppression and a surround grating orthogonal to the optimal center grating evoked the weakest or no suppression.By contrast,the SN cells showed slightly increased responses to an iso-orientation stimulus and weak suppression to orthogonal surround gratings.This iso-/orthogonal orientation selectivity between center and surround was analyzed in22SN and97SS cells,and for the two types of cells,the different center-surround orientation selectivity was dependent on the suppressive strength of the cells.We conclude that SN cells are suitable to detect orientation continuity or similarity between CRF and nCRF,whereas the SS cells are adapted to the detection of discontinuity or differences in orientation between CRF and nCRF.Cita tion:Xudoi:10.1371/journal.pone.0079723Editor:Luis M.Martinez,CSIC-Univ Miguel Hernandez,SpainReceived February25,2013;Accepted October4,2013;Published November21,2013Copyright:ß2013Xu et al.This is an open-access article distributed under the terms of the Creative Commons Attribution License,which permits unrestricted use,distribution,and reproduction in any medium,provided the original author and source are credited.Funding:This work was supported by the Major State Basic Research Program(2013CB329401),the Natural Science Foundations of China(90820301,31100797 and31000492),and Shanghai Municipal Committee of Science and Technology(088014158,098014026).The funders had no role in the study design,data collection and analysis,decision to publish,or preparation of the manuscript.Competing Interests:The authors have declared that no competing interests exist.*E-mail:xmsong@(XMS);cyli@(CYL)IntroductionThe responses of V1neurons to visual stimuli presented in the classical receptive field(CRF or center)can be facilitated or suppressed by stimuli in the non-classical-receptive field(nCRF or surround)[1–14].The center responses of most V1cells are suppressed by surround stimuli[14–16]with maximal sup-pressive modulation occurring when the 
center and surround are stimulated at the identical orientation,drifting direction and spatiotemporal frequency[7,9,11,17,18].Suppression is weaker if the surrounding orientation deviates from the optimal central grating,particularly when the orientation in the surround is90u from the preferred orientation(orthogonal) [15,19].This effect may be linked to figure/ground segregation [7,10,20].For some cells,facilitatory effects have also been observed when the surround stimuli match the central optimal orientation[21–23]and are eliminated by orthogonal surround stimuli[21,23].Previous studies[11,19]of the primary visual cortex of anaesthetized primates revealed that surround orientation tuning depends on the contrast of the central bining intrinsic signal optical imaging and single-unit recording in the V1of anesthetized cats,a recent study[24]reported that surround orientation selectivity might also depended on the location of the neurons in the optical orientation map of V1.In the experiments presented here,we recorded single-unit activities from the V1of anesthetized cats and determined the CRF size and suppression index(SI)of the neurons using a size-tuning test.In the experiment,a high-contrast grating,which was drifting at the optimal orientation/direction of the neuron, was varied in size,and the SI of the neuron was measured from the size-tuning curve.We classified the recorded neurons into two categories according to their SI,the‘‘surround-non-suppressive’’(SN,0ƒSIƒ0.1)and the‘‘surround-suppressive’’(SS,0.1S SIƒ1).We then performed surround orientation tuning tests.In these tests,the nCRF was stimulated by a larger circular grating (20u|20u)that drifted randomly at different orientations/direc-tions while a smaller optimally oriented grating was drifted within the CRF.We observed that the surround suppression was stronger at an iso-compared to ortho-orientation for SS neurons and vice versa for SN cells.These results may imply that SN neurons detect continuity of a preferred orientation,and SS neurons detect discontinuity of a preferred orientation.T, Wang L,Song X-M,Li C-Y(2013)The Detection of Orientation Continuity and Discontinuity by Cat V1Neurons.PLoS ONE8(11):e79723.Materials and MethodsEthics statementThe experimental data were obtained from anesthetized and paralyzed adult cats(weighing 2.0,4.0kg each)bred in the facilities of the Institute of Neuroscience,Chinese Academy of Sciences(Permit Number:SYXK2009-0066).This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals from the National Institute of Health.The protocol was specifically approved by the Committee on the Ethics of Animal Experiments of the Shanghai Institute for Biological Sciences,Chinese Academy of Sciences(Permit Number:ER-SIBS-621001C).All surgeries were performed under general anesthesia combined with the local application of Lidocaine(for details see‘‘animal preparation’’),and all efforts were made to minimize suffering. Animal preparationAcute experiments were performed on10cats.Detailed descriptions of the procedures for animal surgery,anesthesia, and recording techniques can be observed in a previous study[25]. 
Briefly,the cats were anesthetized prior to surgery with ketamine hydrochloride(30mg/kg,intravenously[iv]),and then tracheal and venous canulations were performed.After surgery,the animal was placed in a stereotaxic frame for performing a craniotomy and neurophysiological procedures.During recording,anesthesia and paralysis were maintained with urethane(20mg/kg/h)and gallamine triethiodide(10mg/kg/h),respectively,and glucose (200mg/kg/h)in Ringer’s solution(3mL/kg/h)was infused. Heart rate,electrocardiography,electroencephalography(EEG), end-expiratory CO2,and rectal temperature were monitored continuously.Anesthesia was considered sufficient when the EEG indicated a permanent sleep-like state.Reflexes,including cornea, eyelid,and withdrawal reflexes,were tested at appropriate intervals.Additional urethane was given immediately if necessary. The nictitating membranes were retracted,and the pupils were dilated.Artificial pupils of3mm diameter were used.Contact lenses and additional corrective lenses were applied to focus the retina on a screen during stimulus presentation.At the end of the experiment,the animal was sacrificed by an overdose of barbiturate administered i.v.Single-unit recordingThe recordings were performed with a multi-electrode tungsten array consisting of2|8channels with250m m between neigh-boring channels from MicroProbes.The impedance of the electrodes was5M V on average(specified by the arrays’manufacture).The signals were recorded using the Cerebus system.Spike signals were band-pass filtered at250–7500Hz and sampled at30kHz.Only well-isolated cells satisfying the strict criteria for single-unit recordings(fixed shape of the action potential and the absence of spikes during the absolute refractory period)were collected for further analyses.We mapped and optimized the stimuli for each individual neuron independently and ignored the data simultaneously collected from other neurons. 
Recordings were made mainly from layers2/3and4.Visual stimulationThe visual stimuli were generated by a Cambridge Systems VSG graphics board.The stimuli were patches of drifting sinusoidal gratings presented on a high-resolution monitor screen (40|30cm)at a100Hz vertical refresh rate.The screen was maintained at the identical mean luminance as the stimulus patches(10cd/m2).The monitor was placed57cm from the cat’s eyes.All cells recorded were obtained from the area of the cortex that represented the central10u of the visual field.When the single-cell action potentials were isolated,the preferred orienta-tion,drifting direction,spatial frequency and temporal frequency of each cell were determined.Each cell was stimulated monoc-ularly through the dominant eye with the non-dominant eye occluded.To locate the center of the CRF,a narrow rectangular sine-wave grating patch(0.5u–1.0u wide|3.0u–5.0u long at a40% contrast)was moved at successive positions along axes perpendic-ular or parallel to the optimal orientation of the cell,and the response to its drift was measured.The grating was set at the optimal orientation and spatial frequency and drifted in the preferred direction at the optimal speed for the recorded cells.The peak of the response profiles for both axes was defined as the center of the CRF.We further confirmed that the stimulus was positioned in the center of the receptive field by performing an occlusion test,in which a mask consisting of a circular blank patch and concentric with the CRF,was gradually increased in size on a background drifting grating[15,25,26].If the center of the CRF was accurately determined,the mask curve should begin at peak,and the response decreased as more of the receptive field was masked.If the curve obtained with the mask did not begin at the peak value, we considered the stimulus to be offset in relation to the receptive field center,and the position of the receptive field was reassessed. In the size-tuning tests,the circular sinusoidal gratings(40% contrast)were centered over the receptive field center and randomly presented with different diameters(from0.1to20u). 
The optimized values for these parameters (orientation, spatial and temporal frequency) were used in these tests. Each grating size was presented for 5-10 cycles of the grating drift, and standard errors were calculated for 3-10 repeats. We defined the CRF size as the aperture size of the peak response diameter (the stimulus diameter at which the response was maximal if the responses decreased at larger stimulus diameters, or reached 95% of the peak value if they did not) [27,28]. Contrast in the subsequent experiments was selected to elicit responses that reached approximately 90% of the saturation response for each cell with the center (CRF) contrast response function. Our contrast had a range of 20-70% and a mode of 40%.

To measure the orientation tuning of the cell's surround, the optimal orientation, aperture, and spatiotemporal frequencies for the center stimulus remained constant. Directly abutting the outer circumference of the center stimulus was a surround grating (with an outer diameter of 20°) of the identical phase and spatial and temporal frequencies. Whereas the center stimulus was maintained at the preferred orientation/direction throughout the experiment, the surround stimulus was shown with variable orientations/directions (in 15° increments). Both the center and surround stimuli were shown at a high contrast (with a range of 20-70% and a mode of 40%). The responses to each patch were recorded for 5-10 cycles of the grating drift, and standard errors were calculated for 3-10 repeats.

The cells were classified as "simple" if the first harmonic (F1) of their response to the sine-wave gratings was greater than the mean firing rate (F0) of the response (F1/F0 ratio > 1) or "complex" if the F1/F0 ratio was < 1 [29].

Data analysis

The size-tuning curves for all recorded cells were fit using a DOG model [9]. In this model, the narrower positive Gaussian represented the excitatory center (the CRF), whereas the broader negative Gaussian represented the suppressive surround (the nCRF). The two Gaussians were considered to be concentrically overlapping, and the summation profile could be represented as the difference of the two Gaussian integrals. The model was defined by the following equation:

R(s) = R_0 + K_e \int_{-s/2}^{s/2} e^{-(2y/a)^2} \, dy - K_i \int_{-s/2}^{s/2} e^{-(2y/b)^2} \, dy

where R_0 is the spontaneous firing rate, and each integral represents the relative contribution from putative excitatory and inhibitory components, respectively. The excitatory Gaussian is described by its gain, K_e, and a space constant, a, and the inhibitory Gaussian by its gain, K_i, and space constant, b. All population values are provided below as the mean ± SEM.

Results

The data were obtained from 119 neurons in the striate cortex of 10 cats. Of these neurons, 46 were simple cells and 73 were complex cells. The results did not differ significantly between these two classes.

The determination of the extent of the classical receptive field and strength of surround suppression

The spatial extent of the classical receptive field (CRF) and the strength of surround suppression were determined using size-tuning tests (see Materials and Methods). Two patterns of behavior were distinguished based on the presence or absence of surround suppression. Fig. 1 shows the size-response curves of two representative cells. For some cells, the spike rate rose with increasing stimulus diameter and reached an asymptote (cell A).
For other cells, the responses rose to a peak and then decreased as stimulus diameter further increased (cell B). The peak response diameter (as in Fig. 1B) or the diameter of the saturation point (reaching 95% of the peak value, as in Fig. 1A) was defined as the CRF size. Surround suppression was present for many of the cells tested, although its strength varied substantially between cells. We quantified the degree of surround suppression for each cell using the SI (1 - asymptotic response/peak response). Across the population, suppression equal to or less than 10% was observed in 18.5% (22/119) of the cells tested, and these cells were assigned to the "surround-no-suppressive" category (SN). Cells that exhibited suppression greater than 10% were categorized as "surround-suppressive" (SS), which comprised 81.5% (97/119) of the population.

The relationship between surround orientation selectivity and the degree of surround suppression

We initially determined the contrast-response function of the classical receptive field (CRF) for each individual neuron, and then a high contrast that elicited a response of 90% of the saturation response (with a range of 20-70% and a mode of 40%) was selected to determine the neuron's preferred orientation by stimulating the CRF alone. We then varied the orientation of the surround grating and held the CRF stimuli in the cell's preferred orientation/direction. Altogether, we presented 24 surround orientations/directions shifted in 15° increments.

Figure 2 shows four representative examples of the surround orientation tuning curves when the central stimulus was in the preferred orientation/direction. The cells in Fig. 2A and B were identical to those in Fig. 1A and B. The solid circles plotted represent the center-alone condition, and the open circles plotted represent the orientation tuning measured by the surround grating when the center grating was optimal. The abscissa indicated the stimulus orientation relative to the neuron's preferred orientation/direction; 0 on the abscissa indicated that the gratings were oriented in the optimal orientation and drifted in the optimal direction. The cell in Fig. 2A possessed no surround suppression (SI = 0) in the size-tuning test (see Fig. 1A). A slight facilitation was shown on the orientation tuning curve when the surround grating was drifting in the neuron's preferred orientation/direction (0 position on the x-axis), and a suppression drop occurred when the surround orientation was 45° away from the preferred central grating. The cell illustrated in Fig. 2B displayed significant surround suppression (SI = 0.65) in the size-tuning test (see Fig. 1B) and on the surround orientation tuning curve. The surround suppression was strongest in the cell's optimal orientation/direction and weak in the orthogonal orientations (±90° on the abscissa). The cell in Fig. 2C exhibited extremely strong suppression (SI = 0.98, size-tuning test) around the optimal orientation/direction (0 and ±180°) and facilitation in the orthogonal orientations/directions. Fig. 2D shows a cell (SI = 0.67, size-tuning test) with moderate-strength surround suppression at all surround orientations and a maximal suppression at -30°. We consistently observed that the surround was more selective toward ortho-orientations compared to iso-orientations in the strong suppression neurons than in the weak suppression neurons.
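As a concrete illustration of the analysis just described, the sketch below fits the DOG size-tuning model and computes the suppression index from a size-tuning curve. It is a minimal sketch under stated assumptions rather than the authors' analysis code: each Gaussian integral is written in its closed form via the error function, the initial parameter guesses and bounds are arbitrary, and the asymptotic response is approximated by the response at the largest diameter tested.

```python
# Minimal sketch, assuming NumPy/SciPy and a size-tuning data set of stimulus
# diameters (deg) and mean responses (spikes/s); not the authors' analysis code.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def dog_response(s, r0, ke, a, ki, b):
    """Difference of Gaussian integrals (DOG) size-tuning model.

    Each term is the closed form of K * integral_{-s/2}^{+s/2} exp(-(2y/c)^2) dy,
    which equals K * c * sqrt(pi)/2 * erf(s/c).
    """
    exc = ke * a * np.sqrt(np.pi) / 2.0 * erf(s / a)
    inh = ki * b * np.sqrt(np.pi) / 2.0 * erf(s / b)
    return r0 + exc - inh

def fit_size_tuning(diameters, responses):
    """Fit the DOG model; the initial guesses and bounds here are illustrative only."""
    p0 = [float(np.min(responses)), float(np.max(responses)), 2.0,
          float(np.max(responses)) / 2.0, 10.0]
    bounds = ([0.0, 0.0, 0.1, 0.0, 0.1], [np.inf] * 5)
    params, _ = curve_fit(dog_response, diameters, responses, p0=p0, bounds=bounds)
    return params  # r0, ke, a, ki, b

def suppression_index(diameters, responses):
    """SI = 1 - asymptotic response / peak response, using the response at the
    largest diameter tested as a simple proxy for the asymptote."""
    responses = np.asarray(responses, dtype=float)
    return 1.0 - responses[np.argmax(diameters)] / responses.max()

# A cell would then be classed as SN if SI <= 0.1 and as SS if SI > 0.1.
```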
To quantify this observation, we used an index for iso-/orthogonal orientation selectivity (OSI). OSI is a measure of the average responses from the two orthogonal surround orientations relative to the responses to the iso-oriented surround stimuli:

OSI = \left[ \frac{R_{\theta-90} + R_{\theta+90}}{2} - R_{\theta 0} \right] / R_{\mathrm{ref}}

where R_{\theta 0} represents the response magnitude (spikes per second) at iso-orientation, R_{\theta-90} and R_{\theta+90} represent the response magnitudes in the two orthogonal orientations, and R_{\mathrm{ref}} is the response to the center stimulus presented alone. OSI directly compared the suppression strength resulting from iso- and ortho-oriented surround stimuli. A value of 1 indicated that the surround had no suppressive effect in ortho-orientations and exerted complete suppression at iso-orientation, a value of 0 indicated that the suppression was equal in magnitude for both iso- and ortho-oriented surround stimuli, and negative values (OSI < 0) showed that the surround suppression was weaker in the iso- compared to ortho-orientation. The OSI values for the 4 cells in Fig. 2A, B, C and D were -0.54, 0.41, 0.98 and 0.25, respectively.

Figure 1. Size-tuning curves obtained from two representative cells. The response, in spikes per second (y-axis), is plotted against the diameter of the circular grating patch (x-axis). Solid lines indicate the best-fitting difference of integrals of the Gaussian functions. The arrows indicate the stimulus diameter at which the responses were maximal and/or became asymptotic. (A) A cell exhibiting a no-suppression surround (SI = 0). (B) A cell showing a suppressive surround (SI = 0.65). The error bars represent ±SEM. doi:10.1371/journal.pone.0079723.g001

The relationship between OSI and SI is shown in Fig. 3. The data revealed a significant positive correlation between the two variables (r = 0.59, P < 0.001). This result indicated that the stronger the surround suppression was, the greater the sensitivity to the ortho-orientation between the center and surround, whereas the weaker the surround suppression was, the greater the sensitivity to the iso-orientation between the center and surround.

Of the 119 neurons analyzed, there were 22 SN cells and 97 SS cells. The distribution of OSI within the two groups is shown in Fig. 4. The black columns indicate the proportion of cells with an OSI < 0, and the gray columns indicate the proportion of cells with an OSI > 0. Within the 22 SN neurons, there were 14 cells (64%) with an OSI < 0 and 8 cells (36%) with an OSI > 0. A comparison within the SS neuron group indicated that the proportion was 19% (18/97) with an OSI < 0 and 81% (79/97) with an OSI > 0.
14cells (64%)with an OSI S 0and 8(36%)cells with an OSI T paring within the SS neuron group,the proportion was 19%(18/97)with an OSI S 0and 81%(79/97)with an OSI T 0.doi:10.1371/journal.pone.0079723.g004Figure 5.The distribution of differences between the preferred orientation of center and minimally suppressive orientation of the surround.The solid circles represent data from the SN cells (n =11),and the open circles represent data from the SS cells (n =62).The positions of the circles on the x -axis indicate the difference between the center optimal orientation and surround minimally suppressed orientation,and the position on the y -axis indicate the strength of the surround effect on the center response.The horizontal line represents response amplitude to the center stimulus alone,the circles underneath the line indicate the amplitude (in %)of the suppressive effect,and those above the line indicate the amplitude (in %)of the facilitative effect when the surround stimuli was oriented in the minimally suppressive orientation.doi:10.1371/journal.pone.0079723.g005Figure 6.A population analysis of the orientation/direction-dependent surround effects of the SN and SS neurons.The surround response amplitude of each neuron was normalized relative to the center response.The horizontal line indicates the response to the preferred center stimulus alone.(A)The averaged surround tuning curve of 22SN cells shows that the surround stimuli have no significant effects at most orientations but display significant facilitation in the neuron’s preferred orientation/direction.(B)The averaged surround tuning curve of 97SS cells.The neurons were maximally suppressed by surround stimuli in the neuron’s preferred orientation/direction and were least suppressive in the orthogonal orientation/direction (+90u ).doi:10.1371/journal.pone.0079723.g006neurons respond to any combination of center-surround orienta-tions as long as the two orientations are not identical.Other studies[7,9,11,15,19,31–34]have reported that the response tended to be stronger when the orientations of the surround stimuli were orthogonal to the optimal orientation of the center. 
Levitt and Lund[11]reported that for some neurons,the sensitivity to the surround orthogonal orientation became weaker by using a lower contrast of the central stimulus.A recent study [24]reported that the surround orientation tuning could be related to the position of the cells in the optical orientation map,and the authors observed that orthogonal orientation selectivity was higher in the domain cells and lower in the pinwheel cells.In the present experiments,we recorded119neurons from the V1of cats using high-contrast stimuli.According to the presence or absence of a suppressive surround in the size-tuning tests,we divided these neurons into two categories:surround-suppressive (SS)and surround-non-suppressive(SN).Of the sample we recorded,82%(97/119)were SS cells with a significant suppressive surround,and18%(22/119)were SN cells with no suppression or slight facilitation over the surround.We analyzed the relative orientation selectivity between the center and surround using the iso/orthogonal orientation selectivity index(OSI),and observed that the OSI of the surround depended significantly on the suppressive strength(SI)of the neurons.The stronger the surround suppression was,the greater the sensitivity was to the orthogonal orientation between the center and surround,whereas the weaker the surround suppression was,the greater the sensitivity was to the iso-orientation between the center and surround.When both the center and surround were stimulated by the optimal orientation of the cells,the response of the SS cells were maximally suppressed,whereas the response of the SN cells were activated or facilitated.From these results,it can be concluded that SS cells are possibly suitable for the detection of differences or discontinuity in orientations(ortho-or oblique-orientation between the surround and optimal center),and conversely,SN cells are most likely adapted to detecting similarities or continuity of orientations(the identical preferred orientation and drift direction between the surround and center).The functionally different iso/orthogonal center-surround interactions in orientation selectivity might be based on distinct local cortical circuitry for the two types of neurons.Earlier studies reported that some pyramidal cells in the cat visual cortex exhibit extensively spreading horizontal axon collaterals that link neigh-boring regions over several millimeters[35–43].In our previous paper[44],we compared the morphological features of the suppressive and non-suppressive(or facilitative)neurons in the primary visual cortex of anesthetized cats(Fig.1and Fig.2in[44]). We observed that only the non-suppressive(or facilitative)neurons possessed such extensively spreading axon collaterals,whereas the strong-suppressive neurons had restricted local axons.The SN neurons most likely detect orientation continuity by using their long-distance horizontal connections,which provide connections be-tween neighboring cells with similar orientation preferences [37,38,41,45–47],whereas the strong suppression neurons detect orientation discrepancy using local suppressive connections[48]. 
Conclusions

Our results demonstrate that 1) the surround-non-suppressive (SN) neurons tended to detect center-surround orientation continuity, whereas the surround-suppressive (SS) neurons tended to select center-surround orientation discontinuity, and 2) the magnitude of the iso-/orthogonal orientation sensitivity of the surround depended significantly on the suppressive strength (SI) of the neurons.

Acknowledgments

We thank Dr. D. A. Tigwell for comments on the manuscript and X. Z. Xu for technical assistance. We also thank Dr. K. Chen for help with the data analysis.

Author Contributions

Conceived and designed the experiments: XMS CYL. Performed the experiments: TX XMS. Analyzed the data: TX XMS. Contributed reagents/materials/analysis tools: LW. Wrote the paper: XMS CYL.

References

1. Blakemore C, Tobin EA (1972) Lateral inhibition between orientation detectors in the cat's visual cortex. Exp Brain Res 15: 439-440.
2. Maffei L, Fiorentini A (1976) The unresponsive regions of visual cortical receptive fields. Vision Res 16: 1131-1139.
3. Rose D (1977) Responses of single units in cat visual cortex to moving bars of light as a function of bar length. J Physiol 271: 1-23.
4. Nelson JI, Frost BJ (1978) Orientation-selective inhibition from beyond the classic visual receptive field. Brain Res 139: 359-365.
5. Allman J, Miezin F, McGuinness E (1985) Stimulus specific responses from beyond the classical receptive field: neurophysiological mechanisms for local-global comparisons in visual neurons. Annu Rev Neurosci 8: 407-430.
6. Gilbert CD, Wiesel TN (1990) The influence of contextual stimuli on the orientation selectivity of cells in primary visual cortex of the cat. Vision Res 30: 1689-1701.
7. Knierim JJ, van Essen DC (1992) Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. J Neurophysiol 67: 961-980.
8. Li CY, Li W (1994) Extensive integration field beyond the classical receptive field of cat's striate cortical neurons: classification and tuning properties. Vision Res 34: 2337-2355.
9. DeAngelis GC, Freeman RD, Ohzawa I (1994) Length and width tuning of neurons in the cat's primary visual cortex. J Neurophysiol 71: 347-374.
10. Sillito AM, Grieve KL, Jones HE, Cudeiro J, Davis J (1995) Visual cortical mechanisms detecting focal orientation discontinuities. Nature 378: 492-496.
11. Levitt JB, Lund JS (1997) Contrast dependence of contextual effects in primate visual cortex. Nature 387: 73-76.
12. Lamme VA, Super H, Spekreijse H (1998) Feedforward, horizontal, and feedback processing in the visual cortex. Curr Opin Neurobiol 8: 529-535.
13. Nothdurft HC, Gallant JL, Van Essen DC (2000) Response profiles to texture border patterns in area V1. Vis Neurosci 17: 421-436.
14. Jones HE, Grieve KL, Wang W, Sillito AM (2001) Surround suppression in primate V1. J Neurophysiol 86: 2011-2028.
15. Sengpiel F, Sen A, Blakemore C (1997) Characteristics of surround inhibition in cat area 17. Exp Brain Res 116: 216-228.
16. Walker GA, Ohzawa I, Freeman RD (2000) Suppression outside the classical cortical receptive field. Vis Neurosci 17: 369-379.
17. Akasaki T, Sato H, Yoshimura Y, Ozeki H, Shimegi S (2002) Suppressive effects of receptive field surround on neuronal activity in the cat primary visual cortex. Neurosci Res 43: 207-220.
18. Walker GA, Ohzawa I, Freeman RD (1999) Asymmetric suppression outside the classical receptive field of the visual cortex. J Neurosci 19: 10536-10553.
19. Cavanaugh JR, Bair W, Movshon JA (2002b) Selectivity and spatial distribution of signals from the receptive field surround in macaque V1 neurons. J Neurophysiol 88: 2547-2556.
20. Series P, Lorenceau J, Fregnac Y (2003) The "silent" surround of V1 receptive fields: theory and experiments. J Physiol Paris 97: 453-474.
21. Kapadia MK, Ito M, Gilbert CD, Westheimer G (1995) Improvement in visual sensitivity by changes in local context: parallel studies in human observers and in V1 of alert monkeys. Neuron 15: 843-856.
22. Polat U, Mizobe K, Pettet MW, Kasamatsu T, Norcia AM (1998) Collinear stimuli regulate visual responses depending on cell's contrast threshold. Nature 391: 580-584.
23. Mizobe K, Polat U, Pettet MW, Kasamatsu T (2001) Facilitation and suppression of single striate-cell activity by spatially discrete pattern stimuli presented beyond the receptive field. Vis Neurosci 18: 377-391.
24. Hashemi-Nezhad M, Lyon DC (2012) Orientation tuning of the suppressive extraclassical surround depends on intrinsic organization of V1. Cereb Cortex 22: 308-326.
25. Song XM, Li CY (2008) Contrast-dependent and contrast-independent spatial summation of primary visual cortical neurons of the cat. Cereb Cortex 18: 331-336.
26. Levitt JB, Lund JS (2002) The spatial extent over which neurons in macaque striate cortex pool visual signals. Vis Neurosci 19: 439-452.