Face recognition paper appendix (original text and translation). The original text is from Thomas David Heseltine BSc. Hons., The University of York, Department of Computer Science, for the qualification of PhD, September 2005: "Face Recognition: Two-Dimensional and Three-Dimensional Techniques".

4 Two-dimensional Face Recognition

4.1 Feature Localization

Before discussing the methods of comparing two facial images we now take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review (section 3.1.1).

The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of the face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected, prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template-based method. A training set of manually pre-aligned images of faces is taken, and each image cropped to an area around both eyes. The average image is calculated and used as a template.

Figure 4-1 - The average eyes. Used as a template for eye detection.

Both eyes are included in a single template, rather than individually searching for each eye in turn, because the characteristic symmetry of the eyes either side of the nose provides a useful feature that helps distinguish between the eyes and other false positives that may be picked up in the background. However, this method is highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason is that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye-sockets, but the area of skin below the eyes helps to distinguish the eyes from eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference taken to that of the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using a smaller template of the individual left and right eyes then refines each eye position.

This basic template-based method of eye localisation, although providing fairly precise localisations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localisation is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localisations we compute the average distance from the eye template (Figure 4-2 top).
Note that the image is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eye, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 - Distance to the eye template for successful detections (top) indicating variance due to noise, and failed detections (bottom) showing credible variance due to mis-detected features.

In the lower image (Figure 4-2 bottom), we have taken the set of failed localisations (images of the forehead, nose, cheeks, background etc. falsely detected by the localisation routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasise the difference of the pupil regions for these failed matches and minimise the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.

4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio [29]), involving the direct comparison of pixel intensity values taken from facial images. We use the term 'Direct Correlation' to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not infer that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson's correlation, and it can be considered as a scale and translation sensitive form of image correlation), as this persists with the contrast made between image space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centres are located at two specified pixel coordinates and the image cropped to remove any background information. These images are stored as greyscale bitmaps of 65 by 82 pixels and, prior to recognition, converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each corresponding vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and, again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q, and gallery image g), we get an indication of similarity.
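As a concrete illustration of the distance calculation just described, the short Python sketch below (not from the thesis; NumPy and all names are assumptions made for illustration) flattens two aligned 65 by 82 greyscale bitmaps into 5330-element vectors and returns their Euclidean distance, the direct correlation score used in the decision rule that follows.

import numpy as np

def direct_correlation_distance(query_img, gallery_img):
    # Flatten each 65 x 82 greyscale bitmap into a 5330-element vector
    # and return the Euclidean distance between the two image vectors.
    # A lower distance indicates more similar faces.
    q = np.asarray(query_img, dtype=float).ravel()
    g = np.asarray(gallery_img, dtype=float).ravel()
    return float(np.linalg.norm(q - g))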
A threshold is then applied to make the final verification decision:

d = ||q - g||, accept if d <= threshold; reject if d > threshold.   (Equ. 4-1)

4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (i.e. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentations in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use the Fisher Linear Discriminant to analyse individual subspace components in section 7 and the identification test for the final evaluations described in section 8.

The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps as a swipe card, proximity chip or PIN number. This number is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image) and compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision.

The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large such that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition, which may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of 'difficult' images than would be expected in the perceived operating conditions, and hence higher error rates in the results produced. Below we provide the algorithm for executing the verification test.
The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm: CompareFaces(FaceA, FaceB). This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score, the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores.

Every image is compared with every other image, no image is compared with itself and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a similarity score, the ground-truth is used to determine if the images are of the same person or different people. In practical tests this information is often encapsulated as part of the image file name (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from the same people that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from different people that were classified as acceptances.

For IndexA = 0 to length(TestSet)
    For IndexB = IndexA+1 to length(TestSet)
        Score = CompareFaces(TestSet[IndexA], TestSet[IndexB])
        If IndexA and IndexB are the same person
            Append Score to AcceptScoresList
        Else
            Append Score to RejectScoresList

For Threshold = Minimum Score to Maximum Score:
    FalseAcceptCount, FalseRejectCount = 0
    For each Score in RejectScoresList
        If Score <= Threshold
            Increase FalseAcceptCount
    For each Score in AcceptScoresList
        If Score > Threshold
            Increase FalseRejectCount
    FalseAcceptRate = FalseAcceptCount / length(RejectScoresList)
    FalseRejectRate = FalseRejectCount / length(AcceptScoresList)
    Add plot to error curve at (FalseRejectRate, FalseAcceptRate)

These two error rates express the inadequacies of the system when operating at a specific threshold value. Ideally, both these figures should be zero, but in reality reducing either the FAR or FRR (by altering the threshold value) will inevitably result in increasing the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. The application of each threshold value produces an additional FAR, FRR pair, which when plotted on a graph produces the error rate curve shown below.

Figure 4-5 - Example Error Rate Curve produced by the verification test.

The equal error rate (EER) can be seen as the point at which FAR is equal to FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system and allows for easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real world application. It is unlikely that any real system would use a threshold value such that the percentage of false acceptances were equal to the percentage of false rejections.
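For concreteness, the following is a minimal Python rendering of the comparison loop in the pseudocode above, using the direct correlation distance of Equ. 4-1 as CompareFaces. The test-set encoding (a list of (person_id, image_vector) pairs) and all function and variable names are illustrative assumptions, not the thesis implementation.

import numpy as np

def compare_faces(face_a, face_b):
    # Direct correlation score: Euclidean distance between image vectors.
    return float(np.linalg.norm(np.asarray(face_a, dtype=float) - np.asarray(face_b, dtype=float)))

def collect_scores(test_set):
    # Compare every image with every other image exactly once and split the
    # scores by ground truth: same person -> accept list, different people -> reject list.
    accept_scores, reject_scores = [], []
    for index_a in range(len(test_set)):
        for index_b in range(index_a + 1, len(test_set)):
            person_a, image_a = test_set[index_a]
            person_b, image_b = test_set[index_b]
            score = compare_faces(image_a, image_b)
            if person_a == person_b:
                accept_scores.append(score)
            else:
                reject_scores.append(score)
    return accept_scores, reject_scores

Sweeping a threshold over these two lists, as in the pseudocode, then yields the FAR/FRR pairs plotted in Figure 4-5.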
Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections: unwilling to tolerate intruders at the cost of inconvenient access denials. Surveillance systems, on the other hand, would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.

There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves, also used in such experiments. Both graphs are simply two visualisations of the same results, in that the ROC format uses the True Acceptance Rate (TAR), where TAR = 1.0 - FRR, in place of the FRR, effectively flipping the graph vertically. Another visualisation of the verification test results is to display both the FRR and FAR as functions of the threshold value. This presentation format provides a reference to determine the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.

Figure 4-6 - Example error rate curve as a function of the score threshold.

The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations, hence a drop of 1% EER represents an additional 2588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to 0.28%.

4.2.2 Results

As a simple experiment to test the direct correlation method, we apply the technique described above to a test set of 720 images of 60 different people, taken from the AR Face Database [39]. Every image is compared with every other image in the test set to produce a likeness score, providing 258,840 verification operations from which to calculate false acceptance rates and false rejection rates. The error curve produced is shown in Figure 4-7.

Figure 4-7 - Error rate curve produced by the direct correlation method using no image preprocessing.

We see that an EER of 25.1% is produced, meaning that at the EER threshold approximately one quarter of all verification operations carried out resulted in an incorrect classification. There are a number of well-known reasons for this poor level of accuracy. Tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically. Images in face space are moved far apart due to these image capture conditions, despite being of the same person's face. The distance between images of different people becomes smaller than the area of face space covered by images of the same person, and hence false acceptances and false rejections occur frequently. Other disadvantages include the large amount of storage necessary for holding many face images and the intensive processing required for each comparison, making this method unsuitable for applications applied to a large database.
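The error-curve quantities used in this section (FAR and FRR as functions of the threshold, the EER at their crossing, and TAR = 1.0 - FRR for the ROC view) can be sketched as follows. This is an illustrative Python sketch only, assuming the accept/reject score lists produced by the comparison loop shown earlier; the synthetic data and all names are assumptions, not the thesis results.

import numpy as np

def error_curve(accept_scores, reject_scores, num_thresholds=200):
    # Sweep the threshold over the full score range and return
    # (thresholds, FAR, FRR) arrays.
    accept_scores = np.asarray(accept_scores, dtype=float)
    reject_scores = np.asarray(reject_scores, dtype=float)
    all_scores = np.concatenate([accept_scores, reject_scores])
    thresholds = np.linspace(all_scores.min(), all_scores.max(), num_thresholds)
    # FAR: fraction of different-person scores accepted (score <= threshold).
    far = np.array([(reject_scores <= t).mean() for t in thresholds])
    # FRR: fraction of same-person scores rejected (score > threshold).
    frr = np.array([(accept_scores > t).mean() for t in thresholds])
    return thresholds, far, frr

def equal_error_rate(far, frr):
    # Approximate EER: the point where the FAR and FRR curves cross.
    idx = int(np.argmin(np.abs(far - frr)))
    return (far[idx] + frr[idx]) / 2.0

# Example with synthetic scores; TAR = 1.0 - FRR gives the ROC view.
rng = np.random.default_rng(0)
accept = rng.normal(5.0, 1.0, 1000)    # same-person distances (lower)
reject = rng.normal(8.0, 1.5, 10000)   # different-person distances (higher)
thresholds, far, frr = error_curve(accept, reject)
tar = 1.0 - frr
print("approximate EER:", equal_error_rate(far, frr))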
In section 4.3 we explore the eigenface method, which attempts to address some of these issues.
The Beginning of Model Checking: A Personal Perspective

E. Allen Emerson 1,2
1. Department of Computer Sciences
2. Computer Engineering Research Center
The University of Texas at Austin, Austin TX 78712, USA

Abstract. Model checking provides an automated method for verifying concurrent systems. Correctness specifications are given in temporal logic. The method hinges on an efficient and flexible graph-theoretic reachability algorithm. At the time of its introduction in the early 1980's, the prevailing paradigm for verification was a manual one of proof-theoretic reasoning using formal axioms and inference rules oriented towards sequential programs. The need to encompass concurrent programs, the desire to avoid the difficulties with manual deductive proofs, and the small model theorem for temporal logic motivated the development of model checking.

Keywords: model checking, model-theoretic, synthesis, history, origins

1 Introduction

It has long been known that computer software programs, computer hardware designs, and computer systems in general exhibit errors. Working programmers may devote more than half of their time on testing and debugging in order to increase reliability. A great deal of research effort has been and is devoted to developing improved testing methods. Testing successfully identifies many significant errors. Yet, serious errors still afflict many computer systems including systems that are safety critical, mission critical, or economically vital. The US National Institute of Standards and Technology has estimated that programming errors cost the US economy $60B annually [Ni02].

Given the incomplete coverage of testing, alternative approaches have been sought. The most promising approach depends on the fact that programs and more generally computer systems may be viewed as mathematical objects with behavior that is in principle well-determined. This makes it possible to specify using mathematical logic what constitutes the intended (correct) behavior. Then one can try to give a formal proof or otherwise establish that the program meets its specification. This line of study has been active for about four decades now. It is often referred to as formal methods.

(This work was supported in part by National Science Foundation grants CCR-009-8141 & CCR-020-5483 and funding from Fujitsu Labs of America. email: emerson@ URL: /~emerson/)

The verification problem is: Given program M and specification h, determine whether or not the behavior of M meets the specification h. Formulated in terms of Turing Machines, the verification problem was considered by Turing [Tu36].
Given a Turing Machine M and the specification h that it should eventually halt (say on blank input tape), one has the halting problem, which is algorithmically unsolvable. In a later paper [Tu49] Turing argued for the need to give a (manual) proof of termination using ordinals, thereby presaging work by Floyd [Fl67] and others.

The model checking problem is an instance of the verification problem. Model checking provides an automated method for verifying concurrent (nominally) finite state systems that uses an efficient and flexible graph search, to determine whether or not the ongoing behavior described by a temporal property holds of the system's state graph. The method is algorithmic and often efficient because the system is finite state, despite reasoning about infinite behavior. If the answer is yes then the system meets its specification. If the answer is no then the system violates its specification; in practice, the model checker can usually produce a counterexample for debugging purposes.

At this point it should be emphasized that the verification problem and the model checking problem are mathematical problems. The specification is formulated in mathematical logic. The verification problem is distinct from the pleasantness problem [Di89], which concerns having a specification capturing a system that is truly needed and wanted. The pleasantness problem is inherently pre-formal. Nonetheless, it has been found that carefully writing a formal specification (which may be the conjunction of many sub-specifications) is an excellent way to illuminate the murk associated with the pleasantness problem.

At the time of its introduction in the early 1980's, the prevailing paradigm for verification was a manual one of proof-theoretic reasoning using formal axioms and inference rules oriented towards sequential programs. The need to encompass concurrent programs, and the desire to avoid the difficulties with manual deductive proofs, motivated the development of model checking.

In my experience, constructing proofs was sufficiently difficult that it did seem there ought to be an easier alternative. The alternative was suggested by temporal logic. Temporal logic possessed a nice combination of expressiveness and decidability. It could naturally capture a variety of correctness properties, yet was decidable on account of the "Small" Finite Model Theorem, which ensured that any satisfiable formula was true in some finite model that was small. It should be stressed that the Small Finite Model Theorem concerns the satisfiability problem of propositional temporal logic, i.e., truth in some state graph.
This ultimately led to model checking, i.e., truth in a given state graph.

The origin and development of model checking will be described below. Despite being hampered by state explosion, over the past 25 years model checking has had a substantive impact on program verification efforts. Formal verification has progressed from discussions of how to manually prove programs correct to the routine, algorithmic, model-theoretic verification of many programs.

The remainder of the paper is organized as follows. Historical background is discussed in section 2, largely related to verification in the Floyd-Hoare paradigm; protocol verification is also considered. Section 3 describes temporal logic. A very general type of temporal logic, the mu-calculus, that defines correctness in terms of fixpoint expressions is described in section 4. The origin of model checking is described in section 5 along with some relevant personal influences on me. A discussion of model checking today is given in section 6. Some concluding remarks are made in section 7.

2 Background of Model Checking

At the time of the introduction of model checking in the early 1980's, axiomatic verification was the prevailing verification paradigm. The orientation of this paradigm was manual proofs of correctness for (deterministic) sequential programs, that nominally started with their input and terminated with their output. The work of Floyd [Fl67] established basic principles for proving partial correctness, a type of safety property, as well as termination and total correctness, forms of liveness properties. Hoare [Ho69] proposed an axiomatic basis for verification of partial correctness using axioms and inference rules in a formal deductive system. An important advantage of Hoare's approach is that it was compositional, so that the proof of a program was obtained from the proofs of its constituent subprograms.

The Floyd-Hoare framework was a tremendous success intellectually. It engendered great interest among researchers. Relevant notions from logic such as soundness and (relative) completeness as well as compositionality were investigated. Proof systems were proposed for new programming languages and constructs. Examples of proofs of correctness were given for small programs.

However, this framework turned out to be of limited use in practice. It did not scale up to "industrial strength" programs, despite its merits. Problems start with the approach being one of manual proof construction. These are formal proofs that can involve the manipulations of extremely long logical formulae. This can be inordinately tedious and error-prone work for a human. In practice, it may be wholly infeasible. Even if strict formal reasoning were used throughout, the plethora of technical detail could be overwhelming. By analogy, consider the task of a human adding 100,000 decimal numbers of 1,000 digits each. This is rudimentary in principle, but likely impossible in practice for any human to perform reliably. Similarly, the manual verification of 100,000 or 10,000 or even 1,000 line programs by hand is not feasible. Transcription errors alone would be prohibitive. Furthermore, substantial ingenuity may also be required on the part of the human to devise suitable assertions for loop invariants.

One can attempt to partially automate the process of proof construction using an interactive theorem prover. This can relieve much of the clerical burden. However, human ingenuity is still required for invariants and various lemmas.
Theorem provers may also require an expert operator to be used effectively.

Moreover, the proof-theoretic framework is one-sided. It focuses on providing a way to (syntactically) prove correct programs that are genuinely (semantically) correct. If one falters or fails in the laborious process of constructing a proof of a program, what then? Perhaps the program is really correct but one has not been clever enough to prove it so. On the other hand, if the program is really incorrect, the proof systems do not cater for proving incorrectness. Since in practice programs contain bugs in the overwhelming majority of the cases, the inability to identify errors is a serious drawback of the proof-theoretic approach.

It seemed there ought to be a better way. It would be suggested by temporal logic as discussed below.

Remark. We mention that the term verification is sometimes used in a specific sense meaning to establish correctness, while the term refutation (or falsification) is used meaning to detect an error. More generally, verification refers to the two-sided process of determining whether the system is correct or erroneous.

Lastly, we should also mention in this section the important and useful area of protocol verification. Network protocols are commonly finite state. This makes it possible to do simple graph reachability analysis to determine if a bad state is accessible (cf. [vB78], [Su78]). What was lacking here was a flexible and expressive means to specify a richer class of properties.

3 Temporal Logic

Modal and temporal logics provided key inspiration for model checking. Originally developed by philosophers, modal logic deals with different modalities of truth, distinguishing between P being true in the present circumstances, possibly P holding under some circumstances, and necessarily P holding under all circumstances. When the circumstances are points in time, we have a modal tense logic or temporal logic. Basic temporal modalities include sometimes P and always P.

Several writers including Prior [Pr67] and Burstall [Bu74] suggested that temporal logic might be useful in reasoning about computer programs. For instance, Prior suggested that it could be used to describe the "workings of a digital computer". But it was the seminal paper of Pnueli [Pn77] that made the critical suggestion of using temporal logic for reasoning about ongoing concurrent programs, which are often characterized as reactive systems. Reactive systems typically exhibit ideally nonterminating behavior so that they do not conform to the Floyd-Hoare paradigm. They are also typically non-deterministic so that their non-repeatable behavior was not amenable to testing.
Their semantics can be given as infinite sequences of computation states (paths) or as computation trees. Examples of reactive systems include microprocessors, operating systems, banking networks, communication protocols, on-board avionics systems, automotive electronics, and many modern medical devices.

Pnueli used a temporal logic with basic temporal operators F (sometimes) and G (always); augmented with X (next-time) and U (until), this is today known as LTL (Linear Time Logic). Besides the basic temporal operators applied to propositional arguments, LTL permitted formulae to be built up by forming nestings and boolean combinations of subformulae. For example, G¬(C1∧C2) expresses mutual exclusion of critical sections C1 and C2; formula G(T1 ⇒ (T1 U C1)) specifies that if process 1 is in its trying region it remains there until it eventually enters its critical section.

The advantages of such a logic include a high degree of expressiveness permitting the ready capture of a wide range of correctness properties of concurrent programs, and a great deal of flexibility. Pnueli focussed on a proof-theoretic approach, giving a proof in a deductive system for temporal logic of a small example program. Pnueli does sketch a decision procedure for truth over finite state graphs. However, the complexity would be nonelementary, growing faster than any fixed composition of exponential functions, as it entails a reduction to S1S, the monadic Second order theory of 1 Successor (or SOLLO; see below). In his second paper [Pn79] on temporal logic the focus is again on the proof-theoretic approach.

I would claim that temporal logic has been a crucial factor in the success of model checking. We have one logical framework with a few basic temporal operators permitting the expression of limitless specifications. The connection with natural language is often significant as well. Temporal logic made it possible, by and large, to express the correctness properties that needed to be expressed. Without that ability, there would be no reason to use model checking. Alternative temporal formalisms in some cases may be used as they can be more expressive or succinct than temporal logic. But historically it was temporal logic that was the driving force.

These alternative temporal formalisms include: (finite state) automata (on infinite strings), which accept infinite inputs by infinitely often entering a designated set of automaton states [Bu62]. An expressively equivalent but less succinct formalism is that of ω-regular expressions; for example, a b* c^ω denotes strings of the form: one a, 0 or more b's, and infinitely many copies of c; and a property not expressible in LTL, (true P)^ω, ensures that at every even moment P holds. FOLLO (First Order Language of Linear Order) allows quantification over individual times, for example, ∀i≥0 Q(i); and SOLLO (Second Order Language of Linear Order) also allows quantification over sets of times corresponding to monadic predicates, such as ∃Q(Q(0) ∧ ∀i≥0 (Q(i) ⇒ Q(i+1))).^1 These alternatives are sometimes used for reasons of familiarity, expressiveness or succinctness.
LTL is expressively equivalent to FOLLO, but FOLLO can be nonelementarily more succinct. This succinctness is generally found to offer no significant practical advantage. Moreover, model checking is intractably (nonelementarily) hard for FOLLO. Similarly, SOLLO is expressively equivalent to ω-regular expressions but nonelementarily more succinct. See [Em90] for further discussion.

^1 Technically, the latter abbreviates ∃Q(Q(0) ∧ ∀i,j≥0 ((i<j ∧ ¬∃k(i<k<j)) ⇒ (Q(i) ⇒ Q(j)))).

Temporal logic comes in two broad styles. A linear time LTL assertion h is interpreted with respect to a single path. When interpreted over a program there is an implicit universal quantification over all paths of the program. An assertion of a branching time logic is interpreted over computation trees. The universal A (for all futures) and existential E (for some future) path quantifiers are important in this context. We can distinguish between AF P (along all futures, P eventually holds and is thus inevitable) and EF P (along some future, P eventually holds and is thus possible).

One widely used branching time logic is CTL (Computation Tree Logic) (cf. [CE81]). Its basic temporal modalities are A (for all futures) or E (for some future) followed by one of F (sometime), G (always), X (next-time), and U (until); compound formulae are built up from nestings and propositional combinations of CTL subformulae. CTL derives from [EC80]. There we defined the precursor branching time logic CTF, which has path quantifiers ∀fullpath and ∃fullpath, and is very similar to CTL. In CTF we could write ∀fullpath ∃state P as well as ∃fullpath ∃state P. These would be rendered in CTL as AF P and EF P, respectively. The streamlined notation was derived from [BMP81]. We also defined a modal mu-calculus FPF, and then showed how to translate CTF into FPF. The heart of the translation was characterizing the temporal modalities such as AF P and EF P as fixpoints. Once we had the fixpoint characterizations of these temporal operators, we were close to having model checking.

CTL and LTL are of incomparable expressive power. CTL can assert the existence of behaviors, e.g., AG EF start asserts that it is always possible to re-initialize a circuit. LTL can assert certain more complex behaviors along a computation, such as GF en ⇒ F ex, relating to fairness. (It turns out this formula is not expressible in CTL, but it is in "FairCTL" [EL87].) The branching time logic CTL* [EH86] provides a uniform framework that subsumes both LTL and CTL, but at the higher cost of deciding satisfiability. There has been an ongoing debate as to whether linear time logic or branching time logic is better for program reasoning (cf. [La80], [EH86], [Va01]).

Remark. The formal semantics of temporal logic formulae are defined with respect to a (Kripke) structure M = (S, S0, R, L) where S is a set of states, S0 comprises the initial states, R ⊆ S × S is a total binary relation, and L is a labelling of states with the atomic facts (propositions) true there. An LTL formula h such as F P is defined over a path x = t_0, t_1, t_2, ... through M by the rule M, x |= F P iff ∃i≥0 P ∈ L(t_i). Similarly a CTL formula f such as EG P holds of a state t_0, denoted M, t_0 |= EG P, iff there exists a path x = t_0, t_1, t_2, ... in M such that ∀i≥0 P ∈ L(t_i). For LTL h, we define M |= h iff for all paths x starting in S0, M, x |= h. For CTL formula f we define M |= f iff for each s ∈ S0, M, s |= f. A structure is also known as a state graph or state transition graph or transition system. See [Em90] for details.

4 The Mu-calculus

The mu-calculus may be viewed as a particular but very general temporal logic.
Some formulations go back to the work of de Bakker and Scott [deBS69]; we deal specifically with the (propositional or) modal mu-calculus (cf. [EC80], [Ko83]). The mu-calculus provides operators for defining correctness properties using recursive definitions plus least fixpoint and greatest fixpoint operators. Least fixpoints correspond to well-founded or terminating recursion, and are used to capture liveness or progress properties asserting that something does happen. Greatest fixpoints permit infinite recursion. They can be used to capture safety or invariance properties. The mu-calculus is very expressive and very flexible. It has been referred to as a "Theory of Everything".

The formulae of the mu-calculus are built up from atomic proposition constants P, Q, ..., atomic proposition variables Y, Z, ..., propositional connectives ∨, ∧, ¬, and the least fixpoint operator µ as well as the greatest fixpoint operator ν. Each fixpoint formula such as µZ.τ(Z) should be syntactically monotone, meaning Z occurs under an even number of negations, and similarly for ν.

The mu-calculus is interpreted with respect to a structure M = (S, R, L). The power set of S, 2^S, may be viewed as the complete lattice (2^S, S, ∅, ⊆, ∪, ∩). Intuitively, each (closed) formula may be identified with the set of states of S where it is true. Thus, false, which corresponds to the empty set, is the bottom element; true, which corresponds to S, is the top element; and implication (∀s∈S [P(s) ⇒ Q(s)]), which corresponds to simple set-theoretic containment (P ⊆ Q), provides the partial ordering on the lattice. An open formula τ(Z) defines a mapping from 2^S → 2^S whose value varies as Z varies. A given τ: 2^S → 2^S is monotone provided that P ⊆ Q implies τ(P) ⊆ τ(Q).

Tarski-Knaster Theorem. (cf. [Ta55], [Kn28]) Let τ: 2^S → 2^S be a monotone functional. Then
(a) µY.τ(Y) = ∩{Y : τ(Y) = Y} = ∩{Y : τ(Y) ⊆ Y},
(b) νY.τ(Y) = ∪{Y : τ(Y) = Y} = ∪{Y : τ(Y) ⊇ Y},
(c) µY.τ(Y) = ∪_i τ^i(false), where i ranges over all ordinals of cardinality at most that of the state space S, so that when S is finite i ranges over [0:|S|], and
(d) νY.τ(Y) = ∩_i τ^i(true), where i ranges over all ordinals of cardinality at most that of the state space S, so that when S is finite i ranges over [0:|S|].

Consider the CTL property AF P. Note that it is a fixed point or fixpoint of the functional τ(Z) = P ∨ AX Z. That is, as the value of the input Z varies, the value of the output τ(Z) varies, and we have AF P = τ(AF P) = P ∨ AX AF P.
It can be shown that AF P is the least fixpoint of τ(Z), meaning the set of states associated with AF P is a subset of the set of states associated with Z, for any fixpoint Z = τ(Z). This might be denoted µZ. Z = τ(Z). More succinctly, we normally write just µZ.τ(Z). In this case we have AF P = µZ. P ∨ AX Z.

We can get some intuition for the mu-calculus by noting the following fixpoint characterizations for CTL properties:

EF P = µZ. P ∨ EX Z
AG P = νZ. P ∧ AX Z
AF P = µZ. P ∨ AX Z
EG P = νZ. P ∧ EX Z
A(P U Q) = µZ. Q ∨ (P ∧ AX Z)
E(P U Q) = µZ. Q ∨ (P ∧ EX Z)

For all these properties, as we see, the fixpoint characterizations are simple and plausible. It is not too difficult to give rigorous proofs of their correctness (cf. [EC80], [EL86]). We emphasize that the mu-calculus is a rich and powerful formalism; its formulae are really representations of alternating finite state automata on infinite trees [EJ91]. Since even such basic automata as deterministic finite state automata on finite strings can form quite complex "cans of worms", we should not be so surprised that it is possible to write down highly inscrutable mu-calculus formulae for which there is no readily apparent intuition regarding their intended meaning. The mu-calculus has also been referred to as the "assembly language of program logics", reflecting both its comprehensiveness and potentially intricate syntax. On the other hand, many mu-calculus characterizations of correctness properties are elegant due to its simple underlying mathematical organization.

In [EL86] we introduced the idea of model checking for the mu-calculus instead of testing satisfiability. We catered for efficient model checking in fragments of the mu-calculus. This provides a basis for practical (symbolic) model checking algorithms. We gave an algorithm essentially of complexity n^d, where d is the alternation depth reflecting the number of significantly nested least and greatest fixpoint operators. We showed that common logics such as CTL, LTL, and CTL* were of low alternation depth d = 1 or d = 2. We also provided succinct fixpoint characterizations for various natural fair scheduling criteria. A symbolic fair cycle detection method, known as the "Emerson-Lei" algorithm, is comprised of a simple fixpoint characterization plus the Tarski-Knaster theorem. It is widely used in practice even though it has worst case quadratic cost.
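To make the fixpoint characterizations concrete, the following minimal Python sketch (not from the paper; the set-based structure encoding and all names are illustrative assumptions) evaluates EF P, AF P and EG P over a small explicit-state structure by the iteration of Tarski-Knaster clauses (c) and (d): least fixpoints are computed by iterating from false (the empty set), greatest fixpoints by iterating from true (all states).

def pre_exists(R, Z):
    # EX Z: states with at least one R-successor in Z.
    return {s for s, succs in R.items() if succs & Z}

def pre_forall(R, Z):
    # AX Z: states all of whose R-successors lie in Z (R is total).
    return {s for s, succs in R.items() if succs <= Z}

def lfp(tau):
    # Least fixpoint by iteration from the empty set (false).
    Z = set()
    while True:
        nxt = tau(Z)
        if nxt == Z:
            return Z
        Z = nxt

def gfp(tau, S):
    # Greatest fixpoint by iteration from the full state set (true).
    Z = set(S)
    while True:
        nxt = tau(Z)
        if nxt == Z:
            return Z
        Z = nxt

# Example structure: S = {0, 1, 2}, transitions 0->1, 1->2, 2->2, P holds in {2}.
S = {0, 1, 2}
R = {0: {1}, 1: {2}, 2: {2}}
P = {2}

EF_P = lfp(lambda Z: P | pre_exists(R, Z))      # µZ. P ∨ EX Z
AF_P = lfp(lambda Z: P | pre_forall(R, Z))      # µZ. P ∨ AX Z
EG_P = gfp(lambda Z: P & pre_exists(R, Z), S)   # νZ. P ∧ EX Z
print(EF_P, AF_P, EG_P)                         # {0, 1, 2} {0, 1, 2} {2}

Because the lattice is finite, each iteration terminates after at most |S| steps, which is exactly the finite case of clauses (c) and (d) above.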
Empirically, it usually outperforms alternatives.

5 The Origin of Model Checking

There were several influences in my personal background that facilitated the development of model checking. In 1975 Zohar Manna gave a talk at the University of Texas on fixpoints and the Tarski-Knaster Theorem. I was familiar with Dijkstra's book [Di76] extending the Floyd-Hoare framework with wlp, the weakest liberal precondition for partial correctness, and wp, the weakest precondition for total correctness. It turns out that wlp and wp may be viewed as modal operators, for which Dijkstra implicitly gave fixpoint characterizations, although Dijkstra did not favor this viewpoint. Basu and Yeh [BY75] at Texas gave fixpoint characterizations of weakest preconditions for while loops. Ed Clarke [Cl79] gave similar fixpoint characterizations for both wp and wlp for a variety of control structures.

I will now describe how model checking originated at Harvard University. In prior work [EC80] we gave fixpoint characterizations for the main modalities of a logic that was essentially CTL. These would ultimately provide the first key ingredient of model checking.

Incidentally, [EC80] is a paper that could very well not have appeared. Somehow the courier service delivering the hard-copies of the submission to Amsterdam for the program chair at CWI (Dutch for "Center for Mathematics and Computer Science") sent the package in bill-the-recipient mode. Fortunately, CWI was gracious and accepted the package. All that remained to undo this small misfortune was to get an overseas bank draft to reimburse them.

The next work, entitled "Design and Synthesis of Synchronization Skeletons using Branching Time Logic", was devoted to program synthesis and model checking. I suggested to Ed Clarke that we present the paper, which would be known as [CE81], at the IBM Logics of Programs workshop, since he had an invitation to participate.

Both the idea and the term model checking were introduced by Clarke and Emerson in [CE81]. Intuitively, this is a method to establish that a given program meets a given specification where:
– The program defines a finite state graph M.
– M is searched for elaborate patterns to determine if the specification f holds.
– Pattern specification is flexible.
– The method is efficient in the sizes of M and, ideally, f.
– The method is algorithmic.
– The method is practical.

The conception of model checking was inspired by program synthesis. I was interested in verification, but struck by the difficulties associated with manual proof-theoretic verification as noted above. It seemed that it might be possible to avoid verification altogether and mechanically synthesize a correct program directly from its CTL specification. The idea was to exploit the small model property possessed by certain decidable temporal logics: any satisfiable formula must have a "small" finite model of size that is a function of the formula size.
The synthesis method would be sound: if the input specification was satisfiable, it built a finite global state graph that was a model of the specification, from which individual processes could be extracted. The synthesis method should also be complete: if the specification was unsatisfiable, it should say so.

Initially, it seemed to me technically problematic to develop a sound and complete synthesis method for CTL. However, it could always be ensured that an alleged synthesis method was at least sound. This was clear because, given any finite model M and CTL specification f, one can algorithmically check that M is a genuine model of f by evaluating (verifying) the basic temporal modalities over M based on the Tarski-Knaster theorem. This was the second key ingredient of model checking. Composite temporal formulae comprised of nested subformulae and boolean combinations of subformulae could be verified by recursive descent. Thus, fixpoint characterizations, the Tarski-Knaster theorem, and recursion yielded model checking.

Thus, we obtained the model checking framework. A model checker could be quite useful in practice, given the prevalence of finite state concurrent systems. The temporal logic CTL had the flexibility and expressiveness to capture many important correctness properties. In addition, the CTL model checking algorithm was of reasonable efficiency, polynomial in the structure and specification sizes. Incidentally, in later years we sometimes referred to temporal logic model checking.

The crucial roles of abstraction, synchronization skeletons, and finite state spaces were discussed in [CE81]:

    The synchronization skeleton is an abstraction where detail irrelevant to synchronization is suppressed. Most solutions to synchronization problems are in fact given as synchronization skeletons.

    Because synchronization skeletons are in general finite state... propositional temporal logic can be used to specify their properties.

    The finite model property ensures that any program whose synchronization properties can be expressed in propositional temporal logic can be realized by a finite state machine.

Conclusions of [CE81] included the following prognostications, which seem to have been on target:

    [Program Synthesis] may in the long run be quite practical. Much additional research will be needed, however, to make it feasible in practice. ... We believe that practical [model checking] tools could be developed in the near future.

To sum up, [CE81] made several contributions. It introduced model checking, giving an algorithm of quadratic complexity O(|f||M|^2). It introduced the logic CTL. It gave an algorithmic method for concurrent program synthesis (that was both sound and complete). It argued that most concurrent systems can be abstracted to finite state synchronization skeletons. It described a method for efficiently model checking basic fairness using strongly connected components.
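To illustrate the recursive-descent evaluation sketched above (boolean combinations handled by recursion on subformulae, temporal modalities handled via their fixpoint characterizations), here is a minimal Python sketch. The formula encoding as nested tuples, the supported operators, and all names are illustrative assumptions; this is not the original EMC implementation or the algorithm of [CE81] itself.

def eval_ctl(formula, S, R, L):
    # Return the set of states of M = (S, R, L) satisfying `formula`.
    if isinstance(formula, str):                      # atomic proposition
        return {s for s in S if formula in L[s]}
    op = formula[0]
    if op == 'not':
        return S - eval_ctl(formula[1], S, R, L)
    if op == 'and':
        return eval_ctl(formula[1], S, R, L) & eval_ctl(formula[2], S, R, L)
    if op == 'or':
        return eval_ctl(formula[1], S, R, L) | eval_ctl(formula[2], S, R, L)
    if op == 'EX':
        Z = eval_ctl(formula[1], S, R, L)
        return {s for s in S if R[s] & Z}
    if op == 'EF':                                    # µZ. p ∨ EX Z
        p = eval_ctl(formula[1], S, R, L)
        Z = set()
        while True:
            nxt = p | {s for s in S if R[s] & Z}
            if nxt == Z:
                return Z
            Z = nxt
    if op == 'EG':                                    # νZ. p ∧ EX Z
        p = eval_ctl(formula[1], S, R, L)
        Z = set(S)
        while True:
            nxt = p & {s for s in S if R[s] & Z}
            if nxt == Z:
                return Z
            Z = nxt
    raise ValueError("unsupported operator: " + str(op))

# Example: the small structure used earlier, now with a labelling L.
S = {0, 1, 2}
R = {0: {1}, 1: {2}, 2: {2}}
L = {0: set(), 1: {'P'}, 2: {'P', 'Q'}}
S0 = {0}

spec = ('EF', ('and', 'P', 'Q'))                      # EF(P ∧ Q)
sat = eval_ctl(spec, S, R, L)
print(S0 <= sat)                                      # M |= spec iff every initial state satisfies it

The remaining CTL modalities (AF, AG, AU, EU) would be handled the same way from their fixpoint characterizations; since each subformula is evaluated once over the structure, the overall cost stays polynomial in the sizes of the formula and the state graph.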
An NP-hardness result was established for checking certain assertions in a richer logic than CTL. A prototype (and non-robust) model checking tool BMoC was developed, primarily by a Harvard undergraduate, to permit verification of synchronization protocols.

A later paper [CES86] improved the complexity of CTL model checking to linear O(|f||M|). It showed how to efficiently model check relative to unconditional and weak fairness. The EMC model checking tool was described, and a version of the alternating bit protocol verified. A general framework for efficiently model checking fairness properties was given in [EL87], along with a reduction showing that CTL* model checking could be done as efficiently as LTL model checking.

Independently, a similar method was developed in France by Sifakis and his student [QS82]. Programs were interpreted over transition systems. A branching
Global Environmental Change 16 (2006) 268–281

Vulnerability

W. Neil Adger
Tyndall Centre for Climate Change Research, School of Environmental Sciences, University of East Anglia, Norwich NR4 7TJ, UK
Received 8 May 2005; received in revised form 13 February 2006; accepted 15 February 2006

Abstract

This paper reviews research traditions of vulnerability to environmental change and the challenges for present vulnerability research in integrating with the domains of resilience and adaptation. Vulnerability is the state of susceptibility to harm from exposure to stresses associated with environmental and social change and from the absence of capacity to adapt. Antecedent traditions include theories of vulnerability as entitlement failure and theories of hazard. Each of these areas has contributed to present formulations of vulnerability to environmental change as a characteristic of social-ecological systems linked to resilience. Research on vulnerability to the impacts of climate change spans all the antecedent and successor traditions. The challenges for vulnerability research are to develop robust and credible measures, to incorporate diverse methods that include perceptions of risk and vulnerability, and to incorporate governance research on the mechanisms that mediate vulnerability and promote adaptive action and resilience. These challenges are common to the domains of vulnerability, adaptation and resilience and form common ground for consilience and integration.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Vulnerability; Disasters; Food insecurity; Hazards; Social-ecological systems; Surprise; Governance; Adaptation; Resilience

1. Introduction

The purpose of this article is to review existing knowledge on analytical approaches to vulnerability to environmental change in order to propose synergies between research on vulnerability and on resilience of social-ecological systems. The concept of vulnerability has been a powerful analytical tool for describing states of susceptibility to harm, powerlessness, and marginality of both physical and social systems, and for guiding normative analysis of actions to enhance well-being through reduction of risk. In this article, I argue that emerging insights into the resilience of social-ecological systems complement and can significantly add to a converging research agenda on the challenges faced by human environment interactions under stresses caused by global environmental and social change.

I review the precursors and the present emphases of vulnerability research. I argue that, following decades of vulnerability assessment that distinguished between process and outcome, much exciting current research emphasizes multiple stressors and multiple pathways of vulnerability. This current research can potentially contribute to emerging resilience science through methods and conceptualization of the stresses and processes that lead to threshold changes, particularly those involved in the social and institutional dynamics of social-ecological systems.

Part of the potential convergence and learning across vulnerability and resilience research comes from a consistent focus on social-ecological systems. The concept of a social-ecological system reflects the idea that human action and social structures are integral to nature and hence any distinction between social and natural systems is arbitrary. Clearly natural systems refer to biological and biophysical processes, while social systems are made up of rules and institutions that mediate human use of resources as well as systems of knowledge and ethics
that interpret natural systems from a human perspective (Berkes and Folke, 1998). In the context of these social-ecological systems, resilience refers to the magnitude of disturbance that can be absorbed before a system changes to a radically different state, as well as the capacity to self-organise and the capacity for adaptation to emerging circumstances (e.g. Carpenter et al., 2001; Berkes et al., 2003; Folke, 2006).

Vulnerability, by contrast, is usually portrayed in negative terms as the susceptibility to be harmed. The central idea of the often-cited IPCC definition (McCarthy et al., 2001) is that vulnerability is the degree to which a system is susceptible to and is unable to cope with adverse effects (of climate change). In all formulations, the key parameters of vulnerability are the stress to which a system is exposed, its sensitivity, and its adaptive capacity. Thus, vulnerability research and resilience research have common elements of interest: the shocks and stresses experienced by the social-ecological system, the response of the system, and the capacity for adaptive action. The points of convergence are more numerous and more fundamental than the points of divergence.

The different formulations of research needs, research methods, and normative implications of resilience and vulnerability research stem from, I believe, the formulation of the objectives of study (or the system) in each case. As Berkes and Folke (1998, p. 9) point out, 'there is no single universally accepted way of formulating the linkages between human and natural systems'. Other areas of research in the human–environment interaction (such as common property, ecological economics or adaptive management) conceptualize social-ecological linkages in different ways. The common property resource tradition, for example, stresses the importance of social, political and economic organizations in social-ecological systems, with institutions as mediating factors that govern the relationship between social systems and the ecosystems on which they depend (Dolšak and Ostrom, 2003). Ecological economics, by contrast, links social and natural systems through analysis of the interactions and substitutability of natural capital with other forms of capital (human, social and physical) (e.g. the 'containing and sustaining ecosystem' idea of Daly and Farley, 2004). Adaptive management, by contrast, deals with the unpredictable interactions between humans and ecosystems that evolve together: it is the science of explaining how social and natural systems learn through experimentation (Berkes and Folke, 1998). All of these other traditions (and both vulnerability and resilience research in effect) seek to elaborate the nature of social-ecological systems while using theories with explanatory power for particular dimensions of human–environment interactions.

Evolving insights into the vulnerability of social-ecological systems show that vulnerability is influenced by the build up or erosion of the elements of social-ecological resilience. These are the ability to absorb the shocks, the autonomy of self-organisation and the ability to adapt both in advance and in reaction to shocks. The impacts of and recovery from the Asian tsunami of 2004, or the ability of small islands to cope with weather-related extremes, for example, demonstrate how discrete events in nature expose underlying vulnerability and push systems into new domains where resilience may be reduced (Adger et
al., 2005b). In a world of global change, such discrete events are becoming more common. Indeed, risk and perturbation in many ways define and constitute the landscape of decision-making for social-ecological systems.

I proceed by examining the traditions within vulnerability research, including the fields of disasters research (delineated into human ecology, hazards, and the 'Pressure and Release' model) and research on entitlements. This discussion is complementary to other reviews that discern trends and strategies for useful and analytically powerful vulnerability research. Eakin and Luers (2006), Bankoff et al. (2004), Pelling (2003), Füssel and Klein (2006), Cutter (2003), Ionescu et al. (2005) and Kasperson et al. (2005), for example, present significant reviews of the evolution and present application of vulnerability tools and methods across resource management, social change and urbanization and climate change. These build on earlier elaborations by Liverman (1990), Dow (1992), Ribot et al. (1996), and others (see the paper by Janssen et al. (2006) for an evaluation of the seminal articles). Elements of disasters and entitlements theories have contributed to current use of vulnerability in the analysis of social-ecological systems and in sustainable livelihoods research. Livelihoods research remains, I argue, firmly rooted in social systems rather than integrative of risks across social-ecological systems. All these traditions and approaches are found in applications of vulnerability in the context of climate change. The remaining sections of the paper examine methodological developments and challenges to human dimensions research, particularly on measurement of vulnerability, dealing with perceptions of risk, and issues of governance. The paper demonstrates that these challenges are common to the fields of vulnerability, adaptation and resilience and hence point to common ground for learning between presently disparate traditions and communities.

2. Evolution of approaches to vulnerability

2.1. Antecedents: hazards and entitlements

A number of traditions and disciplines, from economics and anthropology to psychology and engineering, use the term vulnerability. It is only in the area of human–environment relationships that vulnerability has common, though contested, meaning. Human geography and human ecology have, in particular, theorized vulnerability to environmental change. Both of these disciplines have made contributions to present understanding of social-ecological systems, while related insights into entitlements grounded vulnerability analysis in theories of social change and decision-making. In this section, I argue that all these disciplinary traditions continue to contribute to emerging methods and concepts around social-ecological systems and their inherent and dynamic vulnerability.

While there are differences in approaches, there are many commonalities in vulnerability research in the environmental arena. First, it is widely noted that vulnerability to environmental change does not exist in isolation from the wider political economy of resource use. Vulnerability is driven by inadvertent or deliberate human action that reinforces self-interest and the distribution of power, in addition to interacting with physical and ecological systems. Second, there are common terms across theoretical approaches: vulnerability is most often conceptualized as being constituted by components that include exposure and sensitivity to perturbations or external stresses, and the capacity to
adapt.Exposure is the nature and degree to which a system experiences environmental or socio-political stress.The characteristics of these stresses include their magnitude,frequency,duration and areal extent of the hazard (Burton et al.,1993).Sensitivity is the degree to which a system is modified or affected by perturbations.1Adaptive capacity is the ability of a system to evolve in order to accommodate environmental hazards or policy change and to expand the range of variability with which it can cope.There are,I believe,two relevant existing theories that relate to human use of environmental resources and to environmental risks:the vulnerability and related resilience research on social-ecological systems and the separate literature on vulnerability of livelihoods to poverty.Fig.1is an attempt to portray the overlap in ideas and those ideas,which are distinct from each other and is based on my reading of this literature.2Two major research traditions in vulnerability acted as seedbeds for ideas that eventually translated into current research on vulnerability of social and physical systems in an integrated manner.These two antecedents are the analysis of vulnerability as lack of entitlements and the analysis of vulnerability to natural hazards.These are depicted in the upper part of Fig.1,with the hazards tradition delineated into three overlapping areas of human ecology (or political ecology),natural hazards,and the so-called ‘Pressure and Release’model that spans the space between hazards and political ecology approaches.Other reviews of vulnerability have come to different conclusions on intellectual traditions.Cutter (1996)and Cutter et al.(2003),for example,classify research into first,vulnerability as exposure (conditions that make people or places vulnerable to hazard),second,vulnerability as social condition (measure of resilience to hazards),and third,‘the integration of potential exposures and societal resiliencewith a specific focus on places or regions (Cutter et al.,2003,p.243).O’Brien et al.(2005)identify similar trends in ‘vulnerability as outcome’and ‘contextual vulnerability’as two opposing research foci and traditions,relating to debates within the climate change area (see also Kelly and Adger,2000).These distinctions between outcome and processes of vulnerability are also important,though not captured in Fig.1,which portrays more of the disciplinary divide between those endeavours which largely ignore physical and biological systems (entitlements and liveli-hoods)and those that try to integrate social and ecological systems.The impetus for research on entitlements in livelihoods has been the need to explain food insecurity,civil strife and social upheaval.Research on the social impacts of natural hazards came from explaining commonalities between apparently different types of natural disasters and their impacts on society.But clearly these phenomena (of entitlement failure leading to famine and natural hazards)themselves are not independent of each other.While some famines can be triggered by extreme climate events,such as drought or flood,for example,vulnerability researchers have increasingly shown that famines and food insecurity are much more often caused by disease,war or other factors (Sen,1981;Swift,1989;Bohle et al.,1994;Blaikie et al.,1994).Entitlements-based explanations of vulnerability focussed almost exclusively on the social realm of institu-tions,well-being and on class,social status and gender as important variables while vulnerability research on 
natural hazards developed an integral knowledge of environmental risks with human response drawing on geographical and psychological perspectives in addition to social parameters of risk.Vulnerability to food insecurity is explained,through so-called entitlement theory,as a set of linked economic and institutional factors.Entitlements are the actual or potential resources available to individuals based on their own production,assets or reciprocal arrangements.Food insecurity is therefore a consequence of human activity,which can be prevented by modified behaviour and by political interventions.Vulnerability is the result of processes in which humans actively engage and which they can almost always prevent.The theory of entitlements as an explanation for famine causes was developed in the early 1980s (Sen,1981,1984)and displaced prior notions that shortfalls in food production through drought,flood,or pest,were the principal cause of famine.It focused instead on the effective demand for food,and the social and economic means of obtaining it.Entitlements are sources of welfare or income that are realized or are latent.They are ‘the set of alternative commodity bundles that a person can command in a society using the totality of rights and opportunities that he or she faces’(Sen,1984,p.497).Essentially,vulnerability of livelihoods to shocks occurs when people have insufficient real income and wealth,and when there is a breakdown in other previously held endowments.1The generic meaning of sensitivity is applied in the climate change field where McCarthy et al.(2001)in the IPCC report of 2001defines sensitivity and illustrates the generic meaning with reference to climate change risks thus:‘the degree to which a system is affected,either adversely or beneficially,by climate-related stimuli.The effect may be direct (e.g.,a change in crop yield in response to a change in the mean,range,or variability of temperature)or indirect (e.g.,damages caused by an increase in the frequency of coastal flooding due to sea level rise)’.2The observations leading to Fig.1are confirmed to an extent by the findings of Janssen et al.(2006)on the importance of Sen (1981)as a seminal reference across many areas of vulnerability research and the non-inclusion of the present livelihood vulnerability literature.W.N.Adger /Global Environmental Change 16(2006)268–281270The advantage of the entitlements approach to famine is that it can be used to explain situations where populations have been vulnerable to famine even where there are no absolute shortages of food or obvious environmental drivers at work.Famines and other crises occur when entitlements fail.While the entitlements approach to analysing vulner-ability to famine often underplayed ecological or physical risk,it succeeded in highlighting social differentiation in cause and outcome of vulnerability.The second research tradition(upper right in Fig.1)on natural hazards,by contrast has since its inception attempted to incorporate physical science,engineering and social science to explain linkages between system elements.The physical elements of exposure,probability and impacts of hazards,both seemingly natural and unnatural, are the basis for this tradition.Burton et al.(1978and 1993)summarized and synthesized decades of research and practice onflood management,geo-hazards and major technological hazards,deriving lessons on individual perceptions of risk,through to international collective action.They demonstrated that virtually all types of natural hazard and all 
social and political upheaval have vastly different impacts on different groups in society. For many natural hazards the vulnerability of human populations is based on where they reside, their use of the natural resources, and the resources they have to cope. The human ecology tradition (sometimes labelled the political ecology stream; Cutter, 1996) within analysis of vulnerability to hazards (upper right in Fig. 1) argued that the discourse of hazard management, because of a perceived dominance of engineering approaches, failed to engage with the political and structural causes of vulnerability within society. Human ecologists attempted to explain why the poor and marginalized have been most at risk from natural hazards (Hewitt, 1983; Watts, 1983), what Hewitt (1997) termed 'the human ecology of endangerment'. Poorer households tend to live in riskier areas in urban settlements, putting them at risk from flooding, disease and other chronic stresses. Women are differentially at risk from many elements of environmental hazards, including, for example, the burden of work in recovery of home and livelihood after an event (Fordham, 2003). Flooding in low-lying coastal areas associated with monsoon climates or hurricane impacts, for example, are seasonal and usually short lived, yet can have significant unexpected impacts for vulnerable sections of society. Burton et al. (1993), from a mainstream hazards tradition, argued that hazards are essentially mediated by institutional structures, and that increased economic activity does not necessarily reduce vulnerability to impacts of hazards in general. As with food insecurity, vulnerability to natural hazards has often been explained by technical and institutional factors. By contrast the human ecology approach emphasizes the role of economic development in adapting to changing exogenous risk and hence differences in class structure, governance, and economic dependency in the differential impacts of hazards (Hewitt, 1983). Much of the world's 'vulnerability as experienced' comes from perceptions of insecurity. Insecurity at its most basic level is not only a lack of security of food supply and availability and of economic well-being, but also freedom from strife and conflict. Hewitt (1994, 1997) argues that violence and the 'disasters of war' have been pervasive sources of danger for societies, accounting for up to half of all reported famines in the past century.

Fig. 1. Traditions in vulnerability research and their evolution (the figure distinguishes direct and indirect flows of ideas).

While war and civil strife often exacerbate natural hazards, the perceptions of vulnerability associated with them are fundamentally different in that food insecurity, displacement and violence to create vulnerabilities are deliberate acts perpetrated towards political ends (Hewitt, 1994, 1997). In Fig. 1, I portray these two traditions of hazards research as being successfully bridged by Blaikie and colleagues (1994) in their 'Pressure and Release' model of hazards. They proposed that physical or biological hazards represent one pressure and characteristic of vulnerability and that a further pressure comes from the cumulative progression of vulnerability, from root causes through to local geography and social differentiation. These two pressures culminate in the disasters that result from the additive pressures of hazard and vulnerability (Blaikie et al., 1994). The analysis captured the essence of vulnerability from the physical hazards tradition while also identifying
the proximate and underlying causes of vulner-ability within a human ecology framework.The analysis was also comprehensive in seeking to explain physical and biological hazards(though deliberately omitting technolo-gical hazards).Impacts associated with geological hazards often occur without much effective warning and with a speed of onset of only a few minutes.By contrast,the HIV/ AIDS epidemic is a long wave disaster with a slow onset but catastrophic impact(Barnett and Blaikie,1994; Stabinski et al.,2003).Blaikie et al.(1994)also prescribed actions and principles for recovery and mitigation of disasters that focussed explicitly on reducing vulnerability.The pressure and release model is portrayed in Fig.1as successfully synthesizing social and physical vulnerability.In being comprehensive and in giving equal weight to‘hazard’and ‘vulnerability’as pressures,the analysis fails to provide a systematic view of the mechanisms and processes of vulnerability.Operationalising the pressure and release model necessarily involves typologies of causes and categorical data on hazards types,limiting the analysis in terms of quantifiable or predictive relationships.In Fig.1,a separate stream on sustainable livelihoods and vulnerability to poverty is shown as a successor to vulnerability as entitlement failure.This research tradition, largely within development economics,tends not to consider integrative social-ecological systems and hence, but nevertheless complements the hazards-based ap-proaches in Fig.1through conceptualization and measure-ment of the links between risk and well-being at the individual level(Alwang et al.,2001;Adger and Winkels, 2006).A sustainable livelihood refers to the well-being of a person or household(Ellis,2000)and comprises the capabilities,assets and activities that lead to well-being (Chambers and Conway,1992;Allison and Ellis,2001). 
Vulnerability in this context refers to the susceptibility to circumstances of not being able to sustain a livelihood: the concepts are most often applied in the context of development assistance and poverty alleviation. While livelihoods are conceptualized as flowing from capital assets that include ecosystem services (natural capital), the physical and ecological dynamics of risk remain largely unaccounted for in this area of research. The principal focus is on consumption of poor households as a manifestation of vulnerability (Dercon, 2004). Given the importance of this tradition and the contribution that researchers in this field make to methods (see section below), it seems that cross-fertilization of development economics with vulnerability, adaptation and resilience research would yield new insights.

2.2. Successors and current research frontiers

The upper part of Fig. 1 and the discussions here portray a somewhat linear relationship between antecedent and successor traditions of vulnerability research. This is, of course, a caricature, given the influence of particular researchers across traditions and the overlap and cross-fertilization of ideas and methods. Nevertheless, from its origins in disasters and entitlement theories, there is a newly emerging synthesis of systems-oriented research attempting, through advances in methods, to understand vulnerability in a holistic manner in natural and social systems. Portraying vulnerability as a property of a social-ecological system, and seeking to elaborate the mechanisms and processes in a coupled manner, represents a conceptual advance in analysis (Turner et al., 2003a). Rather than focusing on multiple outcomes from a single physical stress, the approach proposed by Turner and colleagues (2003a) seeks to analyse the elements of vulnerability (its exposure, sensitivity and resilience) of a bounded system at a particular spatial scale. It also seeks to quantify and make explicit both the links to other scales and to quantify the impact of action to cope and responsibility on other elements of the system (such as the degree of exposure of ecological components or communities). The interdisciplinary and integrative nature of the framework is part of a wider effort to identify science that supports goals of sustainability (e.g. Kates et al., 2001) and is mirrored in other system-oriented vulnerability research such as that developed at the Potsdam Institute for Climate Impacts Research (Schröter et al., 2005; Ionescu et al., 2005).
Integrative frameworks focused on interaction between properties of social-ecological systems have built on pioneering work, for example by Liverman (1990), that crucially developed robust methods for vulnerability assessment. In her work on vulnerability to drought in Mexico, Liverman (1990) argued for integrative approaches based on comparative quantitative assessment of the drivers of vulnerability. She showed that irrigation and land tenure have the greatest impact on the incidence of vulnerability to drought, making collectively owned ejido land more susceptible. Thus, using diverse sources of quantitative data, this study showed the places and the people and the drivers within the social-ecological system that led to vulnerability. Following in that tradition, Luers and colleagues (2003) utilize the Turner et al. (2003a) framework to also examine vulnerability of social-ecological systems in an agricultural region of Mexico and demonstrate innovations in methods associated with this approach. In recognizing many of the constraints they make a case for measuring the vulnerability of specific variables: they argue that vulnerability should shift away from quantifying critical areas or vulnerable places towards measures that can be applied at any scale. They argue for assessing the vulnerability of the most important variables in the causal chain of vulnerability to specific sets of stressors. They develop generic metrics that attempt to assess the relationship between a wide range of stressors and the outcome variables of concern (Luers et al., 2003). In their most general form:

Vulnerability = (sensitivity to stress / state relative to threshold) × Prob. of exposure to stress.

The parameter under scrutiny here could be a physical or social parameter. In the case of Luers et al. (2003) they investigate the vulnerability of farming systems in an irrigated area of Mexico through examining agricultural yields. But the same generalized equation could examine disease prevalence, mortality in human populations, or income of households, all of which are legitimate potential measures within vulnerability analysis. But other research presently argues that the key to understanding vulnerability lies in the interaction between social dynamics within a social-ecological system and that these dynamics are important for resilience. For example, livelihood specialization and diversity have been shown to be important elements in vulnerability to drought in Kenya and Tanzania (Eriksen et al., 2005). While these variables can be measured directly, it is the social capital and social relations that translate these parameters into vulnerability of place. Eriksen et al.
(2005)show that women are excluded from particular high-value activities:hence vulnerability is reproduced within certain parts of social systems through deep structural elements.Similarly,Eakin(2005)shows for Mexican farmers that diversity is key to avoiding vulnerability and that investment in commercial high-yielding irrigated agriculture can exacerbate vulnerability compared to a farming strategy based on maize(that is in effect more sensitive to drought).It is the multi-level interactions between system components(livelihoods, social structures and agricultural policy)that determine system vulnerability.Hence vulnerability assessment incorporates a significant range of parameters in building quantitative and qualita-tive pictures of the processes and outcomes of vulner-ability.These relate to ideas of resilience by identifying key elements of the system that represent adaptive capacity (often social capital and other assets—Pelling and High,2005;Adger,2003)and the impact of extreme event thresholds on creating vulnerabilities within systems.2.3.Traditions exemplified in vulnerability to climate changeResearch on vulnerability applied to the issue of climate change impacts and risk demonstrates the full range of research traditions while contributing in a significant way to the development of newly emerging systems vulner-ability analysis.Vulnerability research in climate change has,in some ways,a unique distinction of being a widely accepted and used term and an integral part of its scientific agenda.Climate change represents a classic multi-scale global change problem in that it is characterized by infinitely diverse actors,multiple stressors and multiple time scales.The existing evidence suggests that climate change impacts will substantially increase burdens on those populations that are already vulnerable to climate ex-tremes,and bear the brunt of projected(and increasingly observed)changes that are attributable to global climate change.The2003European heatwave and even the impacts of recent Atlantic hurricanes demonstrate essential ele-ments of system vulnerability(Poumade re et al.,2005; Stott et al.,2004;Kovats et al.,2005;O’Brien,2006) Groups that are already marginalized bear a dispropor-tionate burden of climate impacts,both in the developed countries and in the developing world.The science of climate change relies on insights from multiple disciplines and is founded on multiple epistemol-ogies.Climate change is,in addition,unusually focused on consensus(Oreskes,2004)because of the nature of evidence and interaction of science with a highly contested legal instrument,the UN Framework Convention on Climate Change.Within climate change,therefore,the reports of the Intergovernmental Panel on Climate Change (IPCC)have become an authoritative source that sets agendas and acts as a legitimizing device for research.It is therefore worth examining primary research on vulner-ability to climate change and its interpretation within the reports of the IPCC.The full range of concepts and approaches highlighted in Fig.1are used within vulnerability assessments of climate change.O’Brien et al.(2005)argues that this diversity of approaches confuses policy-makers in this arena—research is often not explicit about whether it portrays vulnerability as an outcome or vulnerability as a context in which climate risks are dealt with and adapted to.The IPCC defines vulnerability within the latest assessment report (McCarthy et al.,2001)as‘the degree to which a system is susceptible to,or unable to 
cope with, adverse effects of climate change, including climate variability and extremes. Vulnerability is a function of the character, magnitude, and rate of climate variation to which a system is exposed, its sensitivity, and its adaptive capacity'. Vulnerability to climate change in this context is therefore defined as a characteristic of a system and as a
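The generic outcome-based metric of Luers et al. (2003) quoted earlier in this excerpt lends itself to a small numerical illustration. The sketch below is only that: the reading of "state relative to threshold" as a simple ratio, and every number in it, are my own assumptions for demonstration, not values taken from Luers et al. or from this paper.

```python
# Minimal sketch of the generic metric of Luers et al. (2003):
#     vulnerability = (sensitivity to stress / state relative to threshold)
#                     * probability of exposure to the stress
# ASSUMPTIONS: "state relative to threshold" is interpreted here as the
# ratio state / threshold, and all input values are invented.

def vulnerability(sensitivity: float, state: float,
                  threshold: float, p_exposure: float) -> float:
    """Vulnerability of a single outcome variable (e.g. crop yield).

    sensitivity : change in the outcome per unit of the stressor
    state       : current value of the outcome variable
    threshold   : critical value below which damage is judged to occur
    p_exposure  : probability that the stress occurs in the period considered
    """
    state_relative_to_threshold = state / threshold
    return (sensitivity / state_relative_to_threshold) * p_exposure

# Hypothetical irrigated wheat system: yield 4.0 t/ha, viability threshold
# 2.5 t/ha, 0.6 t/ha lost per unit of drought stress, 30% chance of drought.
v = vulnerability(sensitivity=0.6, state=4.0, threshold=2.5, p_exposure=0.3)
print(f"illustrative vulnerability score: {v:.3f}")
```

Used this way, the score rises as the outcome variable approaches its critical threshold, as sensitivity to the stressor grows, or as exposure becomes more likely, which is the behaviour the metric is meant to capture.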
The ability to view problems from multiple perspectives (in English)

English: The ability to perceive issues from multiple perspectives is crucial in navigating the complexities of our interconnected world. It involves adopting a flexible mindset that acknowledges diverse viewpoints, cultural backgrounds, and contextual factors. By cultivating this skill, individuals can transcend narrow-mindedness and develop a more holistic understanding of various situations. Multidimensional thinking enables one to anticipate potential challenges, identify innovative solutions, and foster effective collaboration across diverse teams. Moreover, it promotes empathy and fosters constructive dialogue, as individuals strive to comprehend others' viewpoints and find common ground amidst differences. In a rapidly changing global landscape, the capacity to see beyond one's own vantage point is essential for informed decision-making and sustainable problem-solving. Embracing a multidimensional perspective empowers individuals to navigate ambiguity with confidence, adapt to evolving circumstances, and contribute positively to societal progress.
arXiv:physics/0302066v1 [physics.gen-ph] 19 Feb 2003

Detection of the Neutrino Fluxes from Several Sources

D.L. Khokhlov
Sumy State University*
(Dated: February 2, 2008)

We consider the detection of neutrinos moving from opposite directions. The states of the particle of the detector interacting with the neutrinos are connected by the P-transformation. Hence only half of the neutrinos contribute to the superposition of the neutrino states. Taking into account the effect of the opposite neutrino directions, the total neutrino flux from several sources is in the range 0.5–1 of that without the effect. The neutrino flux from nuclear reactors measured in the KamLAND experiment is 0.611 ± 0.085(stat) ± 0.041(syst) of the expected flux. Calculations for the conditions of the KamLAND experiment yield a neutrino flux, taking into account the effect of the opposite neutrino directions, of 0.555 of that without the effect, which may account for the neutrino flux observed in the KamLAND experiment.

PACS numbers: 03.65.-w, 28.50.Hw

Let two equal neutrino fluxes move towards the neutrino detector from opposite directions. The first neutrino flux moves from the x direction, and the second neutrino flux moves from the −x direction. The probability of the weak interaction of the particle of the detector with a single neutrino is given by

w = |⟨ψ_1 ψ_2 | g | ψ_ν ψ_p⟩|²,  (1)

where g is the weak coupling, ψ_p is the state of the particle of the detector, ψ_ν is the state of the neutrino, and ψ_1, ψ_2 are the states of the particles arising in the interaction. The probability given by eq. (1) is too small to register the event in a single interaction of the particle of the detector with the neutrino. The probability of the event is a sum of the probabilities of a number of interactions. To define the probability of interaction of the particle of the detector with two neutrinos moving from opposite directions, one should form the superposition of the particle and neutrino states. The state of the particle interacting with the neutrino moving from the x direction and the state of the particle interacting with the neutrino moving from the −x direction are connected by the P-transformation

|ψ_p(−x)⟩ = P |ψ_p(x)⟩.  (2)

Then the superposition of the particle and neutrino states should be given by

|ψ⟩ = ½ |ψ_ν⟩ [ |ψ_p(x)⟩ + |ψ_p(−x)⟩ ] = ½ |ψ_ν⟩ [ |ψ_p(x)⟩ + P |ψ_p(x)⟩ ] = ½ [ |ψ_ν⟩ + P |ψ_ν⟩ ] |ψ_p(x)⟩.  (3)

[...]

where one should sum the fluxes F_ix, F_iy as vectors. One can see that the neutrino flux given by eq. (9) is within the range 0.5–1 of the neutrino flux given by eq. (5). In the KamLAND experiment [1], the flux of ν̄e's from distant nuclear reactors is measured. The ratio of the number of observed events to the expected number of events is reported [2] to be 0.611 ± 0.085(stat) ± 0.041(syst). Using the data on the location and thermal power of the reactors [3], one can estimate the neutrino flux from the reactors, which is approximately proportional to the thermal power flux P_th/4πL², where P_th is the reactor thermal power and L is the distance from the detector to the reactor. Hence one can estimate the effect of the opposite neutrino directions through the ratio of the neutrino flux calculated with eq. (9) and that calculated with eq. (5). More than 99% of the neutrino flux measured in the KamLAND experiment comes from Japanese and Korean nuclear reactors. So in the first approximation one can calculate the neutrino flux only from Japanese and Korean nuclear reactors. Calculations for the conditions of the KamLAND experiment yield the ratio of the neutrino flux taking into
account the effect of the opposite neutrino directions and that without this effect, 0.555. Thus the effect of the opposite neutrino directions may account for the neutrino flux observed in the KamLAND experiment. Note that the value 0.555 is obtained with the use of the specification data on the reactor thermal power. To obtain the real value one should use the operation data on the reactor thermal power for the period of measurement.

[1] http://www.awa.tohoku.ac.jp/KamLAND/
[2] K. Eguchi et al., KamLAND Collaboration, submitted to Phys. Rev. Lett., hep-ex/0212021.
[3] /
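Equations (4)–(9) of the original paper do not survive in this excerpt, so the sketch below can only illustrate the kind of comparison described: reactor fluxes proportional to P_th/(4πL²) combined once as a plain scalar sum and once with the arrival directions taken into account. The directional combination used here, F = ½(Σ|F_i| + |Σ F_i|) with the second term a vector sum, is my assumption, chosen because it reproduces the stated 0.5–1 range; the reactor powers, distances and bearings are invented and are not the Japanese and Korean reactor data behind the 0.555 result.

```python
# Hedged sketch of the flux comparison described in the text.
# ASSUMPTION: the directional combination is taken as
#     F_eff = 0.5 * (sum_i |F_i| + |vector_sum_i F_i|),
# which lies between 0.5 and 1.0 of the plain scalar sum, matching the
# range quoted in the abstract.  The actual eq. (9) is not in this excerpt,
# and the reactor data below are purely illustrative.
import math

# (thermal power in GW, distance in km, bearing from detector in degrees)
reactors = [
    (24.0, 160.0, 95.0),   # hypothetical plant roughly east of the detector
    (10.0, 180.0, 250.0),  # hypothetical plant roughly west (opposite side)
    (14.0, 350.0, 200.0),  # hypothetical distant plant to the south-west
]

def flux(power_gw: float, distance_km: float) -> float:
    """Un-normalised flux proportional to P_th / (4 pi L^2)."""
    return power_gw / (4.0 * math.pi * distance_km ** 2)

scalar_sum = sum(flux(p, L) for p, L, _ in reactors)

fx = sum(flux(p, L) * math.cos(math.radians(b)) for p, L, b in reactors)
fy = sum(flux(p, L) * math.sin(math.radians(b)) for p, L, b in reactors)
vector_sum = math.hypot(fx, fy)

f_eff = 0.5 * (scalar_sum + vector_sum)   # assumed form, see note above
print(f"ratio with directional effect / without: {f_eff / scalar_sum:.3f}")
```

Under this assumed form, sources all on one side of the detector give a ratio close to 1, while two equal fluxes from exactly opposite directions drive it down to 0.5, which is the range the paper quotes.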
Unit 1 Schooling
● Vocabulary (Passage 1)
1. striking  2. slender, impeccable  3. discernible  4. sloppy  5. sagacity  6. arrogance  7. vow  8. homonym  9. glistening  10. fix the blame on
● Vocabulary (Passage 2)
ABCAB DADDC
● Translation
1. I once had just such a strict taskmaster: an orchestra conductor. When someone played a wrong note, he would curse the player as an idiot; when someone played out of tune, he would stop conducting and roar. He was Jerry Kupchynsky, a Ukrainian immigrant.
2. The traditional view is that teachers should help students organize and make sense of knowledge rather than simply cram it into their heads. Homework assignments and group learning are the favoured methods. Traditional techniques such as lecturing and memorization are derided as "drill and kill", opposed, and dismissed as devouring the creativity and motivation of the younger generation in the name of doing things the right way.
3. Rote memorization is now offered as one explanation for why children from Indian families (whose powers of memory draw wide admiration) have trounced their rivals in the National Spelling Bee.
4. Of course, we also worry that failure will traumatize our children and undermine their self-esteem.
5. Researchers once believed that the most effective teachers led students to knowledge through group work and discussion.
Unit 2 Music
● Vocabulary (Passage 1)
1. molecular  2. hyperactive  3. integrated  4. retention  5. condense  6. clerical  7. alert  8. aesthetically  9. compelling  10. undeniably
● Vocabulary (Passage 2)
1-10 BDABC DABDC
● Translation
1. A new study debunks a notion cherished by some Americans: that music training can make children more intelligent.
2. On this point, only a few dozen studies have examined the rumored intellectual benefits of studying music, and of these only five used randomized controlled trials, that is, studies designed to isolate causal effects.
3. Although the effect lasted only about 15 minutes, it was touted as a lasting improvement in intelligence, and in 1998 Georgia's governor Zell Miller pledged more than $1,000,000 a year to provide every child in the state with a recording of classical music.
Network impacts of a road capacity reduction:Empirical analysisand model predictionsDavid Watling a ,⇑,David Milne a ,Stephen Clark baInstitute for Transport Studies,University of Leeds,Woodhouse Lane,Leeds LS29JT,UK b Leeds City Council,Leonardo Building,2Rossington Street,Leeds LS28HD,UKa r t i c l e i n f o Article history:Received 24May 2010Received in revised form 15July 2011Accepted 7September 2011Keywords:Traffic assignment Network models Equilibrium Route choice Day-to-day variabilitya b s t r a c tIn spite of their widespread use in policy design and evaluation,relatively little evidencehas been reported on how well traffic equilibrium models predict real network impacts.Here we present what we believe to be the first paper that together analyses the explicitimpacts on observed route choice of an actual network intervention and compares thiswith the before-and-after predictions of a network equilibrium model.The analysis isbased on the findings of an empirical study of the travel time and route choice impactsof a road capacity reduction.Time-stamped,partial licence plates were recorded across aseries of locations,over a period of days both with and without the capacity reduction,and the data were ‘matched’between locations using special-purpose statistical methods.Hypothesis tests were used to identify statistically significant changes in travel times androute choice,between the periods of days with and without the capacity reduction.A trafficnetwork equilibrium model was then independently applied to the same scenarios,and itspredictions compared with the empirical findings.From a comparison of route choice pat-terns,a particularly influential spatial effect was revealed of the parameter specifying therelative values of distance and travel time assumed in the generalised cost equations.When this parameter was ‘fitted’to the data without the capacity reduction,the networkmodel broadly predicted the route choice impacts of the capacity reduction,but with othervalues it was seen to perform poorly.The paper concludes by discussing the wider practicaland research implications of the study’s findings.Ó2011Elsevier Ltd.All rights reserved.1.IntroductionIt is well known that altering the localised characteristics of a road network,such as a planned change in road capacity,will tend to have both direct and indirect effects.The direct effects are imparted on the road itself,in terms of how it can deal with a given demand flow entering the link,with an impact on travel times to traverse the link at a given demand flow level.The indirect effects arise due to drivers changing their travel decisions,such as choice of route,in response to the altered travel times.There are many practical circumstances in which it is desirable to forecast these direct and indirect impacts in the context of a systematic change in road capacity.For example,in the case of proposed road widening or junction improvements,there is typically a need to justify econom-ically the required investment in terms of the benefits that will likely accrue.There are also several examples in which it is relevant to examine the impacts of road capacity reduction .For example,if one proposes to reallocate road space between alternative modes,such as increased bus and cycle lane provision or a pedestrianisation scheme,then typically a range of alternative designs exist which may differ in their ability to accommodate efficiently the new traffic and routing patterns.0965-8564/$-see front matter Ó2011Elsevier Ltd.All rights 
reserved.doi:10.1016/j.tra.2011.09.010⇑Corresponding author.Tel.:+441133436612;fax:+441133435334.E-mail address:d.p.watling@ (D.Watling).168 D.Watling et al./Transportation Research Part A46(2012)167–189Through mathematical modelling,the alternative designs may be tested in a simulated environment and the most efficient selected for implementation.Even after a particular design is selected,mathematical models may be used to adjust signal timings to optimise the use of the transport system.Road capacity may also be affected periodically by maintenance to essential services(e.g.water,electricity)or to the road itself,and often this can lead to restricted access over a period of days and weeks.In such cases,planning authorities may use modelling to devise suitable diversionary advice for drivers,and to plan any temporary changes to traffic signals or priorities.Berdica(2002)and Taylor et al.(2006)suggest more of a pro-ac-tive approach,proposing that models should be used to test networks for potential vulnerability,before any reduction mate-rialises,identifying links which if reduced in capacity over an extended period1would have a substantial impact on system performance.There are therefore practical requirements for a suitable network model of travel time and route choice impacts of capac-ity changes.The dominant method that has emerged for this purpose over the last decades is clearly the network equilibrium approach,as proposed by Beckmann et al.(1956)and developed in several directions since.The basis of using this approach is the proposition of what are believed to be‘rational’models of behaviour and other system components(e.g.link perfor-mance functions),with site-specific data used to tailor such models to particular case studies.Cross-sectional forecasts of network performance at specific road capacity states may then be made,such that at the time of any‘snapshot’forecast, drivers’route choices are in some kind of individually-optimum state.In this state,drivers cannot improve their route selec-tion by a unilateral change of route,at the snapshot travel time levels.The accepted practice is to‘validate’such models on a case-by-case basis,by ensuring that the model—when supplied with a particular set of parameters,input network data and input origin–destination demand data—reproduces current mea-sured mean link trafficflows and mean journey times,on a sample of links,to some degree of accuracy(see for example,the practical guidelines in TMIP(1997)and Highways Agency(2002)).This kind of aggregate level,cross-sectional validation to existing conditions persists across a range of network modelling paradigms,ranging from static and dynamic equilibrium (Florian and Nguyen,1976;Leonard and Tough,1979;Stephenson and Teply,1984;Matzoros et al.,1987;Janson et al., 1986;Janson,1991)to micro-simulation approaches(Laird et al.,1999;Ben-Akiva et al.,2000;Keenan,2005).While such an approach is plausible,it leaves many questions unanswered,and we would particularly highlight two: 1.The process of calibration and validation of a network equilibrium model may typically occur in a cycle.That is to say,having initially calibrated a model using the base data sources,if the subsequent validation reveals substantial discrep-ancies in some part of the network,it is then natural to adjust the model parameters(including perhaps even the OD matrix elements)until the model outputs better reflect the validation data.2In this process,then,we allow the adjustment of potentially a large number of network parameters 
and input data in order to replicate the validation data,yet these data themselves are highly aggregate,existing only at the link level.To be clear here,we are talking about a level of coarseness even greater than that in aggregate choice models,since we cannot even infer from link-level data the aggregate shares on alternative routes or OD movements.The question that arises is then:how many different combinations of parameters and input data values might lead to a similar link-level validation,and even if we knew the answer to this question,how might we choose between these alternative combinations?In practice,this issue is typically neglected,meaning that the‘valida-tion’is a rather weak test of the model.2.Since the data are cross-sectional in time(i.e.the aim is to reproduce current base conditions in equilibrium),then in spiteof the large efforts required in data collection,no empirical evidence is routinely collected regarding the model’s main purpose,namely its ability to predict changes in behaviour and network performance under changes to the network/ demand.This issue is exacerbated by the aggregation concerns in point1:the‘ambiguity’in choosing appropriate param-eter values to satisfy the aggregate,link-level,base validation strengthens the need to independently verify that,with the selected parameter values,the model responds reliably to changes.Although such problems–offitting equilibrium models to cross-sectional data–have long been recognised by practitioners and academics(see,e.g.,Goodwin,1998), the approach described above remains the state-of-practice.Having identified these two problems,how might we go about addressing them?One approach to thefirst problem would be to return to the underlying formulation of the network model,and instead require a model definition that permits analysis by statistical inference techniques(see for example,Nakayama et al.,2009).In this way,we may potentially exploit more information in the variability of the link-level data,with well-defined notions(such as maximum likelihood)allowing a systematic basis for selection between alternative parameter value combinations.However,this approach is still using rather limited data and it is natural not just to question the model but also the data that we use to calibrate and validate it.Yet this is not altogether straightforward to resolve.As Mahmassani and Jou(2000) remarked:‘A major difficulty...is obtaining observations of actual trip-maker behaviour,at the desired level of richness, simultaneously with measurements of prevailing conditions’.For this reason,several authors have turned to simulated gaming environments and/or stated preference techniques to elicit information on drivers’route choice behaviour(e.g. 1Clearly,more sporadic and less predictable reductions in capacity may also occur,such as in the case of breakdowns and accidents,and environmental factors such as severe weather,floods or landslides(see for example,Iida,1999),but the responses to such cases are outside the scope of the present paper. 
2Some authors have suggested more systematic,bi-level type optimization processes for thisfitting process(e.g.Xu et al.,2004),but this has no material effect on the essential points above.D.Watling et al./Transportation Research Part A46(2012)167–189169 Mahmassani and Herman,1990;Iida et al.,1992;Khattak et al.,1993;Vaughn et al.,1995;Wardman et al.,1997;Jou,2001; Chen et al.,2001).This provides potentially rich information for calibrating complex behavioural models,but has the obvious limitation that it is based on imagined rather than real route choice situations.Aside from its common focus on hypothetical decision situations,this latter body of work also signifies a subtle change of emphasis in the treatment of the overall network calibration problem.Rather than viewing the network equilibrium calibra-tion process as a whole,the focus is on particular components of the model;in the cases above,the focus is on that compo-nent concerned with how drivers make route decisions.If we are prepared to make such a component-wise analysis,then certainly there exists abundant empirical evidence in the literature,with a history across a number of decades of research into issues such as the factors affecting drivers’route choice(e.g.Wachs,1967;Huchingson et al.,1977;Abu-Eisheh and Mannering,1987;Duffell and Kalombaris,1988;Antonisse et al.,1989;Bekhor et al.,2002;Liu et al.,2004),the nature of travel time variability(e.g.Smeed and Jeffcoate,1971;Montgomery and May,1987;May et al.,1989;McLeod et al., 1993),and the factors affecting trafficflow variability(Bonsall et al.,1984;Huff and Hanson,1986;Ribeiro,1994;Rakha and Van Aerde,1995;Fox et al.,1998).While these works provide useful evidence for the network equilibrium calibration problem,they do not provide a frame-work in which we can judge the overall‘fit’of a particular network model in the light of uncertainty,ambient variation and systematic changes in network attributes,be they related to the OD demand,the route choice process,travel times or the network data.Moreover,such data does nothing to address the second point made above,namely the question of how to validate the model forecasts under systematic changes to its inputs.The studies of Mannering et al.(1994)and Emmerink et al.(1996)are distinctive in this context in that they address some of the empirical concerns expressed in the context of travel information impacts,but their work stops at the stage of the empirical analysis,without a link being made to net-work prediction models.The focus of the present paper therefore is both to present thefindings of an empirical study and to link this empirical evidence to network forecasting models.More recently,Zhu et al.(2010)analysed several sources of data for evidence of the traffic and behavioural impacts of the I-35W bridge collapse in Minneapolis.Most pertinent to the present paper is their location-specific analysis of linkflows at 24locations;by computing the root mean square difference inflows between successive weeks,and comparing the trend for 2006with that for2007(the latter with the bridge collapse),they observed an apparent transient impact of the bridge col-lapse.They also showed there was no statistically-significant evidence of a difference in the pattern offlows in the period September–November2007(a period starting6weeks after the bridge collapse),when compared with the corresponding period in2006.They suggested that this was indicative of the length of a‘re-equilibration process’in a conceptual sense, though did not explicitly 
compare their empiricalfindings with those of a network equilibrium model.The structure of the remainder of the paper is as follows.In Section2we describe the process of selecting the real-life problem to analyse,together with the details and rationale behind the survey design.Following this,Section3describes the statistical techniques used to extract information on travel times and routing patterns from the survey data.Statistical inference is then considered in Section4,with the aim of detecting statistically significant explanatory factors.In Section5 comparisons are made between the observed network data and those predicted by a network equilibrium model.Finally,in Section6the conclusions of the study are highlighted,and recommendations made for both practice and future research.2.Experimental designThe ultimate objective of the study was to compare actual data with the output of a traffic network equilibrium model, specifically in terms of how well the equilibrium model was able to correctly forecast the impact of a systematic change ap-plied to the network.While a wealth of surveillance data on linkflows and travel times is routinely collected by many local and national agencies,we did not believe that such data would be sufficiently informative for our purposes.The reason is that while such data can often be disaggregated down to small time step resolutions,the data remains aggregate in terms of what it informs about driver response,since it does not provide the opportunity to explicitly trace vehicles(even in aggre-gate form)across more than one location.This has the effect that observed differences in linkflows might be attributed to many potential causes:it is especially difficult to separate out,say,ambient daily variation in the trip demand matrix from systematic changes in route choice,since both may give rise to similar impacts on observed linkflow patterns across re-corded sites.While methods do exist for reconstructing OD and network route patterns from observed link data(e.g.Yang et al.,1994),these are typically based on the premise of a valid network equilibrium model:in this case then,the data would not be able to give independent information on the validity of the network equilibrium approach.For these reasons it was decided to design and implement a purpose-built survey.However,it would not be efficient to extensively monitor a network in order to wait for something to happen,and therefore we required advance notification of some planned intervention.For this reason we chose to study the impact of urban maintenance work affecting the roads,which UK local government authorities organise on an annual basis as part of their‘Local Transport Plan’.The city council of York,a historic city in the north of England,agreed to inform us of their plans and to assist in the subsequent data collection exercise.Based on the interventions planned by York CC,the list of candidate studies was narrowed by considering factors such as its propensity to induce significant re-routing and its impact on the peak periods.Effectively the motivation here was to identify interventions that were likely to have a large impact on delays,since route choice impacts would then likely be more significant and more easily distinguished from ambient variability.This was notably at odds with the objectives of York CC,170 D.Watling et al./Transportation Research Part A46(2012)167–189in that they wished to minimise disruption,and so where possible York CC planned interventions to take place at times of day and 
of the year where impacts were minimised;therefore our own requirement greatly reduced the candidate set of studies to monitor.A further consideration in study selection was its timing in the year for scheduling before/after surveys so to avoid confounding effects of known significant‘seasonal’demand changes,e.g.the impact of the change between school semesters and holidays.A further consideration was York’s role as a major tourist attraction,which is also known to have a seasonal trend.However,the impact on car traffic is relatively small due to the strong promotion of public trans-port and restrictions on car travel and parking in the historic centre.We felt that we further mitigated such impacts by sub-sequently choosing to survey in the morning peak,at a time before most tourist attractions are open.Aside from the question of which intervention to survey was the issue of what data to collect.Within the resources of the project,we considered several options.We rejected stated preference survey methods as,although they provide a link to personal/socio-economic drivers,we wanted to compare actual behaviour with a network model;if the stated preference data conflicted with the network model,it would not be clear which we should question most.For revealed preference data, options considered included(i)self-completion diaries(Mahmassani and Jou,2000),(ii)automatic tracking through GPS(Jan et al.,2000;Quiroga et al.,2000;Taylor et al.,2000),and(iii)licence plate surveys(Schaefer,1988).Regarding self-comple-tion surveys,from our own interview experiments with self-completion questionnaires it was evident that travellersfind it relatively difficult to recall and describe complex choice options such as a route through an urban network,giving the po-tential for significant errors to be introduced.The automatic tracking option was believed to be the most attractive in this respect,in its potential to accurately map a given individual’s journey,but the negative side would be the potential sample size,as we would need to purchase/hire and distribute the devices;even with a large budget,it is not straightforward to identify in advance the target users,nor to guarantee their cooperation.Licence plate surveys,it was believed,offered the potential for compromise between sample size and data resolution: while we could not track routes to the same resolution as GPS,by judicious location of surveyors we had the opportunity to track vehicles across more than one location,thus providing route-like information.With time-stamped licence plates, the matched data would also provide journey time information.The negative side of this approach is the well-known poten-tial for significant recording errors if large sample rates are required.Our aim was to avoid this by recording only partial licence plates,and employing statistical methods to remove the impact of‘spurious matches’,i.e.where two different vehi-cles with the same partial licence plate occur at different locations.Moreover,extensive simulation experiments(Watling,1994)had previously shown that these latter statistical methods were effective in recovering the underlying movements and travel times,even if only a relatively small part of the licence plate were recorded,in spite of giving a large potential for spurious matching.We believed that such an approach reduced the opportunity for recorder error to such a level to suggest that a100%sample rate of vehicles passing may be feasible.This was tested in a pilot study conducted by the project team,with 
dictaphones used to record a100%sample of time-stamped, partial licence plates.Independent,duplicate observers were employed at the same location to compare error rates;the same study was also conducted with full licence plates.The study indicated that100%surveys with dictaphones would be feasible in moderate trafficflow,but only if partial licence plate data were used in order to control observation errors; for higherflow rates or to obtain full number plate data,video surveys should be considered.Other important practical les-sons learned from the pilot included the need for clarity in terms of vehicle types to survey(e.g.whether to include motor-cycles and taxis),and of the phonetic alphabet used by surveyors to avoid transcription ambiguities.Based on the twin considerations above of planned interventions and survey approach,several candidate studies were identified.For a candidate study,detailed design issues involved identifying:likely affected movements and alternative routes(using local knowledge of York CC,together with an existing network model of the city),in order to determine the number and location of survey sites;feasible viewpoints,based on site visits;the timing of surveys,e.g.visibility issues in the dark,winter evening peak period;the peak duration from automatic trafficflow data;and specific survey days,in view of public/school holidays.Our budget led us to survey the majority of licence plate sites manually(partial plates by audio-tape or,in lowflows,pen and paper),with video surveys limited to a small number of high-flow sites.From this combination of techniques,100%sampling rate was feasible at each site.Surveys took place in the morning peak due both to visibility considerations and to minimise conflicts with tourist/special event traffic.From automatic traffic count data it was decided to survey the period7:45–9:15as the main morning peak period.This design process led to the identification of two studies:2.1.Lendal Bridge study(Fig.1)Lendal Bridge,a critical part of York’s inner ring road,was scheduled to be closed for maintenance from September2000 for a duration of several weeks.To avoid school holidays,the‘before’surveys were scheduled for June and early September.It was decided to focus on investigating a significant southwest-to-northeast movement of traffic,the river providing a natural barrier which suggested surveying the six river crossing points(C,J,H,K,L,M in Fig.1).In total,13locations were identified for survey,in an attempt to capture traffic on both sides of the river as well as a crossing.2.2.Fishergate study(Fig.2)The partial closure(capacity reduction)of the street known as Fishergate,again part of York’s inner ring road,was scheduled for July2001to allow repairs to a collapsed sewer.Survey locations were chosen in order to intercept clockwiseFig.1.Intervention and survey locations for Lendal Bridge study.around the inner ring road,this being the direction of the partial closure.A particular aim wasFulford Road(site E in Fig.2),the main radial affected,with F and K monitoring local diversion I,J to capture wider-area diversion.studies,the plan was to survey the selected locations in the morning peak over a period of approximately covering the three periods before,during and after the intervention,with the days selected so holidays or special events.Fig.2.Intervention and survey locations for Fishergate study.In the Lendal Bridge study,while the‘before’surveys proceeded as planned,the bridge’s actualfirst day of closure on Sep-tember11th2000also 
marked the beginning of the UK fuel protests(BBC,2000a;Lyons and Chaterjee,2002).Trafficflows were considerably affected by the scarcity of fuel,with congestion extremely low in thefirst week of closure,to the extent that any changes could not be attributed to the bridge closure;neither had our design anticipated how to survey the impacts of the fuel shortages.We thus re-arranged our surveys to monitor more closely the planned re-opening of the bridge.Unfor-tunately these surveys were hampered by a second unanticipated event,namely the wettest autumn in the UK for270years and the highest level offlooding in York since records began(BBC,2000b).Theflooding closed much of the centre of York to road traffic,including our study area,as the roads were impassable,and therefore we abandoned the planned‘after’surveys. As a result of these events,the useable data we had(not affected by the fuel protests orflooding)consisted offive‘before’days and one‘during’day.In the Fishergate study,fortunately no extreme events occurred,allowing six‘before’and seven‘during’days to be sur-veyed,together with one additional day in the‘during’period when the works were temporarily removed.However,the works over-ran into the long summer school holidays,when it is well-known that there is a substantial seasonal effect of much lowerflows and congestion levels.We did not believe it possible to meaningfully isolate the impact of the link fully re-opening while controlling for such an effect,and so our plans for‘after re-opening’surveys were abandoned.3.Estimation of vehicle movements and travel timesThe data resulting from the surveys described in Section2is in the form of(for each day and each study)a set of time-stamped,partial licence plates,observed at a number of locations across the network.Since the data include only partial plates,they cannot simply be matched across observation points to yield reliable estimates of vehicle movements,since there is ambiguity in whether the same partial plate observed at different locations was truly caused by the same vehicle. 
Indeed, since the observed system is 'open'—in the sense that not all points of entry, exit, generation and attraction are monitored—the question is not just which of several potential matches to accept, but also whether there is any match at all. That is to say, an apparent match between data at two observation points could be caused by two separate vehicles that passed no other observation point. The first stage of analysis therefore applied a series of specially-designed statistical techniques to reconstruct the vehicle movements and point-to-point travel time distributions from the observed data, allowing for all such ambiguities in the data. Although the detailed derivations of each method are not given here, since they may be found in the references provided, it is necessary to understand some of the characteristics of each method in order to interpret the results subsequently provided. Furthermore, since some of the basic techniques required modification relative to the published descriptions, then in order to explain these adaptations it is necessary to understand some of the theoretical basis.

3.1. Graphical method for estimating point-to-point travel time distributions

The preliminary technique applied to each data set was the graphical method described in Watling and Maher (1988). This method is derived for analysing partial registration plate data for unidirectional movement between a pair of observation stations (referred to as an 'origin' and a 'destination'). Thus in the data study here, it must be independently applied to given pairs of observation stations, without regard for the interdependencies between observation station pairs. On the other hand, it makes no assumption that the system is 'closed'; there may be vehicles that pass the origin that do not pass the destination, and vice versa. While limited in considering only two-point surveys, the attraction of the graphical technique is that it is a non-parametric method, with no assumptions made about the arrival time distributions at the observation points (they may be non-uniform in particular), and no assumptions made about the journey time probability density. It is therefore very suitable as a first means of investigative analysis for such data. The method begins by forming all pairs of possible matches in the data, of which some will be genuine matches (the pair of observations were due to a single vehicle) and the remainder spurious matches. Thus, for example, if there are three origin observations and two destination observations of a particular partial registration number, then six possible matches may be formed, of which clearly no more than two can be genuine (and possibly only one or zero are genuine). A scatter plot may then be drawn for each possible match of the observation time at the origin versus that at the destination. The characteristic pattern of such a plot is as that shown in Fig. 4a, with a dense 'line' of points (which will primarily be the genuine matches) superimposed upon a scatter of points over the whole region (which will primarily be the spurious matches). If we were to assume uniform arrival rates at the observation stations, then the spurious matches would be uniformly distributed over this plot; however, we shall avoid making such a restrictive assumption. The method begins by making a coarse estimate of the total number of genuine matches across the whole of this plot. As part of this analysis we then assume knowledge of, for any randomly selected vehicle, the probabilities:

h_k = Pr(vehicle is of the k-th type of partial registration plate)   (k = 1, 2, ..., m),   where   sum_{k=1}^{m} h_k = 1.
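To make the matching ambiguity concrete, the following minimal sketch (illustrative only: the station data, plate fragments and the 30-minute cap on journey times are invented assumptions, not part of the Watling and Maher method) enumerates every candidate origin-to-destination pairing of the same partial plate together with the travel time it would imply, i.e. the raw material for the scatter plot described above.

from itertools import product

# Each observation: (time in seconds since survey start, partial plate string).
# Station contents here are hypothetical example data.
origin_obs = [(100, "AB1"), (460, "AB1"), (900, "AB1"), (120, "XY7")]
dest_obs = [(350, "AB1"), (1130, "AB1"), (400, "XY7")]

def candidate_matches(origin_obs, dest_obs, max_journey=1800):
    """Form every origin-destination pairing that shares a partial plate.

    Some pairings are genuine (one vehicle seen at both stations), the rest
    are spurious; a maximum plausible journey time simply trims the search.
    """
    candidates = []
    for (t_o, p_o), (t_d, p_d) in product(origin_obs, dest_obs):
        if p_o == p_d and 0 < t_d - t_o <= max_journey:
            candidates.append((p_o, t_o, t_d, t_d - t_o))
    return candidates

for plate, t_o, t_d, tt in candidate_matches(origin_obs, dest_obs):
    print(f"plate {plate}: origin {t_o:>5}s -> destination {t_d:>5}s, implied travel time {tt}s")

The dense diagonal band formed by the genuine matches in such a scatter of origin and destination times is what the graphical method then separates from the background scatter of spurious matches.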
History of infrared detectors

A. ROGALSKI*
Institute of Applied Physics, Military University of Technology, 2 Kaliskiego Str., 00–908 Warsaw, Poland

This paper overviews the history of infrared detector materials starting with Herschel's experiment with a thermometer on February 11th, 1800. Infrared detectors are in general used to detect, image, and measure patterns of the thermal heat radiation which all objects emit. At the beginning, their development was connected with thermal detectors, such as thermocouples and bolometers, which are still used today and which are generally sensitive to all infrared wavelengths and operate at room temperature. The second kind of detectors, called photon detectors, was mainly developed during the 20th century to improve sensitivity and response time. These detectors have been extensively developed since the 1940s. Lead sulphide (PbS) was the first practical IR detector, with sensitivity to infrared wavelengths up to ~3 μm. After World War II infrared detector technology development was, and continues to be, primarily driven by military applications. Discovery of the variable band gap HgCdTe ternary alloy by Lawson and co-workers in 1959 opened a new area in IR detector technology and has provided an unprecedented degree of freedom in infrared detector design. Many of these advances were transferred to IR astronomy from Departments of Defence research. Later on, civilian applications of infrared technology are frequently called "dual-use technology applications." One should point out the growing utilisation of IR technologies in the civilian sphere based on the use of new materials and technologies, as well as the noticeable price decrease in these high cost technologies. In the last four decades different types of detectors have been combined with electronic readouts to make detector focal plane arrays (FPAs). Development in FPA technology has revolutionized infrared imaging. Progress in integrated circuit design and fabrication techniques has resulted in continued rapid growth in the size and performance of these solid state arrays.

Keywords: thermal and photon detectors, lead salt detectors, HgCdTe detectors, microbolometers, focal plane arrays.

Contents
1. Introduction
2. Historical perspective
3. Classification of infrared detectors
3.1. Photon detectors
3.2. Thermal detectors
4. Post-War activity
5. HgCdTe era
6. Alternative material systems
6.1. InSb and InGaAs
6.2. GaAs/AlGaAs quantum well superlattices
6.3. InAs/GaInSb strained layer superlattices
6.4. Hg-based alternatives to HgCdTe
7. New revolution in thermal detectors
8. Focal plane arrays – revolution in imaging systems
8.1. Cooled FPAs
8.2. Uncooled FPAs
8.3. Readiness level of LWIR detector technologies
9. Summary
References

1. Introduction

Looking back over the past 1000 years we notice that infrared radiation (IR) itself was unknown until 212 years ago, when Herschel's experiment with a thermometer and prism was first reported. Frederick William Herschel (1738–1822) was born in Hanover, Germany but emigrated to Britain at age 19, where he became well known as both a musician and an astronomer. Herschel became most famous for the discovery of Uranus in 1781 (the first new planet found since antiquity) in addition to two of its major moons, Titania and Oberon. He also discovered two moons of Saturn and infrared radiation. Herschel is also known for the twenty-four symphonies that he composed.

W. Herschel made another milestone discovery – the discovery of infrared light – on February 11th, 1800. He studied the spectrum of sunlight with a prism [see Fig. 1 in Ref. 1], measuring the temperature of each colour. The detector
consisted of liquid in a glass thermometer with a specially blackened bulb to absorb radiation. Herschel built a crude monochromator that used a thermometer as a detector, so that he could measure the distribution of energy in sunlight, and found that the highest temperature was just beyond the red, what we now call the infrared ('below the red', from the Latin 'infra') – see Fig. 1(b) [2]. In April 1800 he reported it to the Royal Society as dark heat (Ref. 1, pp. 288–290):

Here the thermometer No. 1 rose 7 degrees, in 10 minutes, by an exposure to the full red coloured rays. I drew back the stand, till the centre of the ball of No. 1 was just at the vanishing of the red colour, so that half its ball was within, and half without, the visible rays of the sun. And here the thermometer, in 16 minutes, rose ... degrees, when its centre was ... inch out of the rays of the sun; ... as had a rising of 9 degrees, and here the difference is almost too trifling to suppose, that latter situation of the thermometer was much beyond the maximum of the heating power; while, at the same time, the experiment sufficiently indicates, that the place inquired after need not be looked for at a greater distance.

Making further experiments on what Herschel called the 'calorific rays' that existed beyond the red part of the spectrum, he found that they were reflected, refracted, absorbed and transmitted just like visible light [1,3,4].

The early history of IR was reviewed about 50 years ago in three well-known monographs [5–7]. Much historical information can also be found in four papers published by Barr [3,4,8,9] and in a more recently published monograph [10]. Table 1 summarises the historical development of infrared physics and technology [11,12].

2. Historical perspective

For thirty years following Herschel's discovery, very little progress was made beyond establishing that the infrared radiation obeyed the simplest laws of optics. Slow progress in the study of infrared was caused by the lack of sensitive and accurate detectors – the experimenters were handicapped by the ordinary thermometer. However, towards the second decade of the 19th century, Thomas Johann Seebeck began to examine the junction behaviour of electrically conductive materials. In 1821 he discovered that a small electric current will flow in a closed circuit of two dissimilar metallic conductors when their junctions are kept at different temperatures [13]. During that time, most physicists thought that radiant heat and light were different phenomena, and the discovery of Seebeck indirectly contributed to a revival of the debate on the nature of heat. The small output voltage of Seebeck's junctions, some μV/K, prevented the measurement of very small temperature differences. In 1829 L. Nobili made the first thermocouple and improved electrical thermometer based on the thermoelectric effect discovered by Seebeck in 1826. Four years later, M. Melloni introduced the idea of connecting several bismuth–copper thermocouples in series, generating a higher and, therefore, measurable output voltage. It was at least 40 times more sensitive than the best thermometer available and could detect the heat from a person at a distance of 30 ft [8]. The output voltage of such a thermopile structure increases linearly with the number of connected thermocouples. An example of the thermopile prototype invented by Nobili is shown in Fig. 2(a). It consists of twelve large bismuth and antimony elements. The elements were placed upright in a brass ring secured to an
adjustable support,and were screened by a wooden disk with a 15−mm central aperture.Incomplete version of the Nobili−Melloni thermopile originally fitted with the brass cone−shaped tubes to collect ra−diant heat is shown in Fig.2(b).This instrument was much more sensi−tive than the thermometers previously used and became the most widely used detector of IR radiation for the next half century.The third member of the trio,Langley’s bolometer appea−red in 1880[7].Samuel Pierpont Langley (1834–1906)used two thin ribbons of platinum foil connected so as to form two arms of a Wheatstone bridge (see Fig.3)[15].This instrument enabled him to study solar irradiance far into its infrared region and to measure theintensityof solar radia−tion at various wavelengths [9,16,17].The bolometer’s sen−History of infrared detectorsFig.1.Herschel’s first experiment:A,B –the small stand,1,2,3–the thermometers upon it,C,D –the prism at the window,E –the spec−trum thrown upon the table,so as to bring the last quarter of an inch of the read colour upon the stand (after Ref.1).InsideSir FrederickWilliam Herschel (1738–1822)measures infrared light from the sun– artist’s impression (after Ref. 2).Fig.2.The Nobili−Meloni thermopiles:(a)thermopile’s prototype invented by Nobili (ca.1829),(b)incomplete version of the Nobili−−Melloni thermopile (ca.1831).Museo Galileo –Institute and Museum of the History of Science,Piazza dei Giudici 1,50122Florence, Italy (after Ref. 14).Table 1. Milestones in the development of infrared physics and technology (up−dated after Refs. 11 and 12)Year Event1800Discovery of the existence of thermal radiation in the invisible beyond the red by W. HERSCHEL1821Discovery of the thermoelectric effects using an antimony−copper pair by T.J. SEEBECK1830Thermal element for thermal radiation measurement by L. NOBILI1833Thermopile consisting of 10 in−line Sb−Bi thermal pairs by L. NOBILI and M. MELLONI1834Discovery of the PELTIER effect on a current−fed pair of two different conductors by J.C. PELTIER1835Formulation of the hypothesis that light and electromagnetic radiation are of the same nature by A.M. AMPERE1839Solar absorption spectrum of the atmosphere and the role of water vapour by M. MELLONI1840Discovery of the three atmospheric windows by J. HERSCHEL (son of W. HERSCHEL)1857Harmonization of the three thermoelectric effects (SEEBECK, PELTIER, THOMSON) by W. THOMSON (Lord KELVIN)1859Relationship between absorption and emission by G. KIRCHHOFF1864Theory of electromagnetic radiation by J.C. MAXWELL1873Discovery of photoconductive effect in selenium by W. SMITH1876Discovery of photovoltaic effect in selenium (photopiles) by W.G. ADAMS and A.E. DAY1879Empirical relationship between radiation intensity and temperature of a blackbody by J. STEFAN1880Study of absorption characteristics of the atmosphere through a Pt bolometer resistance by S.P. LANGLEY1883Study of transmission characteristics of IR−transparent materials by M. MELLONI1884Thermodynamic derivation of the STEFAN law by L. BOLTZMANN1887Observation of photoelectric effect in the ultraviolet by H. HERTZ1890J. ELSTER and H. GEITEL constructed a photoemissive detector consisted of an alkali−metal cathode1894, 1900Derivation of the wavelength relation of blackbody radiation by J.W. RAYEIGH and W. WIEN1900Discovery of quantum properties of light by M. PLANCK1903Temperature measurements of stars and planets using IR radiometry and spectrometry by W.W. COBLENTZ1905 A. EINSTEIN established the theory of photoelectricity1911R. 
ROSLING made the first television image tube on the principle of cathode ray tubes constructed by F. Braun in 18971914Application of bolometers for the remote exploration of people and aircrafts ( a man at 200 m and a plane at 1000 m)1917T.W. CASE developed the first infrared photoconductor from substance composed of thallium and sulphur1923W. SCHOTTKY established the theory of dry rectifiers1925V.K. ZWORYKIN made a television image tube (kinescope) then between 1925 and 1933, the first electronic camera with the aid of converter tube (iconoscope)1928Proposal of the idea of the electro−optical converter (including the multistage one) by G. HOLST, J.H. DE BOER, M.C. TEVES, and C.F. VEENEMANS1929L.R. KOHLER made a converter tube with a photocathode (Ag/O/Cs) sensitive in the near infrared1930IR direction finders based on PbS quantum detectors in the wavelength range 1.5–3.0 μm for military applications (GUDDEN, GÖRLICH and KUTSCHER), increased range in World War II to 30 km for ships and 7 km for tanks (3–5 μm)1934First IR image converter1939Development of the first IR display unit in the United States (Sniperscope, Snooperscope)1941R.S. OHL observed the photovoltaic effect shown by a p−n junction in a silicon1942G. EASTMAN (Kodak) offered the first film sensitive to the infrared1947Pneumatically acting, high−detectivity radiation detector by M.J.E. GOLAY1954First imaging cameras based on thermopiles (exposure time of 20 min per image) and on bolometers (4 min)1955Mass production start of IR seeker heads for IR guided rockets in the US (PbS and PbTe detectors, later InSb detectors for Sidewinder rockets)1957Discovery of HgCdTe ternary alloy as infrared detector material by W.D. LAWSON, S. NELSON, and A.S. YOUNG1961Discovery of extrinsic Ge:Hg and its application (linear array) in the first LWIR FLIR systems1965Mass production start of IR cameras for civil applications in Sweden (single−element sensors with optomechanical scanner: AGA Thermografiesystem 660)1970Discovery of charge−couple device (CCD) by W.S. BOYLE and G.E. SMITH1970Production start of IR sensor arrays (monolithic Si−arrays: R.A. SOREF 1968; IR−CCD: 1970; SCHOTTKY diode arrays: F.D.SHEPHERD and A.C. YANG 1973; IR−CMOS: 1980; SPRITE: T. ELIOTT 1981)1975Lunch of national programmes for making spatially high resolution observation systems in the infrared from multielement detectors integrated in a mini cooler (so−called first generation systems): common module (CM) in the United States, thermal imaging commonmodule (TICM) in Great Britain, syteme modulaire termique (SMT) in France1975First In bump hybrid infrared focal plane array1977Discovery of the broken−gap type−II InAs/GaSb superlattices by G.A. SAI−HALASZ, R. TSU, and L. ESAKI1980Development and production of second generation systems [cameras fitted with hybrid HgCdTe(InSb)/Si(readout) FPAs].First demonstration of two−colour back−to−back SWIR GaInAsP detector by J.C. CAMPBELL, A.G. DENTAI, T.P. LEE,and C.A. BURRUS1985Development and mass production of cameras fitted with Schottky diode FPAs (platinum silicide)1990Development and production of quantum well infrared photoconductor (QWIP) hybrid second generation systems1995Production start of IR cameras with uncooled FPAs (focal plane arrays; microbolometer−based and pyroelectric)2000Development and production of third generation infrared systemssitivity was much greater than that of contemporary thermo−piles which were little improved since their use by Melloni. 
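To give a feel for why Langley's arrangement was so sensitive, the short sketch below works through the basic bolometer signal chain: a temperature rise changes the resistance of the platinum strip, which unbalances the Wheatstone bridge. The supply voltage, strip resistance and temperature rises used here are illustrative assumptions, not values taken from Langley's instrument.

# Illustrative bolometer signal estimate (all numbers are assumptions, not Langley's).
ALPHA_PT = 3.85e-3      # temperature coefficient of resistance of platinum, 1/K
R0 = 100.0              # strip resistance at the reference temperature, ohms
V_SUPPLY = 1.0          # bridge excitation voltage, volts

def bridge_output(delta_T):
    """Approximate output of a balanced Wheatstone bridge with one active arm.

    For a small fractional resistance change x = dR/R0, the differential
    output is roughly V_supply * x / 4 (first-order approximation).
    """
    x = ALPHA_PT * delta_T
    return V_SUPPLY * x / 4.0

for dT in (1e-3, 1e-4, 1e-5):   # temperature rises of 1 mK down to 10 uK
    print(f"dT = {dT:.0e} K  ->  bridge output ~ {bridge_output(dT) * 1e6:.4f} uV")

The point is simply that millikelvin-level heating of the strip already yields a signal of the order of a microvolt at the bridge output, which is the kind of margin the bolometer offered over the thermopiles of the day.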
Langley continued to develop his bolometer for the next20 years(400times more sensitive than his first efforts).His latest bolometer could detect the heat from a cow at a dis−tance of quarter of mile [9].From the above information results that at the beginning the development of the IR detectors was connected with ther−mal detectors.The first photon effect,photoconductive ef−fect,was discovered by Smith in1873when he experimented with selenium as an insulator for submarine cables[18].This discovery provided a fertile field of investigation for several decades,though most of the efforts were of doubtful quality. By1927,over1500articles and100patents were listed on photosensitive selenium[19].It should be mentioned that the literature of the early1900’s shows increasing interest in the application of infrared as solution to numerous problems[7].A special contribution of William Coblenz(1873–1962)to infrared radiometry and spectroscopy is marked by huge bib−liography containing hundreds of scientific publications, talks,and abstracts to his credit[20,21].In1915,W.Cob−lentz at the US National Bureau of Standards develops ther−mopile detectors,which he uses to measure the infrared radi−ation from110stars.However,the low sensitivity of early in−frared instruments prevented the detection of other near−IR sources.Work in infrared astronomy remained at a low level until breakthroughs in the development of new,sensitive infrared detectors were achieved in the late1950’s.The principle of photoemission was first demonstrated in1887when Hertz discovered that negatively charged par−ticles were emitted from a conductor if it was irradiated with ultraviolet[22].Further studies revealed that this effect could be produced with visible radiation using an alkali metal electrode [23].Rectifying properties of semiconductor−metal contact were discovered by Ferdinand Braun in1874[24],when he probed a naturally−occurring lead sulphide(galena)crystal with the point of a thin metal wire and noted that current flowed freely in one direction only.Next,Jagadis Chandra Bose demonstrated the use of galena−metal point contact to detect millimetre electromagnetic waves.In1901he filed a U.S patent for a point−contact semiconductor rectifier for detecting radio signals[25].This type of contact called cat’s whisker detector(sometimes also as crystal detector)played serious role in the initial phase of radio development.How−ever,this contact was not used in a radiation detector for the next several decades.Although crystal rectifiers allowed to fabricate simple radio sets,however,by the mid−1920s the predictable performance of vacuum−tubes replaced them in most radio applications.The period between World Wars I and II is marked by the development of photon detectors and image converters and by emergence of infrared spectroscopy as one of the key analytical techniques available to chemists.The image con−verter,developed on the eve of World War II,was of tre−mendous interest to the military because it enabled man to see in the dark.The first IR photoconductor was developed by Theodore W.Case in1917[26].He discovered that a substance com−posed of thallium and sulphur(Tl2S)exhibited photocon−ductivity.Supported by the US Army between1917and 1918,Case adapted these relatively unreliable detectors for use as sensors in an infrared signalling device[27].The pro−totype signalling system,consisting of a60−inch diameter searchlight as the source of radiation and a thallous sulphide detector at the focus of a24−inch diameter paraboloid 
mirror, sent messages 18 miles through what was described as 'smoky atmosphere' in 1917. However, instability of resistance in the presence of light or polarizing voltage, loss of responsivity due to over-exposure to light, high noise, sluggish response and lack of reproducibility seemed to be inherent weaknesses. Work was discontinued in 1918; communication by the detection of infrared radiation appeared distinctly unpromising. Later Case found that the addition of oxygen greatly enhanced the response [28].

The idea of the electro-optical converter, including the multistage one, was proposed by Holst et al. in 1928 [29]. The first attempt to make the converter was not successful. A working tube, consisting of a photocathode in close proximity to a fluorescent screen, was made by the authors in 1934 at the Philips firm.

In about 1930, the appearance of the Cs-O-Ag phototube, with stable characteristics, to a great extent discouraged further development of photoconductive cells until about 1940.

Fig. 3. Langley's bolometer: (a) composed of two sets of thin platinum strips, (b) a Wheatstone bridge, a battery, and a galvanometer measuring electrical current (after Refs. 15 and 16).

The Cs-O-Ag photocathode (also called S-1) elaborated by Koller and Campbell [30] had a quantum efficiency two orders of magnitude above anything previously studied, and consequently a new era in photoemissive devices was inaugurated [31]. In the same year, the Japanese scientists S. Asao and M. Suzuki reported a method for enhancing the sensitivity of silver in the S-1 photocathode [32]. Consisting of a layer of caesium on oxidized silver, S-1 is sensitive, with useful response in the near infrared, out to approximately 1.2 μm, and in the visible and ultraviolet region, down to 0.3 μm.

Probably the most significant IR development in the United States during the 1930s was the Radio Corporation of America (RCA) IR image tube. During World War II, near-IR (NIR) cathodes were coupled to visible phosphors to provide a NIR image converter. With the establishment of the National Defence Research Committee, the development of this tube was accelerated. In 1942, the tube went into production as the RCA 1P25 image converter (see Fig. 4). This was one of the tubes used during World War II as a part of the "Snooperscope" and "Sniperscope," which were used for night observation with infrared sources of illumination. Since then various photocathodes have been developed, including bialkali photocathodes for the visible region, multialkali photocathodes with high sensitivity extending to the infrared region, and alkali halide photocathodes intended for ultraviolet detection.

The early concepts of image intensification were not basically different from those of today. However, the early devices suffered from two major deficiencies: poor photocathodes and poor coupling. Later development of both cathode and coupling technologies changed the image intensifier into a much more useful device. The concept of image intensification by cascading stages was suggested independently by a number of workers. In Great Britain, the work was directed toward proximity focused tubes, while in the United States and in Germany toward electrostatically focused tubes. A history of night vision imaging devices is given by Biberman and Sendall in the monograph Electro-Optical Imaging: System Performance and Modelling, SPIE Press, 2000 [10]. Biberman's monograph describes the basic trends of infrared optoelectronics development in the USA, Great Britain, France, and Germany. Seven years later Ponomarenko and Filachev completed this monograph, writing the book
Infrared Techniques and Electro−Optics in Russia:A History1946−2006,SPIE Press,about achieve−ments of IR techniques and electrooptics in the former USSR and Russia [33].In the early1930’s,interest in improved detectors began in Germany[27,34,35].In1933,Edgar W.Kutzscher at the University of Berlin,discovered that lead sulphide(from natural galena found in Sardinia)was photoconductive and had response to about3μm.B.Gudden at the University of Prague used evaporation techniques to develop sensitive PbS films.Work directed by Kutzscher,initially at the Uni−versity of Berlin and later at the Electroacustic Company in Kiel,dealt primarily with the chemical deposition approach to film formation.This work ultimately lead to the fabrica−tion of the most sensitive German detectors.These works were,of course,done under great secrecy and the results were not generally known until after1945.Lead sulphide photoconductors were brought to the manufacturing stage of development in Germany in about1943.Lead sulphide was the first practical infrared detector deployed in a variety of applications during the war.The most notable was the Kiel IV,an airborne IR system that had excellent range and which was produced at Carl Zeiss in Jena under the direction of Werner K. Weihe [6].In1941,Robert J.Cashman improved the technology of thallous sulphide detectors,which led to successful produc−tion[36,37].Cashman,after success with thallous sulphide detectors,concentrated his efforts on lead sulphide detec−tors,which were first produced in the United States at Northwestern University in1944.After World War II Cash−man found that other semiconductors of the lead salt family (PbSe and PbTe)showed promise as infrared detectors[38]. The early detector cells manufactured by Cashman are shown in Fig. 5.Fig.4.The original1P25image converter tube developed by the RCA(a).This device measures115×38mm overall and has7pins.It opera−tion is indicated by the schematic drawing (b).After1945,the wide−ranging German trajectory of research was essentially the direction continued in the USA, Great Britain and Soviet Union under military sponsorship after the war[27,39].Kutzscher’s facilities were captured by the Russians,thus providing the basis for early Soviet detector development.From1946,detector technology was rapidly disseminated to firms such as Mullard Ltd.in Southampton,UK,as part of war reparations,and some−times was accompanied by the valuable tacit knowledge of technical experts.E.W.Kutzscher,for example,was flown to Britain from Kiel after the war,and subsequently had an important influence on American developments when he joined Lockheed Aircraft Co.in Burbank,California as a research scientist.Although the fabrication methods developed for lead salt photoconductors was usually not completely under−stood,their properties are well established and reproducibi−lity could only be achieved after following well−tried reci−pes.Unlike most other semiconductor IR detectors,lead salt photoconductive materials are used in the form of polycrys−talline films approximately1μm thick and with individual crystallites ranging in size from approximately0.1–1.0μm. They are usually prepared by chemical deposition using empirical recipes,which generally yields better uniformity of response and more stable results than the evaporative methods.In order to obtain high−performance detectors, lead chalcogenide films need to be sensitized by oxidation. 
The oxidation may be carried out by using additives in the deposition bath,by post−deposition heat treatment in the presence of oxygen,or by chemical oxidation of the film. The effect of the oxidant is to introduce sensitizing centres and additional states into the bandgap and thereby increase the lifetime of the photoexcited holes in the p−type material.3.Classification of infrared detectorsObserving a history of the development of the IR detector technology after World War II,many materials have been investigated.A simple theorem,after Norton[40],can be stated:”All physical phenomena in the range of about0.1–1 eV will be proposed for IR detectors”.Among these effects are:thermoelectric power(thermocouples),change in elec−trical conductivity(bolometers),gas expansion(Golay cell), pyroelectricity(pyroelectric detectors),photon drag,Jose−phson effect(Josephson junctions,SQUIDs),internal emis−sion(PtSi Schottky barriers),fundamental absorption(in−trinsic photodetectors),impurity absorption(extrinsic pho−todetectors),low dimensional solids[superlattice(SL), quantum well(QW)and quantum dot(QD)detectors], different type of phase transitions, etc.Figure6gives approximate dates of significant develop−ment efforts for the materials mentioned.The years during World War II saw the origins of modern IR detector tech−nology.Recent success in applying infrared technology to remote sensing problems has been made possible by the successful development of high−performance infrared de−tectors over the last six decades.Photon IR technology com−bined with semiconductor material science,photolithogra−phy technology developed for integrated circuits,and the impetus of Cold War military preparedness have propelled extraordinary advances in IR capabilities within a short time period during the last century [41].The majority of optical detectors can be classified in two broad categories:photon detectors(also called quantum detectors) and thermal detectors.3.1.Photon detectorsIn photon detectors the radiation is absorbed within the material by interaction with electrons either bound to lattice atoms or to impurity atoms or with free electrons.The observed electrical output signal results from the changed electronic energy distribution.The photon detectors show a selective wavelength dependence of response per unit incident radiation power(see Fig.8).They exhibit both a good signal−to−noise performance and a very fast res−ponse.But to achieve this,the photon IR detectors require cryogenic cooling.This is necessary to prevent the thermalHistory of infrared detectorsFig.5.Cashman’s detector cells:(a)Tl2S cell(ca.1943):a grid of two intermeshing comb−line sets of conducting paths were first pro−vided and next the T2S was evaporated over the grid structure;(b) PbS cell(ca.1945)the PbS layer was evaporated on the wall of the tube on which electrical leads had been drawn with aquadag(afterRef. 38).。
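As a quick check on the 0.1–1 eV range quoted above, the cut-off wavelength of a photon detector follows from the transition energy as lambda_c = hc/E (roughly 1.24/E when E is in eV and lambda in micrometres). The sketch below evaluates this for the ends of that range and, as an assumed illustrative figure, a band gap of about 0.4 eV loosely corresponding to the ~3 μm PbS response mentioned earlier.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def cutoff_wavelength_um(energy_ev):
    """Cut-off wavelength in micrometres for a transition energy given in eV."""
    return H * C / (energy_ev * EV) * 1e6   # equivalently ~1.24 / E[eV]

# The 0.1-1 eV range quoted above, plus an assumed ~0.4 eV band gap as a
# stand-in for PbS (whose ~3 um response is mentioned in this survey).
for label, e in [("1 eV", 1.0), ("0.1 eV", 0.1), ("~0.4 eV (assumed, PbS-like)", 0.4)]:
    print(f"{label:>28}: cut-off ~ {cutoff_wavelength_um(e):.1f} um")

So the 0.1–1 eV window maps onto cut-off wavelengths of roughly 1.2–12 μm, which is essentially the short- to long-wave infrared region with which this survey is concerned.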
Definition of events in the field of computer vision (bilingual Chinese-English edition)

English document:
The field of computer vision is concerned with enabling computers to interpret and understand visual information from the world around us. It encompasses a wide range of technologies and techniques, including image recognition, image processing, and video analysis. One aspect of computer vision that is gaining increasing attention is the ability to detect and recognize events from visual data.

Event detection in computer vision involves the identification of specific occurrences or changes in a scene that are relevant or interesting. This could include recognizing actions such as walking, running, or jumping, or identifying events like a person entering or leaving an area. Event detection can be used in a variety of applications, including surveillance, robotics, and autonomous vehicles.

In order to detect events in computer vision, researchers use various methods, including machine learning and deep learning algorithms. These algorithms are trained on large datasets of labeled images or videos to learn to recognize specific events. Once trained, these algorithms can be used to automatically detect events in new, unseen data.

Challenges in event detection include the variability of events and the complexity of real-world scenes. Events can occur under a wide variety of conditions, and the same event can look different depending on the lighting, viewpoint, or other factors. Additionally, scenes can be cluttered and contain multiple objects and events that need to be distinguished. Despite these challenges, research in event detection in computer vision is making progress and has the potential to enable a wide range of useful applications.

Chinese document (translated):
The field of computer vision is concerned with enabling computers to interpret and understand visual information from the world around us.
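As a deliberately minimal illustration of the train-then-score workflow described above, the sketch below treats event detection as thresholding a per-frame score over a sliding window. The scores, window length and threshold are assumptions made up for this example rather than any particular published method; in a real system the per-frame scores would come from a model trained on labelled clips.

from statistics import mean

# Hypothetical per-frame "motion energy" scores for a short clip
# (in practice these would come from a learned model or frame differencing).
frame_scores = [0.1, 0.2, 0.1, 0.9, 1.1, 1.0, 0.95, 0.2, 0.1, 0.15, 0.8, 0.9, 0.1]

def detect_events(scores, window=3, threshold=0.7):
    """Slide a window over per-frame scores, flag windows whose mean score
    exceeds the threshold, and merge overlapping hits into events.
    Returns a list of (start_frame, end_frame) tuples."""
    events = []
    for start in range(len(scores) - window + 1):
        if mean(scores[start:start + window]) >= threshold:
            if events and start <= events[-1][1]:           # overlaps previous hit
                events[-1] = (events[-1][0], start + window - 1)
            else:
                events.append((start, start + window - 1))
    return events

print(detect_events(frame_scores))   # -> [(2, 7)] for the toy scores above

A production pipeline would replace the hand-set threshold with a learned classifier and work on real image features, but the overall structure (score each frame, aggregate over a window, merge overlapping detections into events) is the same.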
2025年研究生考试考研英语(一201)自测试卷及答案指导一、完型填空(10分)Section I: Cloze TestDirections: Read the following text carefully and choose the best answer from the four choices marked A, B, C, and D for each blank.Passage:In today’s rapidly evolving digital landscape, the role of social media has become increasingly significant. Social media platforms are not just tools for personal interaction; they also serve as powerful channels for business promotion and customer engagement. Companies are now leveraging these platforms to reach out to their target audience more effectively than ever before. However, the effectiveness of social media marketing (1)_on how well the company understands its audience and the specific platform being used. For instance, while Facebook may be suitable for reaching older demographics, Instagram is more popular among younger users. Therefore, it is crucial for businesses to tailor their content to fit the preferences and behaviors of the (2)_demographic they wish to target.Moreover, the rise of mobile devices has further transformed the way peopleconsume content online. The majority of social media users now access these platforms via smartphones, which means that companies must ensure that their content is optimized for mobile viewing. In addition, the speed at which information spreads on social media can be both a boon and a bane. On one hand, positive news about a brand can quickly go viral, leading to increased visibility and potentially higher sales. On the other hand, negative publicity can spread just as fast, potentially causing serious damage to a brand’s reputation. As such, it is imperative for companies to have a well-thought-out strategy for managing their online presence and responding to feedback in a timely and professional manner.In conclusion, social media offers unparalleled opportunities for businesses to connect with customers, but it requires careful planning and execution to (3)___the maximum benefits. By staying attuned to trends and continuously adapting their strategies, companies can harness the power of social media to foster growth and build strong relationships with their audiences.1.[A] relies [B] bases [C] stands [D] depends2.[A] particular [B] peculiar [C] special [D] unique3.[A] obtain [B] gain [C] achieve [D] accomplishAnswers:1.D - depends2.A - particular3.C - achieveThis cloze test is designed to assess comprehension and vocabulary skills, as well as the ability to infer the correct usage of words within the context of the passage. Each question is crafted to require understanding of the sentence structure and meaning to select the best option.二、传统阅读理解(本部分有4大题,每大题10分,共40分)第一题Passage:In the 1950s, the United States experienced a significant shift in the way people viewed education. This shift was largely due to the Cold War, which created a demand for a highly educated workforce. As a result, the number of students pursuing higher education in the U.S. began to grow rapidly.One of the most important developments during this period was the creation of the Master’s degree program. The Master’s degree was designed to provide students with advanced knowledge and skills in a specific field. This program became increasingly popular as more and more people realized the value of a higher education.The growth of the Master’s degree program had a profound impact on American society. It helped to create a more educated and skilled workforce, which in turn contributed to the nation’s economic growth. 
It also helped to improve the quality of life for many Americans by providing them with opportunities for career advancement and personal development.Today, the Master’s degree is still an important part of the American educational system. However, there are some challenges that need to be addressed. One of the biggest challenges is the rising cost of education. As the cost of tuition continues to rise, many students are unable to afford the cost of a Master’s degree. This is a problem that needs to be addressed if we are to continue to provide high-quality education to all Americans.1、What was the main reason for the shift in the way people viewed education in the 1950s?A. The demand for a highly educated workforce due to the Cold War.B. The desire to improve the quality of life for all Americans.C. The increasing cost of education.D. The creation of the Master’s degree program.2、What is the purpose of the Master’s degree program?A. To provide students with basic knowledge and skills in a specific field.B. To provide students with advanced knowledge and skills in a specific field.C. To provide students with job training.D. To provide students with a general education.3、How did the growth of the Master’s degree program impact American society?A. It helped to create a more educated and skilled workforce.B. It helped to improve the quality of life for many Americans.C. It caused the economy to decline.D. It increased the cost of education.4、What is one of the biggest challenges facing the Master’s deg ree program today?A. The demand for a highly educated workforce.B. The rising cost of education.C. The desire to improve the quality of life for all Americans.D. The creation of new educational programs.5、What is the author’s main point in the last pa ragraph?A. The Master’s degree program is still an important part of the American educational system.B. The cost of education needs to be addressed.C. The Master’s degree program is no longer relevant.D. The author is unsure about the future of the Master’s degree program.第二题Reading Comprehension (Traditional)Passage:The digital revolution has transformed the way we live, work, and communicate. With the advent of the internet and the proliferation of smart devices, information is more accessible than ever before. This transformation has had a profound impact on education, with online learning platforms providing unprecedented access to knowledge. However, this shift towards digital learningalso poses challenges, particularly in terms of ensuring equitable access and maintaining educational quality.While the benefits of digital learning are numerous, including flexibility, cost-effectiveness, and the ability to reach a wider audience, there are concerns about the potential for increased social isolation and the difficulty in replicating the dynamic, interactive environment of a traditional classroom. Moreover, not all students have equal access to the technology required for online learning, which can exacerbate existing inequalities. It’s crucial that as we embrace the opportunities presented by digital technologies, we also address these challenges to ensure that no student is left behind.Educators must adapt their teaching methods to take advantage of new tools while also being mindful of the need to foster a sense of community and support among students. 
By integrating both digital and traditional approaches, it’s possible to create a learning environment that leverages the strengths of each, ultimately enhancing the educational experience for all students.Questions:1、What is one of the main impacts of the digital revolution mentioned in the passage?•A) The reduction of social interactions•B) The increase in physical book sales•C) The transformation of communication methods•D) The decline of online learning platformsAnswer: C) The transformation of communication methods2、According to the passage, what is a challenge associated with digital learning?•A) The inability to provide any form of interaction•B) The potential to widen the gap between different socioeconomic groups •C) The lack of available content for online courses•D) The complete replacement of traditional classroomsAnswer: B) The potential to widen the gap between different socioeconomic groups3、Which of the following is NOT listed as a benefit of digital learning in the passage?•A) Cost-effectiveness•B) Flexibility•C) Increased social isolation•D) Wider reachAnswer: C) Increased social isolation4、The passage suggests that educators should do which of the following in response to the digital revolution?•A) Abandon all traditional teaching methods•B) Focus solely on improving students’ technical skills•C) Integrate digital and traditional teaching methods•D) Avoid using any digital tools in the classroomAnswer: C) Integrate digital and traditional teaching methods5、What is the author’s stance on the role of digital technologies ineducation?•A) They are unnecessary and should be avoided•B) They offer opportunities that should be embraced, but with caution •C) They are the only solution to current educational challenges•D) They have no real impact on the quality of educationAnswer: B) They offer opportunities that should be embraced, but with cautionThis reading comprehension exercise is designed to test your understanding of the text and your ability to identify key points and arguments within the passage.第三题Reading PassageWhen the French sociologist and philosopher Henri Lefebvre died in 1991, he left behind a body of work that has had a profound influence on the fields of sociology, philosophy, and cultural studies. Lefebvre’s theories focused on the relationship between space and society, particularly how space is produced, represented, and experienced. His work has been widely discussed and debated, with scholars and critics alike finding value in his insights.Lefebvre’s most famous work, “The Production of Space,” published in 1974, laid the foundation for his theoretical framework. In this book, he argues that space is not simply a container for human activities but rather an active agent in shaping social relationships and structures. Lefebvre introduces the concept of “three spaces” to describe the production of space: the perceived space,the lived space, and the representative space.1、According to Lefebvre, what is the primary focus of his theories?A. The development of urban planningB. The relationship between space and societyC. The history of architectural designD. The evolution of cultural practices2、What is the main argument presented in “The Production of Space”?A. Space is a passive entity that reflects social structures.B. Space is a fundamental building block of society.C. Space is an object that can be easily manipulated by humans.D. Space is irrelevant to the functioning of society.3、Lefebvre identifies three distinct spaces. 
Which of the following is NOT one of these spaces?A. Perceived spaceB. Lived spaceC. Representative spaceD. Economic space4、How does Lefebvre define the concept of “three spaces”?A. They are different types of architectural designs.B. They represent different stages of the production of space.C. They are different ways of perceiving and experiencing space.D. They are different social classes that occupy space.5、What is the significance of Lefebvre’s work in the fields of sociology and philosophy?A. It provides a new perspective on the role of space in social relationships.B. It offers a comprehensive guide to urban planning and development.C. It promotes the idea that space is an unimportant aspect of society.D. It focuses solely on the history of architectural movements.Answers:1、B2、B3、D4、C5、A第四题Reading Comprehension (Traditional)Read the following passage and answer the questions that follow. Choose the best answer from the options provided.Passage:In recent years, there has been a growing interest in the concept of “smart cities,” which are urban areas that u se different types of electronic data collection sensors to supply information which is used to manage assets and resources efficiently. This includes data collected from citizens, devices, andassets that is processed and analyzed to monitor and manage traffic and transportation systems, power plants, water supply networks, waste management, law enforcement, information systems, schools, libraries, hospitals, and other community services. The goal of building a smart city is to improve quality of life by using technology to enhance the performance and interactivity of urban services, to reduce costs and resource consumption, and to increase contact between citizens and government. Smart city applications are developed to address urban challenges such as environmental sustainability, mobility, and economic development.Critics argue, however, that while the idea of a smart city is appealing, it raises significant concerns about privacy and security. As more and more aspects of daily life become digitized, the amount of personal data being collected also increases, leading to potential misuse or unauthorized access. Moreover, the reliance on technology for critical infrastructure can create vulnerabilities if not properly secured against cyber-attacks. There is also a risk of widening the digital divide, as those without access to the necessary technologies may be left behind, further exacerbating social inequalities.Despite these concerns, many governments around the world are moving forward with plans to develop smart cities, seeing them as a key component of their future strategies. 
They believe that the benefits of improved efficiency and service delivery will outweigh the potential risks, provided that adequate safeguards are put in place to protect citizen s’ data and ensure the resilience of thecity’s technological framework.Questions:1、What is the primary purpose of developing a smart city?•A) To collect as much data as possible•B) To improve the quality of life through efficient use of technology •C) To replace all traditional forms of communication•D) To eliminate the need for human interaction in urban services2、According to the passage, what is one of the main concerns raised by critics regarding smart cities?•A) The lack of available technology•B) The high cost of implementing smart city solutions•C) Privacy and security issues related to data collection•D) The inability to provide essential services3、Which of the following is NOT mentioned as an area where smart city technology could be applied?•A) Traffic and transportation systems•B) Waste management•C) Educational institutions•D) Agricultural production4、How do some governments view the development of smart cities despite the criticisms?•A) As a risky endeavor that should be avoided•B) As a temporary trend that will soon pass•C) As a strategic move with long-term benefits•D) As an unnecessary investment in technology5、What does the term “digital divide” refer to in the context of smart cities?•A) The gap between the amount of data collected and the amount of data analyzed•B) The difference in technological advancement between urban and rural areas•C) The disparity in access to technology and its impact on social inequality•D) The separation of digital and non-digital methods of service delivery Answers:1、B) To improve the quality of life through efficient use of technology2、C) Privacy and security issues related to data collection3、D) Agricultural production4、C) As a strategic move with long-term benefits5、C) The disparity in access to technology and its impact on social inequality三、阅读理解新题型(10分)Reading Comprehension (New Type)Passage:The rise of e-commerce has transformed the way people shop and has had aprofound impact on traditional brick-and-mortar retailers. Online shopping offers convenience, a wide range of products, and competitive prices. However, it has also raised concerns about the future of physical stores. This passage examines the challenges and opportunities facing traditional retailers in the age of e-commerce.In recent years, the popularity of e-commerce has soared, thanks to advancements in technology and changing consumer behavior. According to a report by Statista, global e-commerce sales reached nearly$4.2 trillion in 2020. This upward trend is expected to continue, with projections showing that online sales will account for 25% of total retail sales by 2025. As a result, traditional retailers are facing fierce competition and must adapt to the digital landscape.One of the main challenges for brick-and-mortar retailers is the shift in consumer preferences. Many shoppers now prefer the convenience of online shopping, which allows them to compare prices, read reviews, and purchase products from the comfort of their homes. This has led to a decrease in foot traffic in physical stores, causing many retailers to struggle to attract customers. 
Additionally, the ability to offer a wide range of products at competitive prices has become a hallmark of e-commerce, making it difficult for traditional retailers to compete.Despite these challenges, there are opportunities for traditional retailers to thrive in the age of e-commerce. One approach is to leverage the unique strengths of physical stores, such as the ability to provide an immersiveshopping experience and personalized customer service. Retailers can also use technology to enhance the in-store experience, such as implementing augmented reality (AR) to allow customers to visualize products in their own homes before purchasing.Another strategy is to embrace the digital world and create a seamless shopping experience that integrates online and offline channels. For example, retailers can offer online returns to brick-and-mortar stores, allowing customers to shop online and return items in person. This not only provides convenience but also encourages customers to make additional purchases while they are in the store.Furthermore, traditional retailers can leverage their established brand loyalty and customer base to create a competitive advantage. By focusing on niche markets and offering unique products or services, retailers can differentiate themselves from e-commerce giants. Additionally, retailers can invest in marketing and promotions to drive traffic to their physical stores, even as more consumers turn to online shopping.In conclusion, the rise of e-commerce has presented traditional retailers with significant challenges. However, by embracing the digital landscape, leveraging their unique strengths, and focusing on customer satisfaction, traditional retailers can adapt and thrive in the age of e-commerce.Questions:1.What is the main concern raised about traditional retailers in the age of e-commerce?2.According to the passage, what is one of the main reasons for the decline in foot traffic in physical stores?3.How can traditional retailers leverage technology to enhance the in-store experience?4.What strategy is mentioned in the passage that involves integrating online and offline channels?5.How can traditional retailers create a competitive advantage in the age of e-commerce?Answers:1.The main concern is the fierce competition from e-commerce and the shift in consumer preferences towards online shopping.2.The main reason is the convenience and competitive prices offered by e-commerce, which make it difficult for traditional retailers to compete.3.Traditional retailers can leverage technology by implementing augmented reality (AR) and offering online returns to brick-and-mortar stores.4.The strategy mentioned is to create a seamless shopping experience that integrates online and offline channels, such as offering online returns to brick-and-mortar stores.5.Traditional retailers can create a competitive advantage by focusing on niche markets, offering unique products or services, and investing in marketing and promotions to drive traffic to their physical stores.四、翻译(本大题有5小题,每小题2分,共10分)First QuestionTranslate the following sentence into Chinese. Write your translation on the ANSWER SHEET.Original Sentence:“Although technology has brought about nume rous conveniences in our daily lives, it is also true that it has led to significant privacy concerns, especially with the rapid development of digital communication tools.”Answer:尽管技术在我们的日常生活中带来了诸多便利,但也不可否认它导致了重大的隐私问题,尤其是在数字通信工具快速发展的情况下。
Intrusion Detection Systems:A Survey and TaxonomyStefan AxelssonDepartment of Computer EngineeringChalmers University of TechnologyG¨oteborg,Swedenemail:sax@ce.chalmers.se14March2000AbstractThis paper presents a taxonomy of intrusion detection systems that is then used to survey and classify a number of research prototypes.The taxonomy consists of a classificationfirst ofthe detection principle,and second of certain operational aspects of the intrusion detection sys-tem as such.The systems are also grouped according to the increasing difficulty of the problemthey attempt to address.These classifications are used predictively,pointing towards a numberof areas of future research in thefield of intrusion detection.1IntroductionThere is currently a need for an up-to-date,thorough taxonomy and survey of thefield of intru-sion detection.This paper presents such a taxonomy,together with a survey of the important research intrusion detection systems to date and a classification of these systems according to the taxonomy.It should be noted that the main focus of this survey is intrusion detection systems, in other words major research efforts that have resulted in prototypes that can be studied both quantitatively and qualitatively.A taxonomy serves several purposes[FC93]:Description It helps us to describe the world around us,and provides us with a tool with which to order the complex phenomena that surround us into more manageable units. Prediction By classifying a number of objects according to our taxonomy and then observing the ‘holes’where objects may be missing,we can exploit the predictive qualities of a good tax-onomy.In the ideal case,the classification points us in the right direction when undertaking further studies.Explanation A good taxonomy will provide us with clues about how to explain observed phe-nomena.We aim to develop a taxonomy that will provide at least some results in all areas outlined above.With one exception[DDW99],previous surveys have not been strong when it comes to a more systematic,taxonomic approach.Surveys such as[Lun88,MDL90,HDL90,ESP95] are instead somewhat superficial and dated by today’s standards.The one previous attempt at a taxonomy[DDW99]falls short in some respects,most notably in the discussion of detection principles,where it lacks the necessary depth.2Introduction to intrusion detectionIntrusion detection systems are the‘burglar alarms’(or rather‘intrusion alarms’)of the computer securityfield.The aim is to defend a system by using a combination of an alarm that sounds whenever the site’s security has been compromised,and an entity—most often a site security officer(SSO)—that can respond to the alarm and take the appropriate action,for instance by ousting the intruder,calling on the proper external authorities,and so on.This method should be contrasted with those that aim to strengthen the perimeter surrounding the computer system.We believe that both of these methods should be used,along with others,to increase the chances of mounting a successful defence,relying on the age-old principle of defence in depth.It should be noted that the intrusion can be one of a number of different types.For example,a user might steal a password and hence the means by which to prove his identity to the computer. 
We call such a user a masquerader,and the detection of such intruders is an important problem for thefield.Other important classes of intruders are people who are legitimate users of the system but who abuse their privileges,and people who use pre-packed exploit scripts,often found on the Internet,to attack the system through a network.This is by no means an exhaustive list,and the classification of threats to computer installations is an active area of research.Early in the research into such systems two major principles known as anomaly detection and signature detection were arrived at,the former relying onflagging all behaviour that is abnormal for an entity,the latterflagging behaviour that is close to some previously defined pattern signature of a known intrusion.The problems with thefirst approach rest in the fact that it does not necessarily detect undesirable behaviour,and that the false alarm rates can be high.The problems with the latter approach include its reliance on a well defined security policy,which may be absent,and its inability to detect intrusions that have not yet been made known to the intrusion detection system.It should be noted that to try to bring more stringency to these terms,we use them in a slightly different fashion than previous researchers in thefield.An intrusion detection system consists of an audit data collection agent that collects informa-tion about the system being observed.This data is then either stored or processed directly by the detector proper,the output of which is presented to the SSO,who then can take further action, normally beginning with further investigation into the causes of the alarm.3Previous workMost,if not all,would agree that the central part of an intrusion detection system is the detector proper and its underlying principle of operation.Hence the taxonomies in this paper are divid-ed into two groups:thefirst deals with detection principles,and the second deals with system characteristics(the other phenomena that are necessary to form a complete intrusion detection system).In order to gain a better footing when discussing thefield,it is illustrative to review the existing research into the different areas of intrusion detection starting at the source,the intrusion itself, and ending with the ultimate result,the decision.Obviously,the source of our troubles is an action or activity that is generated by the intruder. 
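Before looking at where detection rules come from, it may help to make the two principles just described concrete. The fragment below is a minimal sketch rather than a description of any surveyed system: the audit-record layout, the three-failure threshold and the signature string are invented for illustration. The first rule flags behaviour that is abnormal for a user, the second flags behaviour matching a known attack pattern.

# Toy audit records: (user, action, detail). The format is an assumption for this sketch.
audit_log = [
    ("alice", "login_failed", ""),
    ("alice", "login_failed", ""),
    ("alice", "login_failed", ""),
    ("bob",   "shell_command", "cat /etc/passwd"),
    ("carol", "login_ok", ""),
]

FAILED_LOGIN_THRESHOLD = 3                # anomaly-style rule: "abnormally many failures"
SIGNATURES = ["cat /etc/passwd"]          # signature-style rule: a known suspicious pattern

def anomaly_alerts(log, threshold=FAILED_LOGIN_THRESHOLD):
    counts = {}
    for user, action, _ in log:
        if action == "login_failed":
            counts[user] = counts.get(user, 0) + 1
    return [f"anomaly: {u} had {n} failed logins" for u, n in counts.items() if n >= threshold]

def signature_alerts(log, signatures=SIGNATURES):
    return [f"signature: {u} issued '{d}'"
            for u, _, d in log if any(sig in d for sig in signatures)]

for alert in anomaly_alerts(audit_log) + signature_alerts(audit_log):
    print(alert)

The anomaly rule knows nothing about specific attacks, only about what is unusual; the signature rule knows nothing about the user, only about the pattern it was given. In both cases, the starting point of detection is the intruder's action itself, to which we now turn.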
This action can be one of a bewildering range,and it seems natural to start our research into how to construct a detector byfirst studying the nature of the signal that we wish to detect.This points to the importance of constructing taxonomies of computer security intrusions,but,perhaps surprisingly,thefield is not over-laden with literature.Such classifications of computer security violations that exist[LJ97,NP89]are not directed towards intrusion detection,and on closer study they appear to be formulated at too high a level of representation to be applicable to the problem in hand.We know of only one study that connects the classification of different computer security violations to the problem of detection,in this case the problem of what traces are necessary to detect intrusion after the fact[ALGJ98].From the nature of the source we move to the question of how how to observe this source, and what problems we are likely to have in doing so.In a security context,we would probably perform some sort of security audit,resulting in a security audit log.Sources of frustration when undertaking logging include the fact that we may not be able to observe our subject directly inisolation;background traffic will also be present in our log,and this will most likely come from benign usage of the system.In other words,we would have an amount of traffic that is to varying degrees similar to the subject we wish to observe.However,we have found no study that goes into detail on the subject of what normal traffic one might expect under what circumstances.Although one paper states that in general it is probably not possible[HL93],we are not as pessimistic.With a sufficiently narrow assumption of operational parameters for the system,we believe useful results can be achieved.This brings us to the results of the security logging—in other words what can we observe—and what we suspect we should observe given an idea of the nature of the security violation, background behaviour,and observation mechanism.One issue,for example,is be precisely what data to commit to our security log.Again the literature is scarce,although for instance[ALGJ98, HL93,LB98]address some of the issues,albeit from different angles.How then to formulate the rule that governs our intrusion detection decision?Perhaps un-surprisingly given the state of research into the previous issues,this also has not been thor-oughly addressed.More often than not we have to reverse engineer the decision rule from the way in which the detector is designed,and often it is the mechanism that is used to im-plement the detector rather than the detection principle itself.Here wefind the main body of research:there are plenty of suggestions and implementations of intrusion detectors in the liter-ature,[AFV95,HDL90,HCMM92,WP96]to name but a few.Several of these detectors employ several,distinct decision rules,however,and as we will see later,it is thus often impossible to place the different research prototypes into a single category.It becomes more difficult to classify the detection principle as such because,as previously noted,it is often implicit in the work cit-ed.Thus we have to do the best we can to classify as precisely as possible given the principle of operation of the detector.Some work is clearer in this respect,for example[LP99].In the light of this,the main motivation for taking an in-depth approach to the different kinds of detectors that have been employed is that it is natural to assume that different intrusion det-ection principles will behave 
A detailed look at such intrusion detection principles is thus in order, giving us a base for the study of how the operational effectiveness is affected by the various factors. These factors are the intrusion detector, the intrusion we wish to detect, and the environment in which we wish to detect it. Such a distinction has not been made before, as often the different intrusion detection systems are lumped together according to the principle underlying the mechanism used to implement the detector. We read of detectors based on an expert system, an artificial neural network, or—least convincingly—a data mining approach.

4 A taxonomy of intrusion detection principles

The taxonomy described below is intended to form a hierarchy, shown in Table 1. A general problem is that most references do not describe explicitly the decision rules employed, but rather the framework in which such rules could be set. This makes it well nigh impossible to classify down to the level of detail that we would like to achieve. Thus we often have to stop our categorisation when we reach the level of the framework, for example the expert system, and conclude that while the platform employed in the indicated role would probably have a well defined impact on the operational characteristics of the intrusion detection principle, we cannot at present categorise that impact. Due both to this and the fact that the field as a whole is still rapidly expanding, the present taxonomy should of course be seen as a first attempt.

Since there is no established terminology, we are faced with the problem of finding satisfactory terms for the different classes. Wherever possible we have tried to find new terms for the phenomena we are trying to describe. However, it has not been possible to avoid using terms already in the field that often have slightly differing connotations or lack clear definitions altogether. For this reason we give a definition for all the terms used below, and we wish to apologise in advance for any confusion that may arise should the reader already have a definition in mind for a term used.

Table 1: Classification of detection principles. Only part of the original table is legible; the entries that can be recovered are:
anomaly / self-learning / non-time series: W&S (A.4)
anomaly / self-learning / time series: Hyperview (A.8)
anomaly / programmed / descriptive statistics: NSM (A.6)
anomaly / programmed / default deny (state series modelling): DPEM (A.12), JANUS (A.17), Bro (A.20)
signature / programmed / state-modelling: USTAT (A.11)
signature / programmed / petri-net, expert-system, string-matching, simple rule-based: (entries illegible)
signature inspired / self-learning: Ripper (A.21)

4.1 Anomaly detection

Anomaly: In anomaly detection we watch not for known intrusion—the signal—but rather for abnormalities in the traffic in question; we take the attitude that something that is abnormal is probably suspicious. The construction of such a detector starts by forming an opinion on what constitutes normal for the observed subject (which can be a computer system, a particular user, etc.), and then deciding on what percentage of the activity to flag as abnormal, and how to make this particular decision. This detection principle thus flags behaviour that is unlikely to originate from the normal process, without regard to actual intrusion scenarios.

4.1.1 Self-learning systems

Self-learning: Self-learning systems learn by example what constitutes normal for the installation; typically by observing traffic for an extended period of time and building some model of the underlying
process.Non-time series A collective term for detectors that model the normal behaviour of the system by the use of a stochastic model that does not take time series behaviour intoaccount.Rule modelling The system itself studies the traffic and formulates a number of rules that describe the normal operation of the system.In the detection stage,the systemapplies the rules and raises the alarm if the observed traffic forms a poor match(ina weighted sense)with the rule base.Descriptive statistics A system that collects simple,descriptive,mono-modal statistics from certain system parameters into a profile,and constructs a distance vector forthe observed traffic and the profile.If the distance is great enough the system raisesthe alarm.Time series This model is of a more complex nature,taking time series behaviour into ac-count.Examples include such techniques such as a hidden Markov model(HMM),anartificial neural network(ANN),and other more or less exotic modelling techniques.ANN An artificial neural network(ANN)is an example of a‘black box’modelling ap-proach.The system’s normal traffic is fed to an ANN,which subsequently‘learns’the pattern of normal traffic.The output of the ANN is then applied to new trafficand is used to form the intrusion detection decision.In the case of the surveyedsystem this output was not deemed of sufficient quality to be used to form the out-put directly,but rather was fed to a second level expert system stage that took thefinal decision.4.1.2ProgrammedProgrammed The programmed class requires someone,be it a user or other functionary,who teaches the system—programs it—to detect certain anomalous events.Thus the user of the system forms an opinion on what is considered abnormal enough for the system to signal a security violation.Descriptive statistics These systems build a profile of normal statistical behaviour by the parameters of the system by collecting descriptive statistics on a number of parameters.Such parameters can be the number of unsuccessful logins,the number of networkconnections,the number of commands with error returns,etc.Simple statistics In all cases in this class the collected statistics were used by higher level components to make a more abstract intrusion detection decision.Simple rule-based Here the user provides the system with simple but still compound rules to apply to the collected statistics.Threshold This is arguably the simplest example of the programmed—descriptive stati-stics detector.When the system has collected the necessary statistics,the user canprogram predefined thresholds(perhaps in the form of simple ranges)that definewhether to raise the alarm or not.An example is‘(Alarm if)number of unsuccess-ful login attempts 3.’Default deny The idea is to state explicitly the circumstances under which the observed system operates in a security-benign manner,and toflag all deviations from this oper-ation as intrusive.This has clear correspondence with a default deny security policy,formulating,as does the general legal system,that which is permitted and labelling allelse illegal.A formulation that while being far from common,is at least not unheardof.State series modelling In state series modelling,the policy for security benign oper-ation is encoded as a set of states.The transitions between the states are implicitin the model,not explicit as when we code a state machine in an expert systemshell.As in any state machine,once it has matched one state,the intrusion detec-tion system engine waits for the next transition to occur.If the 
monitored action isdescribed as allowed the system continues,while if the transition would take thesystem to another state,any(implied)state that is not explicitly mentioned willcause the system to sound the alarm.The monitored actions that can trigger tran-sitions are usually security relevant actions such asfile accesses(reads and writes),the opening of‘secure’communications ports,etc.The rule matching engine is simpler and not as powerful as a full expert system.There is no unification,for example.It does allow fuzzy matching,however—fuzzy in the sense that an attribute such as‘Write access to anyfile in the/tmp di-rectory’could trigger a transition.Otherwise the actual specification of the security-benign operation of the program could probably not be performed realistically. 4.2Signature detectionSignature In signature detection the intrusion detection decision is formed on the basis of knowl-edge of a model of the intrusive process and what traces it ought to leave in the observed system.We can define in any and all instances what constitutes legal or illegal behaviour, and compare the observed behaviour accordingly.It should be noted that these detectors try to detect evidence of intrusive activity irrespective of any idea of what the background traffic,i.e.normal behaviour,of the system looks like.These detectors have to be able to operate no matter what constitutes the normal behaviour of the system,looking instead for patterns or clues that are thought by the designers to stand out against the possible background traffic.This places very strict demands on the model of the nature of the intrusion.No sloppiness can be afforded here if the resulting detector is to have an acceptable detection and false alarm rate.Programmed The system is programmed with an explicit decision rule,where the program-mer has himself prefiltered away the influence of the channel on the observation space.The detection rule is simple in the sense that it contains a straightforward coding ofwhat can be expected to be observed in the event of an intrusion.Thus,the idea is to state explicitly what traces of the intrusion can be thought to occuruniquely in the observation space.This has clear correspondence with a default permitsecurity policy,or the formulation that is common in law,i.e.listing illegal behaviourand thereby defining all that is not explicitly listed as being permitted.State-modelling State-modelling encodes the intrusion as a number of different states, each of which has to be present in the observation space for the intrusion to beconsidered to have taken place.They are by their nature time series models.Twosubclasses exist:in thefirst,state transition,the states that make up the intrusionform a simple chain that has to be traversed from beginning to end;in the second,petri-net,the states form a petri-net.In this case they can have a more generaltree structure,in which several preparatory states can be fulfilled in any order,irrespective of where in the model they occur.Expert-system An expert system is employed to reason about the security state of the system,given rules that describe intrusive behaviour.Often forward-chaining,production-based tool are used,since these are most appropriate when dealingwith systems where new facts(audit events)are constantly entered into the syst-em.These expert systems are often of considerable power andflexibility,allowingthe user access to powerful mechanisms such as unification.This often comes at acost to execution speed when compared with simpler 
methods.String matching String matching is a simple,often case sensitive,substring matching of the characters in text that is transmitted between systems,or that otherwise arisefrom the use of the system.Such a method is of course not in the leastflexible,butit has the virtue of being simple to understand.Many efficient algorithms exist forthe search for substrings in a longer(audit event)string.Simple rule-based These systems are similar to the more powerful expert system,but not as advanced.This often leads to speedier execution.4.3Compound detectorsSignature inspired These detectors form a compound decision in view of a model of both the normal behaviour of the system and the intrusive behaviour of the intruder.The detector operates by detecting the intrusion against the background of the normal traffic in the sy-stem.At present,we call these detectors‘signature inspired’because the intrusive model is much stronger and more explicit than the normal model.These detectors have—at least in theory—a much better chance of correctly detecting truly interesting events in the super-vised system,since they both know the patterns of intrusive behaviour and can relate them to the normal behaviour of the system.These detectors would at the very least be able to qualify their decisions better,i.e.give us an improved indication of the quality of the alarm.Thus these systems are in some senses the most‘advanced’detectors surveyed.Self learning These systems automatically learn what constitutes intrusive and normal be-haviour for a system by being presented with examples of normal behaviour inter-spersed with intrusive behaviour.The examples of intrusive behaviour must thus beflagged as such by some outside authority for the system to be able to distinguish thetwo.Automatic feature selection There is only one example of such a system in this clas-sification,and it operates by automatically determining what observable featuresare interesting when forming the intrusion detection decision,isolating them,andusing them to form the intrusion detection decision later.4.4Discussion of classificationWhile some systems in table1appear in more than one category,this is not because the classi-fication is ambiguous but because the systems employ several different principles of detection. Some systems use a two-tiered model of detection,where one lower level feeds a higher level. 
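The two simplest programmed classes described above, a threshold rule over collected statistics and a string-matching signature rule, are small enough to sketch directly; the sketch below also shows the two-tiered arrangement just mentioned, with a lower level that summarises traffic and a higher level that applies explicit rules to that summary. The record fields are those of the audit-record sketch given earlier, and the threshold value and signature strings are arbitrary illustrative choices, not values used by any surveyed system.

```python
from collections import Counter
from typing import Iterable, List

FAILED_LOGIN_THRESHOLD = 3                       # illustrative value
KNOWN_BAD_SUBSTRINGS = ("/etc/passwd", "../..")  # illustrative signatures


def collect_statistics(log: Iterable) -> Counter:
    """Lower tier: reduce raw audit records to per-subject failure counts."""
    return Counter(
        rec.subject for rec in log
        if rec.action == "login" and rec.outcome == "failure"
    )


def threshold_rule(failures: Counter) -> List[str]:
    """Programmed, descriptive-statistics rule applied to the collected counts."""
    return [
        f"threshold alarm: {subject} had {count} failed logins"
        for subject, count in failures.items()
        if count >= FAILED_LOGIN_THRESHOLD
    ]


def string_matching_rule(log: Iterable) -> List[str]:
    """Programmed signature rule: alarm on known-bad substrings in the target
    field, irrespective of what the normal traffic looks like."""
    return [
        f"signature alarm: {rec.subject} touched {rec.target}"
        for rec in log
        for bad in KNOWN_BAD_SUBSTRINGS
        if bad in rec.target
    ]


def two_tier(log: List) -> List[str]:
    """A two-tiered arrangement: the lower tier summarises the traffic and the
    higher tier applies explicit decision rules to the summary."""
    return threshold_rule(collect_statistics(log)) + string_matching_rule(log)
```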
These systems(MIDAS,NADIR,Haystack)are all of the type that make signature—programmed—default permit decisions on anomaly data.One could of course conceive of another type of detector that detects anomalies from signature data(or alarms in this case)and indeed one such system has been presented in[MCZH99],but unfortunately the details of this particular system are so sketchy as to preclude further classification here.It is probable that the detection thresholds of these systems,at least in the lower tier,can be lowered(the systems made more sensitive)because any‘false alarms’at this level can be mitigated at the higher level.It should be noted that some of the more general mechanisms surveyed here could be used to implement several different types of detectors,as is witnessed by some multiple entries.However, most of the more recent papers do not go into sufficient detail to enable us draw more precise conclusions.Examples of the systems that are better described in this respect are NADIR,MIDAS, and the P-BEST component of EMERALD.4.4.1Orthogonal conceptsFrom study of the taxonomy,a number of orthogonal concepts become clear:anomaly/signature on the one hand,and self-learning/programmed on the other.The lack of detectors in the signature—self-learning class is conspicuous,particularly since detectors in this class would probably prove use-ful,combining as they do the advantages of self-learning systems—they do not have to perform the arduous and difficult task of specifying intrusion signatures—with the detection efficiency of signature based systems.4.4.2High level categoriesWe see that the systems classified fall into three clear categories depending on the type of intrusion they detect most readily.In order of increasing difficulty they are:Well known intrusions Intrusions that are well known,and for which a‘static’,well defined pat-tern can be found.Such intrusions are often simple to execute,and have very little inherent variability.In order to exploit their specificflaw they must be executed in a straightforward, predictable manner.Generalisable intrusions These intrusions are similar to the well known intrusions,but have a larger or smaller degree of variability.These intrusions often exploit more generalflaws in the attacked system,and there is much inherent opportunity for variation in the specific attack that is executed.Unknown intrusions These intrusions have the weakest coupling to a specificflaw,or one that is very general in nature.Here,the intrusion detection system does not really know what to expect.4.4.3Examples of systems in the high level categoriesA few examples of systems that correspond to these three classes will serve to illustrate how the surveyed systems fall into these three categories.Well known intrusions First we have the simple,signature systems that correspond to the well known intrusions class.The more advanced(such as IDIOT)and general(such as P-BEST in EMER-ALD)move towards the generalisable intrusions class by virtue of their designers’attempts to ab-stract the signatures,and take more of the expected normal behaviour into account when speci-fying signatures[LP99].However,the model of normal behaviour is still very weak,and is often not outspoken;the designers draw on experience rather than on theoretical research in the area of expected source behaviour.Generalisable intrusions The generalisable intrusions category is only thinly represented in the survey.It corresponds to a signature of an attack that is‘generalised’(specified on a less detailed 
level)leaving unspecified all parameters that can be varied while still ensuring the success of the attack.Some of the more advanced signature based systems have moved towards this ideal, but still have a long way to go.The problem is further compounded the systems’lack of an clear model of what constitutes normal traffic.It is of course more difficult to specify a general signature,while remaining certain that the normal traffic will not trigger the detection system. The only example of a self-learning,compound system(RIPPER)is interesting,since by its very nature it can accommodate varying intrusion signatures merely by being confronted by different variations on the same theme of attack.It is not known how easy or difficult it would be inpractice to expand its knowledge of the intrusive process by performing such variations.It is entirely possible that RIPPER would head in the opposite direction and end up over-specifying the attack signatures it is presented with,which would certainly lower the detection rate.Unknown intrusions The third category,unknown intrusions,contains the anomaly detectors,in particular because they are employed to differentiate between two different users.This situation can be modelled simply as the original user(stochastic process)acting in his normal fashion,and a masquerader(different stochastic process)acting intrusively,the problem then becoming a clear example of the uncertainties inherent detecting an unknown intrusion against a background of normal behaviour.This is perhaps the only case where this scenario corresponds directly to a security violation,however.In the general case we wish to employ anomaly detectors to detect intrusions that are novel to us,i.e.where the signature is not yet known.The more advanced detectors such as the one described in[LB98]do indeed operate by differentiating between dif-ferent stochastic models.However,most of the earlier ones,that here fall under the descriptive statistics heading,operate without a source model,opting instead to learn the behaviour of the background traffic andflag events that are unlikely to emanate from the known,non-intrusive model.In the case where we wish to detect intrusions that are novel to the system,anomaly detectors would probably operate with a more advanced source model.Here the strategy offlagging be-haviour that is unlikely to have emanated from the normal(background)traffic probably results in less of a direct match.It should be noted that research in this area is still in its infancy.Ano-maly detection systems with an explicit intrusive source model would probably have interesting characteristics although we will refrain from making any predictions here.4.4.4Effect on detection and false alarm ratesIt is natural to assume that the more difficult the problem(according to the scale presented in section4.4.2),the more difficult the accurate intrusion detection decision,but it is also the case that the intrusion detection problem to be solved becomes more interesting as a result.This view is supported by the classification above,and the claims made by the authors of the classified sys-tems,where those who have used‘well known intrusions’detection are thefirst to acknowledge that modifications in the way the system vulnerability is exploited may well mean the attack goes undetected[Pax88,HCMM92,HDL90].In fact,these systems all try to generalise their signature patterns to the maximum in order to avoid this scenario,and in so doing also move towards the ‘generalisable intrusions’type of 
detector. The resulting detector may well be more robust, since less specific information about the nature of its operation is available to the attacker.

RIPPER is the only tool in this survey that can claim to take both an intrusion model and a normal behaviour model into account. Thus it is able to make a complex, threshold based decision, not two tiered as is the case with some others. These detectors will by their very nature resemble signature based systems, since we still have a very incomplete knowledge of what intrusive processes look like in computer systems. These systems will in all likelihood never detect an intrusion that is novel to them. However, by employing a complete source model, they promise the best performance with regard to detection and false alarm rates. Indeed, if the source models can be specified with sufficient (mathematical) accuracy, a theoretically optimal, or near optimal, detector can be constructed, and its optimal detection and false alarm rates calculated.

The main interest in intrusion detectors is of course in the degree to which they can correctly classify intrusions and avoid false alarms. We call this parameter effectiveness. There has not been much study of the effectiveness of intrusion detection systems to date, although in the past year some work in this field has been presented [LGG98, WFP99]. These studies are still weak when it comes to the classification both of intrusions and of signal and channel models, so it is difficult to draw any conclusions from them. The one paper which is both strong on the modelling and the claims it makes is [HL93], and interestingly the work presented there suggests that some types of intrusion detectors—those that fall into the anomaly—self-learning—non-time series category above—could have a difficult time living up to realistic effectiveness goals, as claimed in [Axe99].
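Effectiveness, in the sense of correctly classifying intrusions while avoiding false alarms, can be illustrated with a short calculation. The rates below are invented for illustration; the point, in line with the base-rate argument referred to above, is that when genuine intrusions are rare the false alarm rate, not the detection rate, dominates how believable each alarm is.

```python
# Bayes' rule applied to an intrusion detector (illustrative numbers).
p_intrusion = 1e-5        # assumed prior probability that an audit record is intrusive
detection_rate = 0.7      # P(alarm | intrusion), assumed
false_alarm_rate = 1e-3   # P(alarm | no intrusion), assumed

p_alarm = detection_rate * p_intrusion + false_alarm_rate * (1 - p_intrusion)
p_intrusion_given_alarm = detection_rate * p_intrusion / p_alarm

print(f"P(alarm)             = {p_alarm:.6f}")
print(f"P(intrusion | alarm) = {p_intrusion_given_alarm:.4f}")
# With these numbers fewer than one alarm in a hundred corresponds to a real
# intrusion, which is why the false alarm rate, rather than the detection
# rate, tends to limit operational effectiveness.
```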
Visual Recognition Systems (English Composition)

Visual recognition systems are becoming increasingly important in our daily lives. From facial recognition on our smartphones to security systems in public places, these systems are designed to identify and verify individuals based on their visual features. The technology behind these systems has advanced rapidly in recent years, allowing for more accurate and efficient recognition.

One example of a visual recognition system that I encounter regularly is the facial recognition feature on my smartphone. This system allows me to unlock my phone with just a glance, making it convenient and secure at the same time. I no longer need to remember complex passwords or use my fingerprint to unlock my phone, which saves me time and hassle.

Another example is the use of visual recognition systems in public places, such as airports and train stations. These systems can quickly identify individuals and compare them to a database of known suspects or persons of interest. This helps to enhance security and prevent potential threats, making public spaces safer for everyone.

Visual recognition systems are also being used in the retail industry to track customer behavior and preferences. For example, some stores use facial recognition to analyze customer demographics and shopping habits, allowing them to tailor their marketing strategies and product offerings accordingly.

Overall, visual recognition systems have greatly improved our daily lives in terms of convenience, security, and personalized experiences. As the technology continues to advance, we can expect to see even more innovative applications in the future.
A review on computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure

Christian Koch a,*, Kristina Georgieva b, Varun Kasireddy c, Burcu Akinci c, Paul Fieguth d

a Dept. of Civil Engineering, University of Nottingham, Nottingham, NG7 2RD, United Kingdom
b Chair of Computing in Engineering, Ruhr-Universität Bochum, Universitätsstraße 150, 44801 Bochum, Germany
c Dept. of Civil and Environmental Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, United States
d Dept. of Systems Design Engineering, Faculty of Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada

* Corresponding author at: The University of Nottingham, Faculty of Engineering, Department of Civil Engineering, Room B27 Coates Building, University Park, Nottingham NG7 2RD, UK. Tel.: +44 (0)115 846 8933. E-mail address: christian.koch@ (C. Koch). URL: /civeng (C. Koch).

Article history: Received 22 September 2014; Received in revised form 17 December 2014; Accepted 22 January 2015; Available online xxxx. doi: 10.1016/j.aei.2015.01.008

Keywords: Computer vision; Infrastructure; Condition assessment; Defect detection; Infrastructure monitoring

Abstract

To ensure the safety and the serviceability of civil infrastructure it is essential to visually inspect and assess its physical and functional condition. This review paper presents the current state of practice of assessing the visual condition of vertical and horizontal civil infrastructure; in particular of reinforced concrete bridges, precast concrete tunnels, underground concrete pipes, and asphalt pavements. Since the rate of creation and deployment of computer vision methods for civil engineering applications has been exponentially increasing, the main part of the paper presents a comprehensive synthesis of the state of the art in computer vision based defect detection and condition assessment related to concrete and asphalt civil infrastructure. Finally, the current achievements and limitations of existing methods as well as open research challenges are outlined to assist both the civil engineering and the computer science research community in setting an agenda for future research.

© 2015 Elsevier Ltd. All rights reserved.

1. Introduction

Manual visual inspection is currently the main form of assessing the physical and functional conditions of civil infrastructure at regular intervals in order to ensure the infrastructure still meets its expected service requirements. However, there are still a number of accidents that are related to insufficient inspection and condition assessment. For example, as a result of the collapse of the I-35W Highway Bridge in Minneapolis (Minnesota, USA) in 2007, 13 people died and 145 people were injured [1]. In the final accident report the National Transportation Safety Board identified major safety issues including, besides others, the lack of inspection guidance for conditions of gusset plate distortion, and inadequate use of technologies for accurately assessing the condition of gusset plates on deck truss bridges. A different, less tragic example is the accident of a freight train in the Rebunhama Tunnel in Japan in 1999 that resulted in people losing trust in the safety and durability of tunnels. According to [2], the failure to detect shear cracks had resulted in five pieces of concrete blocks, as large as several tens of centimeters, which had fallen onto the track, causing the train to derail.

In order to prevent these kinds of accidents it is essential to continuously inspect and assess the physical and functional condition of civil infrastructure to ensure its safety and serviceability. Typically, condition assessment procedures are performed manually by certified inspectors and/or structural engineers, either at regular intervals (routine inspection) or after disasters (post-disaster inspection). This process
includes the detection of the defects and damage (cracking, spalling, defective joints, corrosion, potholes, etc.) existing on civil infrastructure elements, such as buildings, bridges, tunnels, pipes and roads, and the defects' magnitude (number, width, length, etc.). The visual inspection and assessment results help agencies to predict future conditions, to support investment planning, and to allocate limited maintenance and repair resources, and thus ensure the civil infrastructure still meets its service requirements.

This review paper starts with the description of the current practices of assessing the visual condition of vertical and horizontal civil infrastructure, in particular of reinforced concrete bridges (horizontal: decks, girders; vertical: columns), precast concrete tunnels (horizontal: segmental lining), underground concrete pipes (horizontal) (wastewater infrastructure), and asphalt pavements (horizontal). In order to motivate the potential of computer vision, this part focuses on answering the following questions: (1) what are the common visual defects that cause damage to civil infrastructure; (2) what are the typical manual procedures to detect those defects; (3) what are the limitations of manual defect detection; (4) how are the defects measured; and (5) what tools and metrics are used to assess the condition of each infrastructure element.

Due to the availability of low cost, high quality and easy-to-use visual sensing technologies (e.g. digital cameras), the rate of creation and deployment of computer vision methods for civil engineering applications has been exponentially increasing over the last decade. Computer vision modules, for example, are becoming an integral component of modern Structural Health Monitoring (SHM) frameworks [3]. In this regard, the second and largest part of the paper presents a comprehensive synthesis of the state of the art in computer vision based defect detection and condition assessment of civil infrastructure. In this respect, this part explains and tries to categorize several state-of-the-art computer vision methodologies, which are used to automate the process of defect and damage detection. Basically, these methods are built upon common image processing techniques, such as template matching, histogram transforms, background subtraction, filtering, edge and boundary detection, region growing, texture recognition, and so forth. It is shown how these techniques have been used, tested and evaluated to identify different defect and damage patterns in remote and close-up images of concrete bridges, precast concrete tunnels, underground concrete pipes and asphalt pavements.

The third part summarizes the current achievements and limitations of computer vision for infrastructure condition assessment. Based on that, open research challenges are outlined to assist both the civil engineering and the computer science research community in setting an agenda for future research.

2. State of practice in visual condition assessment

This section presents the state of practice in visual condition assessment of reinforced concrete bridges, precast concrete tunnels, underground concrete pipes and asphalt pavements.

2.1. Reinforced concrete bridges

As per US Federal Highway Administration
(FHWA)'s recent bridge element inspection manual [4], during a routine inspection of a reinforced concrete (RC) bridge, it is mandatory to identify, measure (if necessary) and record information related to damage and defects, such as delamination/spall/patched area, exposed rebar, efflorescence/rust staining, cracking, abrasion/wear, distortion, settlement and scouring. While this list of defects comprises the overall list for common RC bridge element categories, such as decks and slabs, railings, superstructure, substructure, culverts and approach ways, not all defects are applicable to all components.

Table 1 highlights which defects are applicable to which components and hence need to be checked for each type of component on a bridge. While some of the stated defects are visually detected, some others of them may require physical measurements for accurate documentation and assessment. The size of the defect is an important factor in deciding if it is necessary to go beyond the visual approach.

Table 1: Defects related to general bridge elements (grey: required; white: not required) [4]. The grid of required/not-required entries is not reproduced here. Abbreviations: Del/Spall – Delamination/Spall/Patched area; Exp Rebar – Exposed Rebar; Eff/Rust – Efflorescence/Rust Staining; Crack – Cracking; Abr/Wr – Abrasion/Wear; Distor – Distortion; Settle – Settlement; Scour – Scouring.

In addition to the list of defects stated above, FHWA also mandates that all bearings should be checked during inspection, irrespective of the material type and functional type of the bridge. Some of the relevant defects for bearings are corrosion, connection problems, excessive movement, misalignment, bulging, splitting and tearing, loss of bearing area, and damage. Furthermore, for seals and joints, inspectors focus on a specific set of defects, such as leakage, adhesion loss, seal damage, seal cracking, debris impaction, poor condition of adjacent deck, and metal deterioration or damage. While most of these defects can be detected visually, assessing the severity of the defects however needs close-up examination and measurements with suitable tools and equipment.

All of the existing defects on a bridge are categorized on a scale of 1–4, each corresponding to the condition state of a particular element (1 - Good, 2 - Fair, 3 - Poor, and 4 - Severe). The condition state is an implicit function of severity and extent of a defect on a component. Though such categorization of condition states provides uniformity for each component and effects, the actual assessment that results in such categorization can be subjective. Table 2 provides some examples of guidelines provided in [4] for categorization of the condition states of different defects. Please refer to Appendix D2.3 in [4] for the complete list of guidelines for all defects.

There are typically three ways to perform manual inspection for concrete bridge elements: visual, physical and advanced. A combination of these methods is required depending on the condition of the bridge member under consideration. During visual inspection, an inspector focuses on surface deficiencies, such as cracking, spalling, rusting, distortion, misalignment of bearings and excessive deflection. Usually, the inspector can visually detect most of the relevant defects, provided there is suitable access to the bridge element. However, visual inspections might not be adequate during the assessment of some specific defects. For example, an inspector can identify visually that there is delamination when looking at a patch of concrete, but would not be able to gauge the extent and depth of it accurately by
just visual inspection.Visual inspections,without uti-lization of any other inspection techniques,are also known to be subjective which might result in unreliable results [5,6].In contrast to the visual inspection,efforts during physical inspections are mainly towards quantifying the defects once they are identified visually.For example,to determine delamination areas in a pier or concrete deck,physical methods,namely,ham-mer sounding or chain drag may be used [7].Measurements con-cerning expansion joint openings and bearing positions are also essential during the inspection and evaluation of a bridge.In some cases,advanced inspection methods like those based on strength,sonic,ultrasonic,magnetic,electrical,nuclear,thermography,radar and radiography,are used to detect sub-surface defects or for pre-cise measurements of even surface defects [22].2.2.Precast concrete tunnelsPrecast concrete tunnels are one example of civil infrastructure components that are becoming increasingly important when developing modern traffic concepts worldwide.However,it is com-monly known that numerous tunnels,for example in the US,are more than 50years old and are beginning to show signs of dete-rioration,in particular due to water infiltration [8].In order to sup-port owners in operating,maintaining,inspecting and evaluating tunnels,the US Federal Highway Administration (FHWA),for example,has provided a Tunnel Operations,Maintenance,Inspec-tion and Evaluation (TOMIE)Manual [8]and a Highway and Rail Transit Tunnel Inspection Manual [9]that promote uniform and consistent guidelines.In addition,Best Practices documents sum-marize the similarities and differences of tunnel inspection proce-dures among different US federal states and transportation agencies [10].There are different types of tunnel inspections:initial,routine,damage,in-depth and special inspections [8].Routine inspections usually follow an initial inspection at a regular interval of five years for new tunnels and two years for older tunnels,depending on con-dition and age.According to [9],inspections should always be accomplished by a team of inspectors,consisting of registered pro-fessional engineers with expertise in civil/structural,mechanical,and electrical engineers,as both structural elements and functional systems have to be assessed.However,the focus of this review is on civil and structural condition assessment of precast concrete tunnels.Accessing the various structural elements for up-close visual inspection requires specific equipment and mon-ly,dedicated inspection vehicles,such as Aerial bucket trucks and rail-mounted vehicles,equipped with,for example,cameras (used for documentation),chipping hammers (used to sound concrete),crack comparator gauges (used to measure crack widths),and inspection forms (used to document stations,dates,liner types,defect locations and condition codes),are driven through the tun-nel and permit the inspectors to gain an up-close,hands-on view of most of the structural elements.More recently,integrated and vehicle-mounted scanning sys-tems have entered the market.For example,the Pavemetrics Laser Tunnel Scanning System (LTSS)uses multiple high-speed laser scanners to acquire both 2D images and high-resolution 3D pro-files of tunnel linings at a speed of 20km/h [11].Once digitized the tunnel data can be viewed and analyzed offline by operators using multi-resolution 3D viewing and analysis software that allows for high-precision measurement of virtually any tunnel fea-ture.A different system is the Dibit 
tunnel scanner that is manually moved through the tunnel [12]. It provides an actual comprehensive visual and geometrical image of the recorded tunnel surface. The corresponding tunnel scanner software allows easy, quick and versatile data evaluations to visualize the inspected tunnel and manually assess its condition.

According to [9], visual inspection must be made on all exposed surfaces of the structural (concrete) elements (e.g. precast segmental liners, placed concrete, slurry walls), and all noted defects have to be documented for location and measured to determine the scale of severity (Table 3). Based on the amount, type, size, and location of defects found on the structural element as well as the extent to which the element retains its original structural capacity, elements are individually rated using a numerical rating system of 0–9, 0 being the worst condition (critical, structure is closed and beyond repair) and 9 being the best condition (new construction) [9].

Table 2: Examples of defects and guidelines for assessment of condition states [4].
Delamination/Spall/Patched Area – State 1 (Good): none. State 2 (Fair): spall < 1 in depth or < 6 in diameter. State 3 (Poor): spall > 1 in and > 6 in diameter; unsound patched area or signs of distress. State 4 (Severe): situation worse than for Condition State 3 and if the inspector deems that it might affect the strength or serviceability of the element.
Efflorescence/Rust staining – State 1: none. State 2: surface white without build-up, or leaching without rust staining. State 3: heavy build-up with rust staining. State 4: situation worse than for Condition State 3 and if the inspector deems that it might affect the strength or serviceability of the element.
Cracking – State 1: width < 0.012 in or spacing > 3 ft. State 2: width 0.012–0.05 in or spacing 1–3 ft. State 3: width > 0.05 in or spacing < 1 ft. State 4: situation worse than for Condition State 3 and if the inspector deems that it might affect the strength or serviceability of the element.
Abrasion/Wear – State 1: no abrasion or wear. State 2: abrasion or wearing has exposed coarse aggregate but the aggregate remains secure in the concrete. State 3: coarse aggregate is loose or has popped out of the concrete matrix due to abrasion or wear. State 4: situation worse than for Condition State 3 and if the inspector deems that it might affect the strength or serviceability of the element.

2.3. Underground concrete pipes

There is a great deal of buried infrastructure in modern cities, most of which appears to be out-of-sight and out-of-mind. Thus, whereas the number of cracks or depths of potholes in asphalt and concrete pavements may very well be the subject of water-cooler conversation, an interest in or an awareness of the state of underground sewage pipes is quite far removed from the perception of most citizens. However, there are two key attributes that motivate attention to underground infrastructure:

1. Being buried, the infrastructure is challenging to inspect.
2. Being buried, the infrastructure is very expensive to fix or replace.

Indeed, the costs associated with sewage infrastructure modernization or replacement are staggering, with dollar figures quoted in the range of one or more trillion dollars [13]. There is, however, a strong incentive to undertake research and to develop sophisticated methods for underground concrete pipe inspection, due to the huge cost gap between trenchless approaches and the far more expensive digging up and replacement. The North American Society for Trenchless Technology and corresponding No-Dig conferences worldwide demonstrate the widespread interest in this strategy, dating back many years
[14].Direct human inspection,which is possible,at least in principle,for above-ground exposed infrastructure such as tunnels and road surfaces,is simply not possible for sewage pipes because of their relatively small size and buried state.Thus there has long been interest [15]in automated approaches,normally a small remote-ly-controlled vehicle with a camera.A sewage pipe would normally be classified [16]into anticipat-ed structures,Undamaged pipe.Pipe joints (connections between pipe segments). Pipe laterals (connections to other pipes).And some number of unanticipated problem classes:Cracks.Mushroom cracks (networks of multiple,intersecting cracks,a precursor to collapse). Holes.Damaged/eroded laterals or joints. Root intrusion. Pipe collapse.In common with other forms of infrastructure,the primary challenge to sewage pipe inspection is the tedium of manual examination of many hours of camera data,exacerbated by the sheer physical extent of the infrastructure which,in the case of sewer pipes,exceeds 200,000km in each of the UK,Japan,Ger-many,and the US [17].There are,however,a few attributes unique to sewage pipe inspection:Lighting is typically poor,since the only light available is that provided by the inspecting vehicle,and any forward-lookingcamera sees a well-lit pipe at the sides transitioning to com-pletely dark ahead.Sewage pipes are subject to extensive staining and background patterning that can appear as very sudden changes in color or shade,giving the appearance of a crack.Since the focus of this paper is on the computer vision analysis techniques,this following overview of data acquisition is brief,and the reader is referred to substantial review papers [15,17,18,19].Closed circuit television (CCTV)[15,17,20,18,19,21–24]is the most widespread approach to data collection for sewage pipe inspec-tion;nevertheless the sewer infrastructure which has been imaged amounts only to a miniscule fraction of perhaps a few percent [19].Because the most common approach is to have a forward-looking camera looking down the pipe,the CCTV method suffers from drawbacks of geometric distortion,a significant drawback in auto-mated analysis.Sewer scanner and evaluation technology (SSET)[15,16,25,26,19]represents a significant step above CCTV imaging.The pipe is scanned in a circular fashion,such that an image of a flattened pipe is produced with very few distortions and is uni-formly ser profiling [17,27,28,20]is similar to the SSET approach,in that a laser scans the pipe surface circularly,with an offset camera observing the laser spot and allowing the three-dimensional surface geometry of the pipe to be constructed via triangulation.There are a few further strategies,albeit less common,for sewer pipe inspection.A SONAR approach [15,28,19]has been proposed for water-filled pipes,where most visual approaches will fail,par-ticularly if the water is not clear.Ultrasound methods [29–31,17],widely used to assess cracks in above-ground pipes,have been pro-posed to allow an assessment of crack depth,which is difficult to infer from visual images.Infrared Thermography [15,17,19]relies on the fact that holes,cracks,or water intrusion may affect the thermal behavior of the pipe and therefore be revealed as a ther-mal signature.Finally,ground penetrating radar [15,17]allows the buried pipe to be studied from the surface,without the clutter and challenges of driving robots in buried pipes,but at a very sig-nificant reduction in resolution and contrast.Because of rather substantial cost associated with data 
acquisition of sewer pipes, there is significant interest in maximizing the use of data. Prediction methods [32–35] develop statistical, neural, or expert system deterioration models to predict pipe state, over time, on the basis of earlier observations.

Table 3: Common civil/structural defects of concrete tunnels and respective severity scales according to [9].
Scaling – Minor: < 6 mm deep. Moderate: 6–25 mm deep. Severe: > 25 mm deep.
Cracking – Minor: < 0.80 mm. Moderate: 0.80–3.20 mm, or < 0.10 mm for a pre-stressed member. Severe: > 3.20 mm, or > 0.10 mm for a pre-stressed member.
Spalling/Joint Spall – Minor: < 12 mm deep or 75–150 mm in diameter. Moderate: 12–25 mm deep or about 150 mm in diameter. Severe: > 25 mm deep or > 150 mm in diameter.
Pop-Outs (holes) – Minor: < 10 mm in diameter. Moderate: 10–50 mm in diameter. Severe: 50–75 mm in diameter (> 75 mm are spalls).
Leakage – Minor: wet surface, no drops. Moderate: active flow at a volume < 30 drips per minute. Severe: active flow at a volume > 30 drips per minute.

2.4. Asphalt pavements

As reported by the American Society of Civil Engineers (ASCE), pavement defects, also known as pavement distress, cost US motorists $67 billion a year for repairs [36]. Therefore, the road surface should be evaluated and defects should be detected in a timely manner to ensure traffic safety. Condition assessment of asphalt pavements is essential to road maintenance.

There exist several techniques to detect distress in asphalt pavements. These techniques differ in the pavement data which is being collected and in the way this data is processed. Sensor-based techniques utilize devices to measure parameters of the pavement surface. Visual-based techniques make use of observations of the pavement surface to identify anomalies that indicate distress. Depending on the way of processing data, techniques are classified as purely manual, semi-automated or automated [37]. Manual processing is entirely performed by experts, while semi-automated and automated techniques require little or no human intervention.

Visual-based techniques consist in manually inspecting the road surface or employing digital images and computing devices to assess the pavement condition. In the case of manual inspection, trained personnel walk over the road shoulder and rate the pavement condition according to distress identification manuals. The disadvantage of this technique is that it is subjective despite the use of manuals, and it depends on the experience of the personnel. Also, the personnel are exposed to traffic and weather, which makes the inspection procedure hazardous. Another issue related to the manual inspection of the road surface is the time required to perform it.

To speed up the assessment process, pavement images are analyzed instead of walking on the roads. Pavement images are obtained using downward-looking video cameras mounted on sophisticated vehicles. When the images and data are analyzed by human experts, the process of assessing the pavement condition is semi-automated. However, the rating of the pavement still depends on the experience of the analyzer and the subjectivity issue remains.

Most distress detection techniques, regardless of whether they are manual, semi-automated or automated, depend on the pavement distress type. Pavement distress varies in its form and extent. Commonly, distress is characterized as alligator cracking, bleeding, block cracking, depression, longitudinal or transverse cracking, patches, potholes, rutting, raveling and more. The U.S. Army Corps of Engineers, for example, distinguishes between 19 types of distress [38]. Distress types and measurements are defined in visual pavement distress identification manuals.
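Severity scales of this kind, whether the tunnel scale quoted in Table 3 above or the per-distress severity levels defined in the pavement manuals, are essentially range checks, which makes them straightforward to encode. The function below is a sketch that maps a measured crack width on a concrete tunnel lining to the Minor/Moderate/Severe scale of Table 3; the function name and interface are assumptions made for illustration, not part of any cited manual.

```python
def tunnel_crack_severity(width_mm: float, pre_stressed: bool = False) -> str:
    """Classify a measured crack width against the severity scale of Table 3."""
    if pre_stressed:
        # Table 3 treats any measurable crack in a pre-stressed member as at
        # least Moderate, and anything wider than 0.10 mm as Severe.
        return "Severe" if width_mm > 0.10 else "Moderate"
    if width_mm < 0.80:
        return "Minor"
    if width_mm <= 3.20:
        return "Moderate"
    return "Severe"


if __name__ == "__main__":
    for width in (0.5, 1.6, 4.0):
        print(f"crack width {width:.2f} mm -> {tunnel_crack_severity(width)}")
```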
Some of these measurements and indices vary between different countries and federal states. Table 4 presents examples of defect assessment measurements and condition indices defined in such manuals [39–43]. As can be seen, severity and extent are present in most of the manuals. The common procedure to obtain the extent value is to count the occurrences of the different severity levels for each type of distress for the whole segment and convert the amount of distress into a distress percentage.

Condition assessment indices are calculated based on the distress measurements. Several pavement condition assessment indices have been developed, and the procedures of their calculation are described in visual distress identification manuals. For instance, the pavement condition index (PCI) is widely used. The pavement condition index is a statistical measure of the pavement condition developed by the US Army Corps of Engineers [38]. It is a numerical value that ranges from 0 to 100, where 0 indicates the worst possible condition and 100 represents the best possible condition. A verbal description of the pavement condition can be defined depending on the PCI value. This description is referred to as the pavement condition rating (PCR). PCR classifies the pavement condition as failed, serious, very poor, poor, fair, satisfactory or good.

3. Computer vision methods for defect detection and assessment

This section presents a comprehensive synthesis of the state of the art in computer vision based defect detection and assessment of civil infrastructure. In this respect, this part explains and tries to categorize several state-of-the-art computer vision methodologies, which are used to automate the process of defect and damage detection as well as assessment. Fig. 1 illustrates the general computer vision pipeline starting from low-level processing up to high-level processing (Fig. 1, top). Correspondingly, the bottom part of Fig. 1 categorizes specific methods for the detection, classification and assessment of defects on civil infrastructure into pre-processing methods, feature-based methods, model-based methods, pattern-based methods, and 3D reconstruction. These methods, however, cannot be considered fully separately. Rather, they build on top of each other. For example, extracted features are learned to support the classification process in pattern-based methods. Subsequently, it is shown how these methodologies have been used, tested and evaluated to identify different defect and damage patterns in remote and close-up images of concrete bridges, precast concrete tunnels, underground concrete pipes and asphalt pavements.
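As an illustration of the low-level end of this pipeline (filtering, thresholding and simple morphology, as listed above), the sketch below extracts dark, elongated regions from a greyscale image of a concrete surface and reports their bounding boxes. It uses OpenCV and is a generic example of the kind of pre-processing and feature extraction surveyed in this section, not the method of any particular cited work; the threshold, kernel size and elongation criterion are arbitrary assumptions.

```python
import cv2


def find_crack_candidates(image_path: str, min_area_px: int = 50):
    """Return bounding boxes of dark, elongated regions that may be cracks."""
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        raise FileNotFoundError(image_path)

    # Smooth out surface texture, then emphasise locally dark pixels.
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)
    binary = cv2.adaptiveThreshold(
        blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 35, 10)

    # Morphological closing joins broken crack fragments.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Connected components; keep only sufficiently large, elongated blobs.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    candidates = []
    for i in range(1, n_labels):  # label 0 is the background
        x, y, w, h, area = stats[i]
        elongation = max(w, h) / max(1, min(w, h))
        if area >= min_area_px and elongation >= 3:
            candidates.append((int(x), int(y), int(w), int(h), int(area)))
    return candidates
```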
3.1. Reinforced concrete bridges

Much of the research in defect detection and assessment using computer vision methods for RC bridges has largely focused on cracks, and to some extent on spalling/delamination and rusting. Many of these research studies targeted and contributed successfully to the automation of detection and measurement of defects. More studies need to be done to improve the methods used for automatic assessment, as they are currently based on several assumptions.

In addition to cracks, there are also other defects that are essential to be detected and assessed in relation to an RC bridge. Being able to detect, assess and document all defects as independent entities is paramount to provide a comprehensive approach for bridge inspection. Currently, some of the other categories of defects are being inherently detected or assessed, using computer vision methods, as part of other major dominating defects that are present. For example, some methods detect abrasion as part of the crack [44]. In other cases, such as distortion and misalignment of bearings, no automated method exists for detecting and assessing them. This clearly indicates that more research needs to be done in the direction of automating the detection and assessment of various defects. To be able to perform automatic assessment and condition rating assignment, as a first step, it is necessary to identify the relevant defect parameters to accurately and comprehensively represent the defect information. Below we will present the synthesis of the research done so far in the computer vision domain for various types of defects.

Table 4: Examples of pavement defect assessment measurements and condition indices.
Ohio [39] – Measurement: severity, extent. Index: pavement condition rating.
British Columbia [40] – Measurement: severity, density. Index: pavement distress index.
Washington [41] – Measurement: severity, extent. Index: pavement condition rating.
South Africa [42] – Measurement: degree, extent. Index: visual condition index.
Germany [43] – Measurement: extent. Index: substance value (surface).
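The extent and rating computations described in Section 2.4 have a simple shape: occurrences of each severity level are counted per distress type, converted into a percentage of the surveyed segment, and a verbal rating is attached to the resulting PCI value. The sketch below assumes a record format, a sample-unit count and PCI break-points purely for illustration; the governing manuals define the actual values.

```python
from collections import Counter
from typing import Iterable, Tuple

# A survey observation: (distress type, severity level), e.g. ("pothole", "Moderate").
Observation = Tuple[str, str]


def distress_extent(observations: Iterable[Observation], n_sample_units: int) -> dict:
    """Convert counted distress occurrences into percentages of the surveyed segment."""
    counts = Counter(observations)
    return {
        (dtype, severity): 100.0 * n / n_sample_units
        for (dtype, severity), n in counts.items()
    }


def pci_to_pcr(pci: float) -> str:
    """Attach a verbal pavement condition rating (PCR) to a PCI value.
    The break-points are assumed for illustration only."""
    bands = [(85, "good"), (70, "satisfactory"), (55, "fair"),
             (40, "poor"), (25, "very poor"), (10, "serious")]
    for lower, label in bands:
        if pci >= lower:
            return label
    return "failed"


if __name__ == "__main__":
    obs = [("pothole", "Moderate"), ("alligator cracking", "Severe"),
           ("pothole", "Moderate")]
    print(distress_extent(obs, n_sample_units=50))
    print(pci_to_pcr(62.5))  # -> "fair" with the assumed break-points
```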
Chinese Academy of Sciences English Exam Essay: The Halo Effect

The halo effect refers to the tendency for positive perceptions of a person or brand in one area to influence overall judgments or evaluations in a favorable light. This cognitive bias can have a significant impact in various contexts, including academic settings, where researchers from prestigious institutions like the Chinese Academy of Sciences may be perceived as more credible or competent simply due to their affiliation. However, it is important to be cautious of the halo effect, as it can lead to biased decision-making and unfair advantages for individuals or institutions with a strong reputation. By recognizing and addressing this bias, we can strive for more objective and equitable assessments based on merit rather than superficial associations.
Proving Your Vision: An English Composition

In the ever-evolving world, the ability to foresee and prove one's vision is a critical skill that can set individuals apart from the rest. This essay aims to explore the importance of having a vision and the steps one can take to validate and actualize it.

Introduction
A vision is a clear and compelling image of the future that guides and inspires action. It is not just a dream but a strategic direction that can be articulated and pursued. The first step in proving one's vision is to believe in it, even when others may not see its potential.

Identifying the Vision
The journey begins with self-reflection. One must identify what they are passionate about and what they wish to achieve. This could be a career goal, a social cause, or a personal ambition. The clarity of this vision is crucial, as it forms the foundation for all subsequent actions.

Research and Analysis
Once the vision is identified, it is essential to conduct thorough research. This involves understanding the current landscape, identifying trends, and analyzing potential obstacles and opportunities. Data-driven insights can provide a solid basis for the vision and help in anticipating future developments.

Developing a Strategy
With a clear understanding of the context, the next step is to develop a strategic plan. This plan should outline the steps needed to achieve the vision, including short-term and long-term goals, milestones, and a roadmap for progress. It is also important to consider alternative paths and contingencies.

Building a Support Network
A vision is rarely realized in isolation. Building a network of supporters is vital. This includes mentors, peers, and collaborators who share the same vision or can provide valuable insights and support. Networking events, social media, and professional associations can be platforms to connect with like-minded individuals.

Demonstrating Commitment
Actions speak louder than words. Demonstrating commitment to the vision involves taking concrete steps towards its realization. This could be through starting a project, initiating a campaign, or making personal sacrifices. Consistent effort and dedication are key to proving the viability of one's vision.

Adapting and Overcoming Challenges
The path to realizing a vision is often fraught with challenges. Adaptability is crucial in overcoming these obstacles. This involves being open to feedback, learning from failures, and adjusting the strategy as needed. Resilience in the face of adversity is a testament to the strength of one's vision.

Communicating the Vision
Effective communication is essential to gain support and inspire others. This involves articulating the vision in a compelling manner, highlighting its benefits, and sharing the progress made. Storytelling can be a powerful tool in making the vision relatable and inspiring to a wider audience.

Measuring Progress
Setting measurable goals and tracking progress is a practical way to prove the vision's validity. Regular evaluations can provide insights into what is working and what needs improvement. This data can be used to refine the strategy and demonstrate the vision's progress to stakeholders.

Conclusion
Proving one's vision is a dynamic process that requires belief, planning, action, and adaptability. It is a journey that can transform not only the individual but also the world around them. By following these steps, one can turn a vision into a reality and make a lasting impact.

In conclusion, the ability to prove one's vision is a testament to one's foresight, determination, and leadership. It is a skill that can lead to
personal fulfillment and societal advancement,making it an invaluable asset in todays world.。
The Detection of Orientation Continuity and Discontinuity by Cat V1 Neurons

Tao Xu1, Ling Wang1, Xue-Mei Song2*, Chao-Yi Li1,2*

1 Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China; 2 Shanghai Institutes of Biological Sciences, Chinese Academy of Sciences, Shanghai, China

Abstract

The orientation tuning properties of the non-classical receptive field (nCRF or "surround") relative to that of the classical receptive field (CRF or "center") were tested for 119 neurons in the cat primary visual cortex (V1). The stimuli were concentric sinusoidal gratings generated on a computer screen, with the center grating presented at the optimal orientation to stimulate the CRF and the surround grating presented at variable orientations to stimulate the nCRF. Based on the presence or absence of surround suppression, measured by the suppression index at the optimal orientation of the cells, we subdivided the neurons into two categories: surround-suppressive (SS) cells and surround-non-suppressive (SN) cells. When stimulated with an optimally oriented grating centered on the CRF, the SS cells showed increasing surround suppression when the stimulus grating was expanded beyond the boundary of the CRF, whereas for the SN cells, expanding the stimulus grating beyond the CRF caused no suppression of the center response. For the SS cells, the strength of surround suppression depended on the relative orientation between CRF and nCRF: an iso-orientation grating over center and surround at the optimal orientation evoked the strongest suppression, and a surround grating orthogonal to the optimal center grating evoked the weakest or no suppression. By contrast, the SN cells showed slightly increased responses to an iso-orientation stimulus and weak suppression to orthogonal surround gratings. This iso-/orthogonal orientation selectivity between center and surround was analyzed in 22 SN and 97 SS cells, and for the two types of cells, the different center-surround orientation selectivity depended on the suppressive strength of the cells. We conclude that SN cells are suitable to detect orientation continuity or similarity between CRF and nCRF, whereas SS cells are adapted to the detection of discontinuity or differences in orientation between CRF and nCRF.

Citation: Xu T, Wang L, Song X-M, Li C-Y (2013) The Detection of Orientation Continuity and Discontinuity by Cat V1 Neurons. PLoS ONE 8(11): e79723. doi:10.1371/journal.pone.0079723

Editor: Luis M. Martinez, CSIC-Univ Miguel Hernandez, Spain

Received February 25, 2013; Accepted October 4, 2013; Published November 21, 2013

Copyright: © 2013 Xu et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by the Major State Basic Research Program (2013CB329401), the Natural Science Foundations of China (90820301, 31100797 and 31000492), and the Shanghai Municipal Committee of Science and Technology (088014158, 098014026). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

*E-mail: xmsong@(XMS); cyli@(CYL)

Introduction

The responses of V1 neurons to visual stimuli presented in the classical receptive field (CRF or center) can be facilitated or suppressed by stimuli in the non-classical receptive field (nCRF or surround) [1–14]. The center responses of most V1 cells are suppressed by surround stimuli [14–16], with maximal suppressive modulation occurring when the center and surround are stimulated at the identical orientation, drifting direction and spatiotemporal frequency [7,9,11,17,18].
Suppression is weaker if the surround orientation deviates from the optimal central grating, particularly when the orientation in the surround is 90° from the preferred orientation (orthogonal) [15,19]. This effect may be linked to figure/ground segregation [7,10,20]. For some cells, facilitatory effects have also been observed when the surround stimuli match the central optimal orientation [21–23]; these effects are eliminated by orthogonal surround stimuli [21,23].

Previous studies [11,19] of the primary visual cortex of anaesthetized primates revealed that surround orientation tuning depends on the contrast of the central stimulus. Combining intrinsic signal optical imaging and single-unit recording in the V1 of anesthetized cats, a recent study [24] reported that surround orientation selectivity might also depend on the location of the neurons in the optical orientation map of V1.

In the experiments presented here, we recorded single-unit activities from the V1 of anesthetized cats and determined the CRF size and suppression index (SI) of the neurons using a size-tuning test. In this test, a high-contrast grating drifting at the optimal orientation/direction of the neuron was varied in size, and the SI of the neuron was measured from the size-tuning curve. We classified the recorded neurons into two categories according to their SI: the "surround-non-suppressive" (SN, 0 ≤ SI ≤ 0.1) and the "surround-suppressive" (SS, 0.1 < SI ≤ 1). We then performed surround orientation tuning tests. In these tests, the nCRF was stimulated by a larger circular grating (20° × 20°) that drifted randomly at different orientations/directions while a smaller optimally oriented grating was drifted within the CRF. We observed that the surround suppression was stronger at an iso- compared to an ortho-orientation for SS neurons and vice versa for SN cells. These results may imply that SN neurons detect continuity of a preferred orientation, and SS neurons detect discontinuity of a preferred orientation.

Materials and Methods

Ethics statement

The experimental data were obtained from anesthetized and paralyzed adult cats (weighing 2.0–4.0 kg each) bred in the facilities of the Institute of Neuroscience, Chinese Academy of Sciences (Permit Number: SYXK2009-0066). This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals from the National Institutes of Health. The protocol was specifically approved by the Committee on the Ethics of Animal Experiments of the Shanghai Institute for Biological Sciences, Chinese Academy of Sciences (Permit Number: ER-SIBS-621001C). All surgeries were performed under general anesthesia combined with the local application of lidocaine (for details see "Animal preparation"), and all efforts were made to minimize suffering.

Animal preparation

Acute experiments were performed on 10 cats. Detailed descriptions of the procedures for animal surgery, anesthesia, and recording techniques can be found in a previous study [25].
Briefly, the cats were anesthetized prior to surgery with ketamine hydrochloride (30 mg/kg, intravenously [iv]), and then tracheal and venous cannulations were performed. After surgery, the animal was placed in a stereotaxic frame for performing a craniotomy and neurophysiological procedures. During recording, anesthesia and paralysis were maintained with urethane (20 mg/kg/h) and gallamine triethiodide (10 mg/kg/h), respectively, and glucose (200 mg/kg/h) in Ringer's solution (3 mL/kg/h) was infused. Heart rate, electrocardiography, electroencephalography (EEG), end-expiratory CO2, and rectal temperature were monitored continuously. Anesthesia was considered sufficient when the EEG indicated a permanent sleep-like state. Reflexes, including cornea, eyelid, and withdrawal reflexes, were tested at appropriate intervals. Additional urethane was given immediately if necessary. The nictitating membranes were retracted, and the pupils were dilated. Artificial pupils of 3 mm diameter were used. Contact lenses and additional corrective lenses were applied to focus the retina on a screen during stimulus presentation. At the end of the experiment, the animal was sacrificed by an overdose of barbiturate administered iv.

Single-unit recording

The recordings were performed with a multi-electrode tungsten array from MicroProbes, consisting of 2 × 8 channels with 250 μm between neighboring channels. The impedance of the electrodes was 5 MΩ on average (as specified by the array's manufacturer). The signals were recorded using the Cerebus system. Spike signals were band-pass filtered at 250–7500 Hz and sampled at 30 kHz. Only well-isolated cells satisfying the strict criteria for single-unit recordings (a fixed action-potential shape and the absence of spikes during the absolute refractory period) were collected for further analyses. We mapped and optimized the stimuli for each individual neuron independently and ignored the data simultaneously collected from other neurons. Recordings were made mainly from layers 2/3 and 4.
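As a rough illustration of the spike-band conditioning described above (250–7500 Hz band-pass on data sampled at 30 kHz), the sketch below applies a zero-phase Butterworth filter to a raw voltage trace. This is not the authors' Cerebus pipeline; the filter order, the use of SciPy, and the simulated input are assumptions made for illustration only.

```python
# Minimal sketch of spike-band filtering (assumed implementation, not the
# authors' recording software). Band edges and sampling rate follow the text.
import numpy as np
from scipy import signal

FS = 30_000                  # sampling rate, Hz (as stated in the text)
LOW, HIGH = 250.0, 7500.0    # spike band, Hz (as stated in the text)

def spike_band(raw: np.ndarray, fs: float = FS) -> np.ndarray:
    """Zero-phase band-pass filter a raw voltage trace into the spike band."""
    sos = signal.butter(4, [LOW, HIGH], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, raw)

# Example: filter one second of simulated raw data from a single channel.
raw = np.random.randn(FS)
filtered = spike_band(raw)
```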
Visual stimulation

The visual stimuli were generated by a Cambridge Systems VSG graphics board. The stimuli were patches of drifting sinusoidal gratings presented on a high-resolution monitor screen (40 × 30 cm) at a 100 Hz vertical refresh rate. The screen was maintained at the identical mean luminance as the stimulus patches (10 cd/m2). The monitor was placed 57 cm from the cat's eyes. All cells recorded were obtained from the area of the cortex that represented the central 10° of the visual field. When the single-cell action potentials were isolated, the preferred orientation, drifting direction, spatial frequency and temporal frequency of each cell were determined. Each cell was stimulated monocularly through the dominant eye with the non-dominant eye occluded.

To locate the center of the CRF, a narrow rectangular sine-wave grating patch (0.5°–1.0° wide × 3.0°–5.0° long, at 40% contrast) was moved to successive positions along axes perpendicular or parallel to the optimal orientation of the cell, and the response to its drift was measured. The grating was set at the optimal orientation and spatial frequency and drifted in the preferred direction at the optimal speed for the recorded cells. The peak of the response profiles for both axes was defined as the center of the CRF. We further confirmed that the stimulus was positioned in the center of the receptive field by performing an occlusion test, in which a mask consisting of a circular blank patch concentric with the CRF was gradually increased in size on a background drifting grating [15,25,26]. If the center of the CRF was accurately determined, the mask curve should begin at the peak, and the response should decrease as more of the receptive field is masked. If the curve obtained with the mask did not begin at the peak value, we considered the stimulus to be offset in relation to the receptive field center, and the position of the receptive field was reassessed.

In the size-tuning tests, circular sinusoidal gratings (40% contrast) were centered over the receptive field center and randomly presented with different diameters (from 0.1° to 20°).
The optimized values for these parameters (orientation, spatial and temporal frequency) were used in these tests. Each grating size was presented for 5–10 cycles of the grating drift, and standard errors were calculated for 3–10 repeats. We defined the CRF size as the aperture size of the peak response diameter (the stimulus diameter at which the response was maximal if the responses decreased at larger stimulus diameters, or the diameter at which the response reached 95% of the peak value if they did not) [27,28]. Contrast in the subsequent experiments was selected to elicit responses that reached approximately 90% of the saturation response for each cell, as determined from the center (CRF) contrast-response function. Our contrast had a range of 20–70% and a mode of 40%.

To measure the orientation tuning of the cell's surround, the optimal orientation, aperture, and spatiotemporal frequencies for the center stimulus remained constant. Directly abutting the outer circumference of the center stimulus was a surround grating (with an outer diameter of 20°) of the identical phase and spatial and temporal frequencies. Whereas the center stimulus was maintained at the preferred orientation/direction throughout the experiment, the surround stimulus was shown at variable orientations/directions (in 15° increments). Both the center and surround stimuli were shown at a high contrast (with a range of 20–70% and a mode of 40%). The responses to each patch were recorded for 5–10 cycles of the grating drift, and standard errors were calculated for 3–10 repeats.

The cells were classified as "simple" if the first harmonic (F1) of their response to the sine-wave gratings was greater than the mean firing rate (F0) of the response (F1/F0 ratio > 1) or "complex" if the F1/F0 ratio was < 1 [29].

Data analysis

The size-tuning curves for all recorded cells were fit using a DOG model [9]. In this model, the narrower positive Gaussian represented the excitatory center (the CRF), whereas the broader negative Gaussian represented the suppressive surround (the nCRF). The two Gaussians were considered to be concentrically overlapping, and the summation profile could be represented as the difference of the two Gaussian integrals. The model was defined by the following equation:

R(s) = R_0 + K_e \int_{-s/2}^{s/2} e^{-(2y/a)^2}\,dy - K_i \int_{-s/2}^{s/2} e^{-(2y/b)^2}\,dy

where R_0 is the spontaneous firing rate, and each integral represents the relative contribution from putative excitatory and inhibitory components, respectively. The excitatory Gaussian is described by its gain, K_e, and a space constant, a, and the inhibitory Gaussian by its gain, K_i, and space constant, b. All population values are provided below as the mean ± SEM.
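Each integral in the DOG model above has the closed form ∫_{-s/2}^{s/2} e^{-(2y/c)^2} dy = (c√π/2)·erf(s/c), which makes the curve straightforward to fit by least squares. The sketch below shows one possible way to do this; the use of SciPy's curve_fit, the initial guesses, and the example data points are assumptions made for illustration and are not the authors' fitting procedure.

```python
# Sketch of fitting the difference-of-Gaussian-integrals (DOG) size-tuning
# model; parameter names follow the equation in the text.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def dog_response(s, r0, ke, a, ki, b):
    """R(s) = r0 + ke * int exp(-(2y/a)^2) dy - ki * int exp(-(2y/b)^2) dy,
    with each integral over [-s/2, s/2] evaluated as (c*sqrt(pi)/2)*erf(s/c)."""
    exc = ke * (a * np.sqrt(np.pi) / 2) * erf(s / a)
    inh = ki * (b * np.sqrt(np.pi) / 2) * erf(s / b)
    return r0 + exc - inh

# Example fit to one cell's size-tuning curve (diameters in degrees, responses
# in spikes/s); the numbers below are invented for illustration only.
diam = np.array([0.1, 0.5, 1, 2, 3, 4, 6, 8, 12, 16, 20])
resp = np.array([5, 12, 30, 48, 52, 45, 38, 33, 30, 29, 29])
p0 = [resp.min(), 50.0, 2.0, 20.0, 8.0]   # initial guesses: r0, ke, a, ki, b
popt, _ = curve_fit(dog_response, diam, resp, p0=p0, maxfev=20000)
```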
Results

The data were obtained from 119 neurons in the striate cortex of 10 cats. Of these neurons, 46 were simple cells and 73 were complex cells. The results did not differ significantly between these two classes.

The determination of the extent of the classical receptive field and the strength of surround suppression

The spatial extent of the classical receptive field (CRF) and the strength of surround suppression were determined using size-tuning tests (see Materials and Methods). Two patterns of behavior were distinguished based on the presence or absence of surround suppression. Fig. 1 shows the size-response curves of two representative cells. For some cells, the spike rate rose with increasing stimulus diameter and reached an asymptote (cell A). For other cells, the responses rose to a peak and then decreased as the stimulus diameter further increased (cell B). The peak response diameter (as in Fig. 1B) or the diameter of the saturation point (reaching 95% of the peak value, as in Fig. 1A) was defined as the CRF size. Surround suppression was present for many of the cells tested, although its strength varied substantially between cells. We quantified the degree of surround suppression for each cell using the SI (1 − asymptotic response/peak response). Across the population, suppression equal to or less than 10% was observed in 18.5% (22/119) of the cells tested, and these cells were assigned to the "surround-non-suppressive" (SN) category. Cells that exhibited suppression greater than 10% were categorized as "surround-suppressive" (SS), and these comprised 81.5% (97/119) of the population.
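The sketch below illustrates how the CRF size and suppression index defined above could be computed from a (fitted) size-tuning curve, together with the SN/SS criterion used in the text. Taking the asymptote as the response at the largest diameter, and evaluating on the sampled curve rather than the continuous fit, are simplifying assumptions of this sketch.

```python
# Sketch (assumed helper names) of the CRF-size / SI computation and the
# SN vs. SS classification described in the text.
import numpy as np

def crf_size_and_si(diameters, responses):
    """Return (CRF size, suppression index) from a size-tuning curve.
    CRF size: diameter of the peak response, or the smallest diameter reaching
    95% of the peak when the curve saturates without a peak.
    SI = 1 - asymptotic response / peak response."""
    responses = np.asarray(responses, dtype=float)
    peak = responses.max()
    asymptote = responses[-1]            # response at the largest diameter
    si = 1.0 - asymptote / peak
    if si > 0.0:                         # curve has a genuine peak
        crf = diameters[int(np.argmax(responses))]
    else:                                # monotonic: use the 95%-of-peak point
        crf = diameters[int(np.argmax(responses >= 0.95 * peak))]
        si = 0.0
    return crf, si

def classify(si):
    """SN: 0 <= SI <= 0.1; SS: 0.1 < SI <= 1 (criterion stated in the text)."""
    return "SN" if si <= 0.1 else "SS"
```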
The relationship between surround orientation selectivity and the degree of surround suppression

We initially determined the contrast-response function of the classical receptive field (CRF) for each individual neuron, and then a high contrast that elicited a response of 90% of the saturation response (with a range of 20–70% and a mode of 40%) was selected to determine the neuron's preferred orientation by stimulating the CRF alone. We then varied the orientation of the surround grating and held the CRF stimulus at the cell's preferred orientation/direction. Altogether, we presented 24 surround orientations/directions shifted in 15° increments.

Figure 2 shows four representative examples of the surround orientation tuning curves when the central stimulus was at the preferred orientation/direction. The cells in Fig. 2A and B are identical to those in Fig. 1A and B. The solid circles represent the center-alone condition, and the open circles represent the orientation tuning measured with the surround grating when the center grating was optimal. The abscissa indicates the stimulus orientation relative to the neuron's preferred orientation/direction; 0 on the abscissa indicates that the gratings were oriented at the optimal orientation and drifted in the optimal direction. The cell in Fig. 2A possessed no surround suppression (SI = 0) in the size-tuning test (see Fig. 1A). A slight facilitation was seen on the orientation tuning curve when the surround grating was drifting in the neuron's preferred orientation/direction (0 position on the x-axis), and a suppression drop occurred when the surround orientation was 45° away from the preferred central grating. The cell illustrated in Fig. 2B displayed significant surround suppression (SI = 0.65) in the size-tuning test (see Fig. 1B) and on the surround orientation tuning curve. The surround suppression was strongest in the cell's optimal orientation/direction and weak in the orthogonal orientations (±90° on the abscissa). The cell in Fig. 2C exhibited extremely strong suppression (SI = 0.98, size-tuning test) around the optimal orientation/direction (0° and ±180°) and facilitation in the orthogonal orientations/directions. Fig. 2D shows a cell (SI = 0.67, size-tuning test) with moderate-strength surround suppression at all surround orientations and a maximal suppression at −30°.

We consistently observed that the surround was more selective toward ortho-orientations compared to iso-orientations in the strong-suppression neurons than in the weak-suppression neurons. To quantify this observation, we used an index of iso-/orthogonal orientation selectivity (OSI). The OSI is a measure of the average response to the two orthogonal surround orientations relative to the response to the iso-oriented surround stimulus:

OSI = [ (R_{θ−90} + R_{θ+90}) / 2 − R_{θ0} ] / R_ref

where R_{θ0} represents the response magnitude (spikes per second) at the iso-orientation, R_{θ−90} and R_{θ+90} represent the response magnitudes at the two orthogonal orientations, and R_ref is the response to the center stimulus presented alone. The OSI directly compares the suppression strength resulting from iso- and ortho-oriented surround stimuli. A value of 1 indicates that the surround had no suppressive effect at the ortho-orientations and exerted complete suppression at the iso-orientation, a value of 0 indicates that the suppression was equal in magnitude for both iso- and ortho-oriented surround stimuli, and negative values (OSI < 0) indicate that the surround suppression was weaker at the iso- compared to the ortho-orientation. The OSI values for the 4 cells in Fig. 2A, B, C and D were −0.54, 0.41, 0.98 and 0.25, respectively.

Figure 1. Size-tuning curves obtained from two representative cells. The response, in spikes per second (y-axis), is plotted against the diameter of the circular grating patch (x-axis). Solid lines indicate the best-fitting difference of integrals of the Gaussian functions. The arrows indicate the stimulus diameter at which the responses were maximal and/or became asymptotic. (A) A cell exhibiting a no-suppression surround (SI = 0). (B) A cell showing a suppressive surround (SI = 0.65). The error bars represent ± SEM. doi:10.1371/journal.pone.0079723.g001

Figure 2. The orientation tuning curves of the center and surround for four representative neurons. The solid circles represent the orientation tuning of the center; 0° represents the optimal orientation for clarity. The open circles show the tuning of the surround when the center was stimulated with an optimally oriented grating. The error bars are ± SEM. (A) The cell was identical to that shown in Fig. 1A, thus showing no surround suppression (SI = 0) in the size-tuning curve. A slight facilitation is shown when the surround grating was drifting in the preferred orientation/direction (0° on the x-axis), and a suppression drop occurred as the surround orientation was 45° away from the preferred orientation. (B) The neuron was identical to that in Fig. 1B, thus showing significant suppression (SI = 0.65) in the size-tuning test. The surround suppression was maximal in the optimal orientation/direction (0°) and weaker in the orthogonal orientations (±90° on the abscissa). (C) The neuron showed strong surround suppression around the optimal orientation/direction and significant facilitation in the orthogonal orientations. (D) The cell (SI = 0.67, size-tuning test) revealed a moderate-strength surround suppression at all surround orientations with a maximal suppression at −30°. doi:10.1371/journal.pone.0079723.g002

The relationship between OSI and SI is shown in Fig. 3. The data revealed a significant positive correlation between the two variables (r = 0.59, P < 0.001). This result indicates that the stronger the surround suppression was, the greater the sensitivity to the ortho-orientation between the center and surround, whereas the weaker the surround suppression was, the greater the sensitivity to the iso-orientation between the center and surround.

Of the 119 neurons analyzed, there were 22 SN cells and 97 SS cells. The distribution of OSI within the two groups is shown in Fig. 4. The black columns indicate the proportion of cells with an OSI < 0, and the gray columns indicate the proportion of cells with an OSI > 0. Within the 22 SN neurons, there were 14 cells (64%) with an OSI < 0 and 8 cells (36%) with an OSI > 0. A comparison within the SS neuron group indicated that the proportion was 19% (18/97) with an OSI < 0 and 81% (79/97) with an OSI > 0. Because SN neurons exert weaker suppression to iso-oriented surround stimuli and vice versa for the SS cells, these results implied that the SN neurons were more suitable for detecting orientation similarity or continuity, whereas the SS neurons detected differences or discontinuity in orientation.
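For completeness, the sketch below computes the OSI as defined above from a cell's surround tuning curve sampled in 15° steps. The array layout (responses indexed by surround orientation/direction, wrapping over 360°), the function name, and the synthetic example values are assumptions of this illustration.

```python
# Sketch of the iso-/orthogonal orientation selectivity index (OSI):
# OSI = ((R[-90] + R[+90]) / 2 - R[0]) / R_ref
import numpy as np

def osi(resp_by_orient, center_alone, preferred_idx, step_deg=15):
    """resp_by_orient: responses with the surround at each orientation/direction;
    center_alone: response to the optimal center stimulus alone (R_ref);
    preferred_idx: index of the iso-oriented surround condition."""
    n = len(resp_by_orient)
    shift = 90 // step_deg                      # 90 degrees in index units
    r_iso = resp_by_orient[preferred_idx]
    r_orth = (resp_by_orient[(preferred_idx + shift) % n]
              + resp_by_orient[(preferred_idx - shift) % n]) / 2.0
    return (r_orth - r_iso) / center_alone

# Example with synthetic responses (spikes/s), iso condition at index 0.
rng = np.random.default_rng(0)
surround_resp = rng.uniform(5, 30, size=24)
value = osi(surround_resp, center_alone=28.0, preferred_idx=0)
```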
Figure 3. The relationship between the surround iso-/orthogonal orientation selectivity index (OSI) and the suppression index (SI). Each point represents data from one cell. The x-axis indicates the SI in the size-tuning tests, and the y-axis indicates the surround iso-/orthogonal orientation selectivity index (OSI). The straight line is a linear regression of the data points. A significant correlation was observed between SI and OSI (r = 0.59, P < 0.001). doi:10.1371/journal.pone.0079723.g003

The relationship between the preferred orientations of the center and the minimally suppressive orientation of the surround for the two types of neurons (SN and SS groups) is compared in Fig. 5. We included only those cells in the sample that showed a statistically significant peak in the surround tuning curve. The solid circles represent the SN cells (n = 11), and the open circles represent the SS cells (n = 62). The positions of the circles on the x-axis indicate the difference between the center optimal orientation and the minimally suppressed orientation of the surround, and the positions on the y-axis indicate the strength of the surround effect on the center response. The horizontal line represents the response amplitude to the center stimulus alone. The circles underneath the line indicate a suppressive effect, and those above the line indicate a facilitative effect, when the surround stimuli were oriented at the minimally suppressive orientation. Although the position of the minimally suppressive orientation varied between cells, the maxima of the SS cells tended to aggregate around the orthogonal or oblique orientations, whereas the maxima of the SN cells tended to aggregate around the preferred orientation of the center (CRF). The remaining cells (46/119) showed a broadly tuned surround or were nearly uniformly suppressive at all orientations.

The population response of the two types of neurons

To assess the overall effects of surround stimulation on the center response, we pooled the data individually for the two categories of cells by normalizing the magnitude of the surround effect to that of center stimulation alone. Figures 6A and B show the mean surround orientation tuning curves for the SN and SS cells, respectively. Fig. 6A shows that the SN neurons (n = 22) produced no significant effects from the surround stimulation at most orientations but significant facilitation in the neuron's preferred orientation/direction. Fig. 6B shows that the SS neurons (n = 97) were suppressed by surround stimuli at all orientations, with the strongest suppression (minimum response) in the neuron's preferred orientation and direction.
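The pooling step described above amounts to expressing each cell's surround tuning curve relative to its own center-alone response before averaging across cells. The sketch below is one possible implementation; the array shapes and function name are assumptions for illustration.

```python
# Sketch of pooling normalized surround tuning curves across a cell group.
import numpy as np

def population_tuning(surround_resp, center_alone):
    """surround_resp: (n_cells, n_orientations) responses with center + surround;
    center_alone:  (n_cells,) responses to the optimal center stimulus alone.
    Returns the mean and SEM of the normalized curves (1.0 = center-alone level)."""
    norm = surround_resp / center_alone[:, None]
    mean = norm.mean(axis=0)
    sem = norm.std(axis=0, ddof=1) / np.sqrt(norm.shape[0])
    return mean, sem
```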
Figure 4. A comparison of OSI between the SN and SS neuron groups. The black columns indicate the proportion of cells with an OSI < 0, and the gray columns indicate the proportion of cells with an OSI > 0. Within the 22 SN neurons, there were 14 cells (64%) with an OSI < 0 and 8 cells (36%) with an OSI > 0. Comparing within the SS neuron group, the proportion was 19% (18/97) with an OSI < 0 and 81% (79/97) with an OSI > 0. doi:10.1371/journal.pone.0079723.g004

Figure 5. The distribution of differences between the preferred orientation of the center and the minimally suppressive orientation of the surround. The solid circles represent data from the SN cells (n = 11), and the open circles represent data from the SS cells (n = 62). The positions of the circles on the x-axis indicate the difference between the center optimal orientation and the surround minimally suppressed orientation, and the positions on the y-axis indicate the strength of the surround effect on the center response. The horizontal line represents the response amplitude to the center stimulus alone; the circles underneath the line indicate the amplitude (in %) of the suppressive effect, and those above the line indicate the amplitude (in %) of the facilitative effect when the surround stimuli were oriented at the minimally suppressive orientation. doi:10.1371/journal.pone.0079723.g005

Figure 6. A population analysis of the orientation/direction-dependent surround effects of the SN and SS neurons. The surround response amplitude of each neuron was normalized relative to the center response. The horizontal line indicates the response to the preferred center stimulus alone. (A) The averaged surround tuning curve of 22 SN cells shows that the surround stimuli have no significant effects at most orientations but display significant facilitation in the neuron's preferred orientation/direction. (B) The averaged surround tuning curve of 97 SS cells. The neurons were maximally suppressed by surround stimuli in the neuron's preferred orientation/direction and were least suppressed in the orthogonal orientation/direction (±90°). doi:10.1371/journal.pone.0079723.g006

Discussion

Previous studies [10,30] have shown that V1 neurons can detect orientation discontinuity between the CRF and its surround, as these neurons respond to any combination of center-surround orientations as long as the two orientations are not identical. Other studies [7,9,11,15,19,31–34] have reported that the response tended to be stronger when the orientations of the surround stimuli were orthogonal to the optimal orientation of the center.
Levitt and Lund [11] reported that for some neurons, the sensitivity to the orthogonal surround orientation became weaker when a lower contrast was used for the central stimulus. A recent study [24] reported that the surround orientation tuning could be related to the position of the cells in the optical orientation map, and the authors observed that orthogonal orientation selectivity was higher in the domain cells and lower in the pinwheel cells.

In the present experiments, we recorded 119 neurons from the V1 of cats using high-contrast stimuli. According to the presence or absence of a suppressive surround in the size-tuning tests, we divided these neurons into two categories: surround-suppressive (SS) and surround-non-suppressive (SN). Of the sample we recorded, 82% (97/119) were SS cells with a significant suppressive surround, and 18% (22/119) were SN cells with no suppression or slight facilitation from the surround. We analyzed the relative orientation selectivity between the center and surround using the iso-/orthogonal orientation selectivity index (OSI) and observed that the OSI of the surround depended significantly on the suppressive strength (SI) of the neurons. The stronger the surround suppression was, the greater the sensitivity was to the orthogonal orientation between the center and surround, whereas the weaker the surround suppression was, the greater the sensitivity was to the iso-orientation between the center and surround. When both the center and surround were stimulated at the optimal orientation of the cells, the responses of the SS cells were maximally suppressed, whereas the responses of the SN cells were activated or facilitated. From these results, it can be concluded that SS cells are possibly suitable for the detection of differences or discontinuity in orientation (ortho- or oblique orientation between the surround and optimal center), and conversely, SN cells are most likely adapted to detecting similarities or continuity of orientation (the identical preferred orientation and drift direction between the surround and center).

The functionally different iso-/orthogonal center-surround interactions in orientation selectivity might be based on distinct local cortical circuitry for the two types of neurons. Earlier studies reported that some pyramidal cells in the cat visual cortex exhibit extensively spreading horizontal axon collaterals that link neighboring regions over several millimeters [35–43]. In our previous paper [44], we compared the morphological features of the suppressive and non-suppressive (or facilitative) neurons in the primary visual cortex of anesthetized cats (Fig. 1 and Fig. 2 in [44]). We observed that only the non-suppressive (or facilitative) neurons possessed such extensively spreading axon collaterals, whereas the strong-suppressive neurons had restricted local axons. The SN neurons most likely detect orientation continuity by using their long-distance horizontal connections, which link neighboring cells with similar orientation preferences [37,38,41,45–47], whereas the strong-suppression neurons detect orientation discrepancy using local suppressive connections [48].
Conclusions

Our results demonstrate that 1) the surround-non-suppressive (SN) neurons tended to detect continuity of orientation between center and surround, whereas the surround-suppressive (SS) neurons possibly select discontinuity of orientation between center and surround, and 2) the magnitude of the iso-/orthogonal orientation sensitivity of the surround significantly depended on the suppressive SI of the neurons.

Acknowledgments

We thank Dr. D. A. Tigwell for comments on the manuscript and X. Z. Xu for technical assistance. We also thank Dr. K. Chen for help with the data analysis.

Author Contributions

Conceived and designed the experiments: XMS CYL. Performed the experiments: TX XMS. Analyzed the data: TX XMS. Contributed reagents/materials/analysis tools: LW. Wrote the paper: XMS CYL.

References

1. Blakemore C, Tobin EA (1972) Lateral inhibition between orientation detectors in the cat's visual cortex. Exp Brain Res 15: 439–440.
2. Maffei L, Fiorentini A (1976) The unresponsive regions of visual cortical receptive fields. Vision Res 16: 1131–1139.
3. Rose D (1977) Responses of single units in cat visual cortex to moving bars of light as a function of bar length. J Physiol 271: 1–23.
4. Nelson JI, Frost BJ (1978) Orientation-selective inhibition from beyond the classic visual receptive field. Brain Res 139: 359–365.
5. Allman J, Miezin F, McGuinness E (1985) Stimulus specific responses from beyond the classical receptive field: neurophysiological mechanisms for local-global comparisons in visual neurons. Annu Rev Neurosci 8: 407–430.
6. Gilbert CD, Wiesel TN (1990) The influence of contextual stimuli on the orientation selectivity of cells in primary visual cortex of the cat. Vision Res 30: 1689–1701.
7. Knierim JJ, van Essen DC (1992) Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. J Neurophysiol 67: 961–980.
8. Li CY, Li W (1994) Extensive integration field beyond the classical receptive field of cat's striate cortical neurons—classification and tuning properties. Vision Res 34: 2337–2355.
9. DeAngelis GC, Freeman RD, Ohzawa I (1994) Length and width tuning of neurons in the cat's primary visual cortex. J Neurophysiol 71: 347–374.
10. Sillito AM, Grieve KL, Jones HE, Cudeiro J, Davis J (1995) Visual cortical mechanisms detecting focal orientation discontinuities. Nature 378: 492–496.
11. Levitt JB, Lund JS (1997) Contrast dependence of contextual effects in primate visual cortex. Nature 387: 73–76.
12. Lamme VA, Super H, Spekreijse H (1998) Feedforward, horizontal, and feedback processing in the visual cortex. Curr Opin Neurobiol 8: 529–535.
13. Nothdurft HC, Gallant JL, Van Essen DC (2000) Response profiles to texture border patterns in area V1. Vis Neurosci 17: 421–436.
14. Jones HE, Grieve KL, Wang W, Sillito AM (2001) Surround suppression in primate V1. J Neurophysiol 86: 2011–2028.
15. Sengpiel F, Sen A, Blakemore C (1997) Characteristics of surround inhibition in cat area 17. Exp Brain Res 116: 216–228.
16. Walker GA, Ohzawa I, Freeman RD (2000) Suppression outside the classical cortical receptive field. Vis Neurosci 17: 369–379.
17. Akasaki T, Sato H, Yoshimura Y, Ozeki H, Shimegi S (2002) Suppressive effects of receptive field surround on neuronal activity in the cat primary visual cortex. Neurosci Res 43: 207–220.
18. Walker GA, Ohzawa I, Freeman RD (1999) Asymmetric suppression outside the classical receptive field of the visual cortex. J Neurosci 19: 10536–10553.
19. Cavanaugh JR, Bair W, Movshon JA (2002b) Selectivity and spatial distribution of signals from the receptive field surround in macaque V1 neurons. J Neurophysiol 88: 2547–2556.
20. Series P, Lorenceau J, Fregnac Y (2003) The "silent" surround of V1 receptive fields: theory and experiments. J Physiol Paris 97: 453–474.
21. Kapadia MK, Ito M, Gilbert CD, Westheimer G (1995) Improvement in visual sensitivity by changes in local context: parallel studies in human observers and in V1 of alert monkeys. Neuron 15: 843–856.
22. Polat U, Mizobe K, Pettet MW, Kasamatsu T, Norcia AM (1998) Collinear stimuli regulate visual responses depending on cell's contrast threshold. Nature 391: 580–584.
23. Mizobe K, Polat U, Pettet MW, Kasamatsu T (2001) Facilitation and suppression of single striate-cell activity by spatially discrete pattern stimuli presented beyond the receptive field. Vis Neurosci 18: 377–391.
24. Hashemi-Nezhad M, Lyon DC (2012) Orientation tuning of the suppressive extraclassical surround depends on intrinsic organization of V1. Cereb Cortex 22: 308–326.
25. Song XM, Li CY (2008) Contrast-dependent and contrast-independent spatial summation of primary visual cortical neurons of the cat. Cereb Cortex 18: 331–336.
26. Levitt JB, Lund JS (2002) The spatial extent over which neurons in macaque striate cortex pool visual signals. Vis Neurosci 19: 439–452.