关于APD毕业设计的外文翻译
- 格式:doc
- 大小:438.00 KB
- 文档页数:21
Anti-Aircraft Fire Control and the Development of IntegratedSystems at SperryT he dawn of the electrical age brought new types of control systems. Able to transmit data between distributed components and effect action at a distance, these systems employed feedback devices as well as human beings to close control loops at every level. By the time theories of feedback and stability began to become practical for engineers in the 1930s a tradition of remote and automatic control engineering had developed that built distributed control systems with centralized information processors. These two strands of technology, control theory and control systems, came together to produce the large-scale integrated systems typical of World War II and after.Elmer Ambrose Sperry (I860-1930) and the company he founded, the Sperry Gyroscope Company, led the engineering of control systems between 1910 and 1940. Sperry and his engineers built distributed data transmission systems that laid the foundations of today‟s command and control systems. Sperry‟s fire control systems included more than governors or stabilizers; they consisted of distributed sensors, data transmitters, central processors, and outputs that drove machinery. This article tells the story of Sperry‟s involvement in anti-aircraft fire control between the world wars and shows how an industrial firm conceived of control systems before the common use of control theory. In the 1930s the task of fire control became progressively more automated, as Sperry engineers gradually replaced human operators with automatic devices. Feedback, human interface, and system integration posed challenging problems for fire control engineers during this period. By the end of the decade these problems would become critical as the country struggled to build up its technology to meet the demands of an impending war.Anti-Aircraft Artillery Fire ControlBefore World War I, developments in ship design, guns, and armor drove the need for improved fire control on Navy ships. By 1920, similar forces were at work in the air: wartime experiences and postwar developments in aerial bombing created the need for sophisticated fire control for anti-aircraft artillery. Shooting an airplane out of the sky is essentially a problem of “leading” the target. As aircraft developed rapidly in the twenties, their increased speed and altitude rapidly pushed the task of computing the lead out of the range of human reaction and calculation. Fire control equipment for anti-aircraft guns was a means of technologically aiding human operators to accomplish a task beyond their natural capabilities.During the first world war, anti-aircraft fire control had undergone some preliminary development. Elmer Sperry, as chairman of the Aviation Committee of the Naval Consulting Board, developed two instruments for this problem: a goniometer,a range-finder, and a pretelemeter, a fire director or calculator. Neither, however, was widely used in the field.When the war ended in I918 the Army undertook virtually no new development in anti-aircraft fire control for five to seven years. In the mid-1920s however, the Army began to develop individual components for anti-aircraft equipment including stereoscopic height-finders, searchlights, and sound location equipment. The Sperry Company was involved in the latter two efforts. About this time Maj. Thomas Wilson, at the Frankford Arsenal in Philadelphia, began developing a central computer for firecontrol data, loosely based on the system of “director firing” that had developed in naval gunn ery. Wilson‟s device resembled earlier fire control calculators, accepting data as input from sensing components, performing calculations to predict the future location of the target, and producing direction information to the guns.Integration and Data TransmissionStill, the components of an anti-aircraft battery remained independent, tied together only by telephone. As Preston R. Bassett, chief engineer and later president of the Sperry Company, recalled, “no sooner, however, did the components get to the point of functioning satisfactorily within themselves, than the problem of properly transmitting the information from one to the other came to be of prime importance.”Tactical and terrain considerations often required that different fire control elements be separated by up to several hundred feet. Observers telephoned their data to an officer, who manually entered it into the central computer, read off the results, and telephoned them to the gun installations. This communication system introduced both a time delay and the opportunity for error. The components needed tighter integration, and such a system required automatic data communications.In the 1920s the Sperry Gyroscope Company led the field in data communications. Its experience came from Elmer Spe rry‟s most successful invention, a true-north seeking gyro for ships. A significant feature of the Sperry Gyrocompass was its ability to transmit heading data from a single central gyro to repeaters located at a number of locations around the ship. The repeaters, essentially follow-up servos, connected to another follow-up, which tracked the motion of the gyro without interference. These data transmitters had attracted the interest of the Navy, which needed a stable heading reference and a system of data communication for its own fire control problems. In 1916, Sperry built a fire control system for the Navy which, although it placed minimal emphasis on automatic computing, was a sophisticated distributed data system. By 1920 Sperry had installed these systems on a number of US. battleships.Because of the Sperry Company‟s experience with fire control in the Navy, as well as Elmer Sperry‟s earlier work with the goniometer and the pretelemeter, the Army approached the company for help with data transmission for anti-aircraft fire control. To Elmer Sperry, it looked like an easy problem: the calculations resembled those in a naval application, but the physical platform, unlike a ship at sea, anchored to the ground. Sperry engineers visited Wilson at the Frankford Arsenal in 1925, and Elmer Sperry followed up with a letter expressing his interest in working on the problem. He stressed his company‟s experience with naval problems, as well as its recent developments in bombsights, “work from the other end of the pro position.” Bombsights had to incorporate numerous parameters of wind, groundspeed, airspeed, and ballistics, so an anti-aircraft gun director was in some ways a reciprocal bombsight . In fact, part of the reason anti-aircraft fire control equipment worked at all was that it assumed attacking bombers had to fly straight and level to line up their bombsights. Elmer Sperry‟s interests were warmly received, and in I925 and 1926 the Sperry Company built two data transmission systems for the Army‟s gun directors.The original director built at Frankford was designated T-1, or the “Wilson Director.” The Army had purchased a Vickers director manufactured in England, but encouraged Wilson to design one thatcould be manufactured in this country Sperry‟s two data tran smission projects were to add automatic communications between the elements of both the Wilson and the Vickers systems (Vickers would eventually incorporate the Sperry system into its product). Wilson died in 1927, and the Sperry Company took over the entire director development from the Frankford Arsenal with a contract to build and deliver a director incorporating the best features of both the Wilson and Vickers systems. From 1927 to 193.5, Sperry undertook a small but intensive development program in anti-aircraft systems. The company financed its engineering internally, selling directors in small quantities to the Army, mostly for evaluation, for only the actual cost of production [S]. Of the nearly 10 models Sperry developed during this period, it never sold more than 12 of any model; the average order was five. The Sperry Company offset some development costs by sales to foreign govemments, especially Russia, with the Army‟s approval 191.The T-6 DirectorSperry‟s modified version of Wilson‟s director was designated T-4 in development. This model incorporated corrections for air density, super-elevation, and wind. Assembled and tested at Frankford in the fall of 1928, it had problems with backlash and reliability in its predicting mechanisms. Still, the Army found the T-4 promising and after testing returned it to Sperry for modification. The company changed the design for simpler manufacture, eliminated two operators, and improved reliability. In 1930 Sperry returned with the T-6, which tested successfully. By the end of 1931, the Army had ordered 12 of the units. The T-6 was standardized by the Army as the M-2 director.Since the T-6 was the first anti-aircraft director to be put into production, as well as the first one the Army formally procured, it is instructive to examine its operation in detail. A technical memorandum dated 1930 explained the theory behind the T-6 calculations and how the equations were solved by the system. Although this publication lists no author, it probably was written by Earl W. Chafee, Sperry‟s director of fire control engineering. The director was a complex mechanical analog computer that connected four three-inch anti-aircraft guns and an altitude finder into an integratedsystem (see Fig. 1). Just as with Sperry‟s naval fire control system, the primary means of connection were “data transmitters,” similar to those that connected gyrocompasses to repeaters aboard ship.The director takes three primary inputs. Target altitude comes from a stereoscopic range finder. This device has two telescopes separated by a baseline of 12 feet; a single operator adjusts the angle between them to bring the two images into coincidence. Slant range, or the raw target distance, is then corrected to derive its altitude component. Two additional operators, each with a separate telescope, track the target, one for azimuth and one for elevation. Each sighting device has a data transmitter that measures angle or range and sends it to the computer. The computer receives these data and incorporates manual adjustments for wind velocity, wind direction, muzzle velocity, air density, and other factors. The computer calculates three variables: azimuth, elevation, and a setting for the fuze. The latter, manually set before loading, determines the time after firing at which the shell will explode. Shells are not intended to hit the target plane directly but rather to explode near it, scattering fragments to destroy it.The director performs two major calculations. First, pvediction models the motion of the target and extrapolates its position to some time in the future. Prediction corresponds to “leading” the target. Second, the ballistic calculation figures how to make the shell arrive at the desired point in space at the future time and explode, solving for the azimuth and elevation of the gun and the setting on the fuze. This calculation corresponds to the traditional artillery man‟s task of looking up data in a precalculated “firing table” and setting gun parameters accordingly. Ballistic calculation is simpler than prediction, so we will examine it first.The T-6 director solves the ballistic problem by directly mechanizing the traditional method, employing a “mechanical firing table.” Traditional firing tables printed on paper show solutions for a given angular height of the target, for a given horizontal range, and a number of other variables. The T-6 replaces the firing table with a Sperry ballistic cam.” A three-dimensionally machined cone shaped device, the ballistic cam or “pin follower” solves a pre-determined function. Two independent variables are input by the angular rotation of the cam and the longitudinal position of a pin that rests on top of the cam. As the pin moves up and down the length of the cam, and as the cam rotates, the height of the pin traces a function of two variables: the solution to the ballistics problem (or part of it). The T-6 director incorporates eight ballistic cams, each solving for a different component of the computation including superelevation, time of flight, wind correction, muzzle velocity. air density correction. Ballistic cams represented, in essence, the stored data of the mechanical computer. Later directors could be adapted to different guns simply by replacing the ballistic cams with a new set, machined according to different firing tables. The ballistic cams comprised a central component of Sperry‟s mechanical computing technology. The difficulty of their manufacture would prove a major limitation on the usefulness of Sperry directors.The T-6 director performed its other computational function, prediction, in an innovative way as well. Though the target came into the system in polar coordinates (azimuth, elevation, and range), targets usually flew a constant trajectory (it was assumed) in rectangular coordinates-i.e. straight andlevel. Thus, it was simpler to extrapolate to the future in rectangular coordinates than in the polar system. So the Sperry director projected the movement of the target onto a horizontal plane, derived the velocity from changes in position, added a fixed time multiplied by the velocity to determine a future position, and then converted the solution back into polar coordinates. This method became known as the “plan prediction method”because of the representation of the data on a flat “plan” as viewed from above; it was commonly used through World War II. In the plan prediction method, “the actual movement of the target is mechanically reproduced on a small scale within the Computer and the desired angles or speeds can be measured directly from the movements of these elements.”Together, the ballistic and prediction calculations form a feedback loop. Operators enter an estimated “time of flight” for the shell when they first begin tracking. The predictor uses this estimate to perform its initial calculation, which feeds into the ballistic stage. The output of the ballistics calculation then feeds back an updated time-of-flight estimate, which the predictor uses to refine the initial estimate. Thus “a cumulative cycle of correction brings the predicted future position of the target up to the point indicated by the actual future time of flight.”A square box about four feet on each side (see Fig. 2) the T-6 director was mounted on a pedestal on which it could rotate. Three crew would sit on seats and one or two would stand on a step mounted to the machine. The remainder of the crew stood on a fixed platform; they would have had to shuffle around as the unit rotated. This was probably not a problem, as the rotation angles were small. The direc tor‟s pedestal mounted on a trailer, on which data transmission cables and the range finder could be packed for transportation.We have seen that the T-6 computer took only three inputs, elevation, azimuth, and altitude (range), and yet it required nine operators. These nine did not include the operation of the range finder, which was considered a separate instrument, but only those operating the director itself. What did these nine men do?Human ServomechanismsTo the designers of the director, the operato rs functioned as “manual servomechanisms.”One specification for the machine required “minimum dependence on …human element.‟ The Sperry Company explained, “All operations must be made as mechanical and foolproof as possible; training requirements must visualize the conditions existent under rapid mobilization.” The lessons of World War I ring in this statement; even at the height of isolationism, with the country sliding into depression, design engineers understood the difficulty of raising large numbers of trained personnel in a national emergency. The designers not only thought the system should account for minimal training and high personnel turnover, they also considered the ability of operators to perform their duties under the stress of battle. Thus, nearly all the work for the crew was in a “follow-the-pointer”mode: each man concentrated on an instrument with two indicating dials, one the actual and one the desired value for a particular parameter. With a hand crank, he adjusted the parameter to match the two dials.Still, it seems curious that the T-6 director required so many men to perform this follow-the-pointer input. When the external rangefinder transmitted its data to the computer, it appeared on a dial and an operator had to follow the pointer to actually input the data into the computing mechanism. The machine did not explicitly calculate velocities. Rather, two operators (one for X and one for Y) adjusted variable-speed drives until their rate dials matched that of a constant-speed motor. When the prediction computation was complete, an operator had to feed the result into the ballistic calculation mechanism. Finally, when the entire calculation cycle was completed, another operator had to follow the pointer to transmit azimuth to the gun crew, who in turn had to match the train and elevation of the gun to the pointer indications.Human operators were the means of connecting “individual elements” into an integrated system. In one sense the men were impedance amplifiers, and hence quite similar to servomechanisms in other mechanical calculators of the time, especially Vannevar Bush‟s differential analyzer .The term “manual servomechanism”itself is an oxymoron: by the conventional definition, all servomechanisms are automatic. The very use of the term acknowledges the existence of an automatic technology that will eventually replace the manual method. With the T-6, this process was already underway. Though the director required nine operators, it had already eliminated two from the previous generation T-4. Servos replaced the operator who fed back superelevation data and the one who transmitted the fuze setting. Furthermore, in this early machine one man corresponded to one variable, and the machine‟s requirement for operators corresponded directly to the data flow of its computation. Thus the crew that operated the T-6 director was an exact reflection of the algorithm inside it.Why, then, were only two of the variables automated? This partial, almost hesitating automation indicates there was more to the human servo-motors than Sperry wanted to acknowledge. As much as the company touted “their duties are purely mechanical and little skill or judgment is required on the part of the operators,” men were still required to exercise some judgment, even if unconsciously. The data were noisy, and even an unskilled human eye could eliminate complications due to erroneous or corrupted data. The mechanisms themselves were rather delicate and erroneous input data, especially if it indicated conditions that were not physically possible, could lock up or damage the mechanisms. Theoperators performed as integrators in both senses of the term: they integrated different elements into a system.Later Sperry DirectorsWhen Elmer Sperry died in 1930, his engineers were at work on a newer generation director, the T-8. This machine was intended to be lighter and more portable than earlier models, as well as less expensive and “procurable in quantities in case of emergency.” The company still emphasized the need for unskilled men to operate the system in wartime, and their role as system integrators. The operators were “mechanical links in the apparatus, thereby making it possible to avoid mechanical complication which would be involved by the use of electrical or mechanical servo motors.” Still, army field experience with the T-6 had shown that servo-motors were a viable way to reduce the number of operators and improve reliability, so the requirements for the T-8 specified that wherever possible “electrical shall be used to reduce the number of operators to a minimum.” Thus the T-8 continued the process of automating fire control, and reduced the number of operators to four. Two men followed the target with telescopes, and only two were required for follow-the-pointer functions. The other follow-the-pointers had been replaced by follow-up servos fitted with magnetic brakes to eliminate hunting. Several experimental versions of the T-8 were built, and it was standardized by the Army as the M3 in 1934.Throughout the remain der of the …30s Sperry and the army fine-tuned the director system in the M3. Succeeding M3 models automated further, replacing the follow-the-pointers for target velocity with a velocity follow-up which employed a ball-and-disc integrator. The M4 series, standardized in 1939, was similar to the M3 but abandoned the constant altitude assumption and added an altitude predictor for gliding targets. The M7, standardized in 1941, was essentially similar to the M4 but added full power control to the guns for automatic pointing in elevation and azimuth. These later systems had eliminated errors. Automatic setters and loaders did not improve the situation because of reliability problems. At the start of World War II, the M7 was the primary anti-aircraft director available to the army.The M7 was a highly developed and integrated system, optimized for reliability and ease of operation and maintenance. As a mechanical computer, it was an elegant, if intricate, device, weighing 850 pounds and including about 11,000 parts. The design of the M7 capitalized on the strength of the Sperry Company: manufacturing of precision mechanisms, especially ballistic cams. By the time the U.S. entered the second world war, however, these capabilities were a scarce resource, especially for high volumes. Production of the M7 by Sperry and Ford Motor Company as subcontractor was a “real choke” and could not keep up with production of the 90mm guns, well into 1942. The army had also adopted an English system, known as the “Kerrison Director” or M5, which was less accurate than the M7 but easier to manufacture. Sperry redesigned the M5 for high-volume production in 1940, but passed in 1941.Conclusion: Human Beings as System IntegratorsThe Sperry directors we have examined here were transitional, experimental systems. Exactly for that reason, however, they allow us to peer inside the process of automation, to examine the displacement of human operators by servomechanisms while the process was still underway. Skilled asthe Sperry Company was at data transmission, it only gradually became comfortable with the automatic communication of data between subsystems. Sperry could brag about the low skill levels required of the operators of the machine, but in 1930 it was unwilling to remove them completely from the process. Men were the glue that held integrated systems together.As products, the Sperry Company‟s anti-aircraft gun directors were only partially successful. Still, we should judge a technological development program not only by the machines it produces but also by the knowledge it creates, and by how that knowledge contributes to future advances. Sperry‟s anti-aircraft directors of the 1930s were early examples of distributed control systems, technology that would assume critical importance in the following decades with the development of radar and digital computers. When building the more complex systems of later years, engineers at Bell Labs, MIT, and elsewhere would incorporate and build on the Sperry Company‟s experience,grappling with the engineering difficulties of feedback, control, and the augmentation of human capabilities by technological systems.在斯佩里防空炮火控和集成系统的发展电气时代的到来带来了新类型的控制系统。
A Design and Implementation of Active NetworkSocket ProgrammingK.L. Eddie Law, Roy LeungThe Edward S. Rogers Sr. Department of Electrical and Computer EngineeringUniversity of TorontoToronto, Canadaeddie@, roy.leung@utoronto.caAbstract—The concept of programmable nodes and active networks introduces programmability into communication networks. Code and data can be sent and modified on their ways to destinations. Recently, various research groups have designed and implemented their own design platforms. Each design has its own benefits and drawbacks. Moreover, there exists an interoperability problem among platforms. As a result, we introduce a concept that is similar to the network socket programming. We intentionally establish a set of simple interfaces for programming active applications. This set of interfaces, known as Active Network Socket Programming (ANSP), will be working on top of all other execution environments in future. Therefore, the ANSP offers a concept that is similar to “write once, run everywhere.” It is an open programming model that active applications can work on all execution environments. It solves the heterogeneity within active networks. This is especially useful when active applications need to access all regions within a heterogeneous network to deploy special service at critical points or to monitor the performance of the entire networks. Instead of introducing a new platform, our approach provides a thin, transparent layer on top of existing environments that can be easily installed for all active applications.Keywords-active networks; application programming interface; active network socket programming;I. I NTRODUCTIONIn 1990, Clark and Tennenhouse [1] proposed a design framework for introducing new network protocols for the Internet. Since the publication of that position paper, active network design framework [2, 3, 10] has slowly taken shape in the late 1990s. The active network paradigm allows program code and data to be delivered simultaneously on the Internet. Moreover, they may get executed and modified on their ways to their destinations. At the moment, there is a global active network backbone, the ABone, for experiments on active networks. Apart from the immaturity of the executing platform, the primary hindrance on the deployment of active networks on the Internet is more on the commercially related issues. For example, a vendor may hesitate to allow network routers to run some unknown programs that may affect their expected routing performance. As a result, alternatives were proposed to allow active network concept to operate on the Internet, such as the application layer active networking (ALAN) project [4] from the European research community. In the ALAN project, there are active server systems located at different places in the networks and active applications are allowed to run in these servers at the application layer. Another potential approach from the network service provider is to offer active network service as the premium service class in the networks. This service class should provide the best Quality of Service (QoS), and allow the access of computing facility in routers. With this approach, the network service providers can create a new source of income.The research in active networks has been progressing steadily. Since active networks introduce programmability on the Internet, appropriate executing platforms for the active applications to execute should be established. These operating platforms are known as execution environments (EEs) and a few of them have been created, e.g., the Active Signaling Protocol (ASP) [12] and the Active Network Transport System (ANTS) [11]. Hence, different active applications can be implemented to test the active networking concept.With these EEs, some experiments have been carried out to examine the active network concept, for example, the mobile networks [5], web proxies [6], and multicast routers [7]. Active networks introduce a lot of program flexibility and extensibility in networks. Several research groups have proposed various designs of execution environments to offer network computation within routers. Their performance and potential benefits to existing infrastructure are being evaluated [8, 9]. Unfortunately, they seldom concern the interoperability problems when the active networks consist of multiple execution environments. For example, there are three EEs in ABone. Active applications written for one particular EE cannot be operated on other platforms. This introduces another problem of resources partitioning for different EEs to operate. Moreover, there are always some critical network applications that need to run under all network routers, such as collecting information and deploying service at critical points to monitor the networks.In this paper, a framework known as Active Network Socket Programming (ANSP) model is proposed to work with all EEs. It offers the following primary objectives.• One single programming interface is introduced for writing active applications.• Since ANSP offers the programming interface, the design of EE can be made independent of the ANSP.This enables a transparency in developing andenhancing future execution environments.• ANSP addresses the interoperability issues among different execution environments.• Through the design of ANSP, the pros and cons of different EEs will be gained. This may help design abetter EE with improved performance in future.The primary objective of the ANSP is to enable all active applications that are written in ANSP can operate in the ABone testbed . While the proposed ANSP framework is essential in unifying the network environments, we believe that the availability of different environments is beneficial in the development of a better execution environment in future. ANSP is not intended to replace all existing environments, but to enable the studies of new network services which are orthogonal to the designs of execution environments. Therefore, ANSP is designed to be a thin and transparent layer on top of all execution environments. Currently, its deployment relies on automatic code loading with the underlying environments. As a result, the deployment of ANSP at a router is optional and does not require any change to the execution environments.II. D ESIGN I SSUES ON ANSPThe ANSP unifies existing programming interfaces among all EEs. Conceptually, the design of ANSP is similar to the middleware design that offers proper translation mechanisms to different EEs. The provisioning of a unified interface is only one part of the whole ANSP platform. There are many other issues that need to be considered. Apart from translating a set of programming interfaces to other executable calls in different EEs, there are other design issues that should be covered, e.g., • a unified thread library handles thread operations regardless of the thread libraries used in the EEs;• a global soft-store allows information sharing among capsules that may execute over different environmentsat a given router;• a unified addressing scheme used across different environments; more importantly, a routing informationexchange mechanism should be designed across EEs toobtain a global view of the unified networks;• a programming model that should be independent to any programming languages in active networks;• and finally, a translation mechanism to hide the heterogeneity of capsule header structures.A. Heterogeneity in programming modelEach execution environment provides various abstractions for its services and resources in the form of program calls. The model consists of a set of well-defined components, each of them has its own programming interfaces. For the abstractions, capsule-based programming model [10] is the most popular design in active networks. It is used in ANTS [11] and ASP [12], and they are being supported in ABone. Although they are developed based on the same capsule model, their respective components and interfaces are different. Therefore, programs written in one EE cannot run in anther EE. The conceptual views of the programming models in ANTS and ASP are shown in Figure 1.There are three distinct components in ANTS: application, capsule, and execution environment. There exist user interfaces for the active applications at only the source and destination routers. Then the users can specify their customized actions to the networks. According to the program function, the applications send one or more capsules to carry out the operations. Both applications and capsules operate on top of an execution environment that exports an interface to its internal programming resources. Capsule executes its program at each router it has visited. When it arrives at its destination, the application at destination may either reply it with another capsule or presents this arrival event to the user. One drawback with ANTS is that it only allows “bootstrap” application.Figure 1. Programming Models in ASP and ANTS.In contrast, ASP does not limit its users to run “bootstrap” applications. Its program interfaces are different from ANTS, but there are also has three components in ASP: application client, environment, and AAContext. The application client can run on active or non-active host. It can start an active application by simply sending a request message to the EE. The client presents information to users and allows its users to trigger actions at a nearby active router. AAContext is the core of the network service and its specification is divided into two parts. One part specifies its actions at its source and destination routers. Its role is similar to that of the application in ANTS, except that it does not provide a direct interface with the user. The other part defines its actions when it runs inside the active networks and it is similar to the functional behaviors of a capsule in ANTS.In order to deal with the heterogeneity of these two models, ANSP needs to introduce a new set of programming interfaces and map its interfaces and execution model to those within the routers’ EEs.B. Unified Thread LibraryEach execution environment must ensure the isolation of instance executions, so they do not affect each other or accessThe authors appreciate the Nortel Institute for Telecommunications (NIT) at the University of Toronto to allow them to access the computing facilitiesothers’ information. There are various ways to enforce the access control. One simple way is to have one virtual machine for one instance of active applications. This relies on the security design in the virtual machines to isolate services. ANTS is one example that is using this method. Nevertheless, the use of multiple virtual machines requires relatively large amount of resources and may be inefficient in some cases. Therefore, certain environments, such as ASP, allow network services to run within a virtual machine but restrict the use of their services to a limited set of libraries in their packages. For instance, ASP provides its thread library to enforce access control. Because of the differences in these types of thread mechanism, ANSP devises a new thread library to allow uniform accesses to different thread mechanisms.C. Soft-StoreSoft-store allows capsule to insert and retrieve information at a router, thus allowing more than one capsules to exchange information within a network. However, problem arises when a network service can execute under different environments within a router. The problem occurs especially when a network service inserts its soft-store information in one environment and retrieves its data at a later time in another environment at the same router. Due to the fact that execution environments are not allowed to exchange information, the network service cannot retrieve its previous data. Therefore, our ANSP framework needs to take into account of this problem and provides soft-store mechanism that allows universal access of its data at each router.D. Global View of a Unified NetworkWhen an active application is written with ANSP, it can execute on different environment seamlessly. The previously smaller and partitioned networks based on different EEs can now be merging into one large active network. It is then necessary to advise the network topology across the networks. However, different execution environments have different addressing schemes and proprietary routing protocols. In order to merge these partitions together, ANSP must provide a new unified addressing scheme. This new scheme should be interpretable by any environments through appropriate translations with the ANSP. Upon defining the new addressing scheme, a new routing protocol should be designed to operate among environments to exchange topology information. This allows each environment in a network to have a complete view of its network topology.E. Language-Independent ModelExecution environment can be programmed in any programming language. One of the most commonly used languages is Java [13] due to its dynamic code loading capability. In fact, both ANTS and ASP are developed in Java. Nevertheless, the active network architecture shown in Figure 2 does not restrict the use of additional environments that are developed in other languages. For instance, the active network daemon, anted, in Abone provides a workspace to execute multiple execution environments within a router. PLAN, for example, is implemented in Ocaml that will be deployable on ABone in future. Although the current active network is designed to deploy multiple environments that can be in any programming languages, there lacks the tool to allow active applications to run seamlessly upon these environments. Hence, one of the issues that ANSP needs to address is to design a programming model that can work with different programming languages. Although our current prototype only considers ANTS and ASP in its design, PLAN will be the next target to address the programming language issue and to improve the design of ANSP.Figure 2. ANSP Framework Model.F. Heterogeneity of Capsule Header StructureThe structures of the capsule headers are different in different EEs. They carries capsule-related information, for example, the capsule types, sources and destinations. This information is important when certain decision needs to be made within its target environment. A unified model should allow its program code to be executed on different environments. However, the capsule header prevents different environments to interpret its information successfully. Therefore, ANSP should carry out appropriate translation to the header information before the target environment receives this capsule.III. ANSP P ROGRAMMING M ODELWe have outlined the design issues encountered with the ANSP. In the following, the design of the programming model in ANSP will be discussed. This proposed framework provides a set of unified programming interfaces that allows active applications to work on all execution environments. The framework is shown in Figure 2. It is composed of two layers integrated within the active network architecture. These two layers can operate independently without the other layer. The upper layer provides a unified programming model to active applications. The lower layer provides appropriate translation procedure to the ANSP applications when it is processed by different environments. This service is necessary because each environment has its own header definition.The ANSP framework provides a set of programming calls which are abstractions of ANSP services and resources. A capsule-based model is used for ANSP, and it is currently extended to map to other capsule-based models used in ANTSand ASP. The mapping possibility to other models remains as our future works. Hence, the mapping technique in ANSP allows any ANSP applications to access the same programming resources in different environments through a single set of interfaces. The mapping has to be done in a consistent and transparent manner. Therefore, the ANSP appears as an execution environment that provides a complete set of functionalities to active applications. While in fact, it is an overlay structure that makes use of the services provided from the underlying environments. In the following, the high-level functional descriptions of the ANSP model are described. Then, the implementations will be discussed. The ANSP programming model is based upon the interactions between four components: application client , application stub , capsule , and active service base.Figure 3. Information Flow with the ANSP.•Application Client : In a typical scenario, an active application requires some means to present information to its users, e.g., the state of the networks. A graphical user interface (GUI) is designed to operate with the application client if the ANSP runs on a non-active host.•Application Stub : When an application starts, it activates the application client to create a new instance of application stub at its near-by active node. There are two responsibilities for the application stub. One of them is to receive users’ instructions from the application client. Another one is to receive incoming capsules from networks and to perform appropriate actions. Typically, there are two types of actions, thatare, to reply or relay in capsules through the networks, or to notify the users regarding the incoming capsule. •Capsule : An active application may contain several capsule types. Each of them carries program code (also referred to as forwarding routine). Since the application defines a protocol to specify the interactions among capsules as well as the application stubs. Every capsule executes its forwarding routine at each router it visits along the path between the source and destination.•Active Service Base : An active service base is designed to export routers’ environments’ services and execute program calls from application stubs and capsules from different EEs. The base is loaded automatically at each router whenever a capsule arrives.The interactions among components within ANSP are shown in Figure 3. The designs of some key components in the ANSP will be discussed in the following subsections. A. Capsule (ANSPCapsule)ANSPXdr decode () ANSPXdr encode () int length ()Boolean execute ()New types of capsule are created by extending the abstract class ANSPCapsule . New extensions are required to define their own forwarding routines as well as their serialization procedures. These methods are indicated below:The execution of a capsule in ANSP is listed below. It is similar to the process in ANTS.1. A capsule is in serial binary representation before it issent to the network. When an active router receives a byte sequence, it invokes decode() to convert the sequence into a capsule. 2. The router invokes the forwarding routine of thecapsule, execute(). 3. When the capsule has finished its job and forwardsitself to its next hop by calling send(), this call implicitly invokes encode() to convert the capsule into a new serial byte representation. length() isused inside the call of encode() to determine the length of the resulting byte sequence. ANSP provides a XDR library called ANSPXdr to ease the jobs of encoding and decoding.B. Active Service Base (ANSPBase)In an active node, the Active Service Base provides a unified interface to export the available resources in EEs for the rest of the ANSP components. The services may include thread management, node query, and soft-store operation, as shown in Table 1.TABLE I. ACTIVE SERVICE BASE FUNCTION CALLSFunction Definition Descriptionboolean send (Capsule, Address) Transmit a capsule towards its destination using the routing table of theunderlying environment.ANSPAddress getLocalHost () Return address of the local host as an ANSPAddress structure. This isuseful when a capsule wants to check its current location.boolean isLocal (ANSPAddress) Return true if its input argument matches the local host’s address andreturn false otherwise.createThread () Create a new thread that is a class ofANSPThreadInterface (discussed later in Section VIA “Unified Thread Abstraction”).putSStore (key, Object) Object getSStore (key) removeSStore (key)The soft-store operations are provided by putSStore(), getSSTore(), and removeSStore(), and they put, retrieve, and remove data respectively. forName (PathName) Supported in ANSP to retrieve a classobject corresponding to the given path name in its argument. This code retrieval may rely on the code loading mechanism in the environment whennecessary.C. Application Client (ANSPClient)boolean start (args[])boolean start (args[],runningEEs) boolean start (args[],startClient)boolean start (args[],startClient, runningEE)Application Client is an interface between users and the nearby active source router. It does the following responsibilities.1. Code registration: It may be necessary to specify thelocation and name of the application code in some execution environments, e.g., ANTS. 2. Application initialization: It includes selecting anexecution environment to execute the application among those are available at the source router. Each active application can create an application client instance by extending the abstract class, ANSPClient . The extension inherits a method, start(), to automatically handle both the registration and initialization processes. All overloaded versions of start() accept a list of arguments, args , that are passed to the application stub during its initialization. An optional argument called runningEEs allows an application client to select a particular set of environment variables, specified by a list of standardized numerical environment ID, the ANEP ID, to perform code registration. If this argument is not specified, the default setting can only include ANTS and ASP. D. Application Stub (ANSPApplication)receive (ANSPCapsule)Application stubs reside at the source and destination routers to initialize the ANSP application after the application clients complete the initialization and registration processes. It is responsible for receiving and serving capsules from the networks as well as actions requested from the clients. A new instance is created by extending the application client abstract class, ANSPApplication . This extension includes the definition of a handling routine called receive(), which is invoked when a stub receives a new capsule.IV. ANSP E XAMPLE : T RACE -R OUTEA testbed has been created to verify the design correctnessof ANSP in heterogeneous environments. There are three types of router setting on this testbed:1. Router that contains ANTS and a ANSP daemonrunning on behalf of ASP; 2. Router that contains ASP and a ANSP daemon thatruns on behalf of ANTS; 3. Router that contains both ASP and ANTS.The prototype is written in Java [11] with a traceroute testing program. The program records the execution environments of all intermediate routers that it has visited between the source and destination. It also measures the RTT between them. Figure 4 shows the GUI from the application client, and it finds three execution environments along the path: ASP, ANTS, and ASP. The execution sequence of the traceroute program is shown in Figure 5.Figure 4. The GUI for the TRACEROUTE Program.The TraceCapsule program code is created byextending the ANSPCapsule abstract class. When execute() starts, it checks the Boolean value of returning to determine if it is returning from the destination. It is set to true if TraceCapsule is traveling back to the source router; otherwise it is false . When traveling towards the destination, TraceCapsule keeps track of the environments and addresses of the routers it has visited in two arrays, path and trace , respectively. When it arrives at a new router, it calls addHop() to append the router address and its environment to these two arrays. When it finally arrives at the destination, it sets returning to false and forwards itself back to the source by calling send().When it returns to source, it invokes deliverToApp() to deliver itself to the application stub that has been running at the source. TraceCapsule carries information in its data field through the networks by executing encode() and decode(), which encapsulates and de-capsulates its data using External Data Representation (XDR) respectively. The syntax of ANSP XDR follows the syntax of XDR library from ANTS. length() in TraceCapsule returns the data length, or it can be calculated by using the primitive types in the XDRlibrary.Figure 5. Flow of the TRACEROUTE Capsules.V. C ONCLUSIONSIn this paper, we present a new unified layered architecture for active networks. The new model is known as Active Network Socket Programming (ANSP). It allows each active application to be written once and run on multiple environments in active networks. Our experiments successfully verify the design of ANSP architecture, and it has been successfully deployed to work harmoniously with ANTS and ASP without making any changes to their architectures. In fact, the unified programming interface layer is light-weighted and can be dynamically deployable upon request.R EFERENCES[1] D.D. Clark, D.L. Tennenhouse, “Architectural Considerations for a NewGeneration of Protocols,” in Proc. ACM Sigcomm’90, pp.200-208, 1990. [2] D. Tennenhouse, J. M. Smith, W. D. Sicoskie, D. J. Wetherall, and G. J.Minden, “A survey of active network research,” IEEE Communications Magazine , pp. 80-86, Jan 1997.[3] D. Wetherall, U. Legedza, and J. Guttag, “Introducing new internetservices: Why and how,” IEEE Network Magazine, July/August 1998. [4] M. Fry, A. Ghosh, “Application Layer Active Networking,” in ComputerNetworks , Vol.31, No.7, pp.655-667, 1999.[5] K. W. Chin, “An Investigation into The Application of Active Networksto Mobile Computing Environments”, Curtin University of Technology, March 2000.[6] S. Bhattacharjee, K. L. Calvert, and E. W. Zegura, “Self OrganizingWide-Area Network Caches”, Proc. IEEE INFOCOM ’98, San Francisco, CA, 29 March-2 April 1998.[7] L. H. Leman, S. J. Garland, and D. L. Tennenhouse, “Active ReliableMulticast”, Proc. IEEE INFOCOM ’98, San Francisco, CA, 29 March-2 April 1998.[8] D. Descasper, G. Parulkar, B. Plattner, “A Scalable, High PerformanceActive Network Node”, In IEEE Network, January/February 1999.[9] E. L. Nygren, S. J. Garland, and M. F. Kaashoek, “PAN: a high-performance active network node supporting multiple mobile code system”, In the Proceedings of the 2nd IEEE Conference on Open Architectures and Network Programming (OpenArch ’99), March 1999. [10] D. L. Tennenhouse, and D. J. Wetherall. “Towards an Active NetworkArchitecture”, In Proceeding of Multimedia Computing and Networking , January 1996.[11] D. J. Wetherall, J. V. Guttag, D. L. Tennenhouse, “ANTS: A toolkit forBuilding and Dynamically Deploying Network Protocols”, Open Architectures and Network Programming, 1998 IEEE , 1998 , Page(s): 117 –129.[12] B. Braden, A. Cerpa, T. Faber, B. Lindell, G. Phillips, and J. Kann.“Introduction to the ASP Execution Environment”: /active-signal/ARP/index.html .[13] “The java language: A white paper,” Tech. Rep., Sun Microsystems,1998.。
金刚钻的工业化运用一个程序一般需要50至70美网。
在这样的切割频率下,工具的负载量是比较低的。
而欧洲这样的程序下金刚钻的模型是完全不一样的!在我国,在这样的程序下,普遍金刚钻工具在非常自由的切割条件下,产品是不规则的易碎的微粒!在欧洲因为各种因素,情况是不同的。
因为欧洲的生活水平远高于我国,因此,他们的劳动力成本也要高。
为了使欧洲最大的石材生产商保持竞争力,他们必须要把注意力从原材料转移到生产的有效输出和最大化输出。
这就要求产品从原材料到成品的生产过程中尽可能减小能源的耗费和不必要的浪费。
该方法需要机床技术能够高速运作和先进的加工,可进行可靠的长时间持续的,无人值守操作。
在20世纪90年代,在机械和金刚石工具技术方面有很大的发展,使产量增加和降低生产成本。
如果我们对比一下欧洲和中国生产标准,我们可以看到在机器和工具的生产方面,中欧存在很大的差距。
在欧洲,制造这些瓷砖几乎是完全自动的,因为高效率的机械设计和自动处理设施。
最新一代的锯床这种应用能够使用主轴高达80分直径锯片。
机器和工具的设计,在达到下列的参数下,切割率是可以更快的。
•表面速度:- 25 – 35m / s•切削深度:-1mm•大桥速度:- 17m/min•切割速度:- IPOcm/5min或1m/h每个刀片•机输出:- 640m/5day(8小时每天)在这样的条件下,生产浪费减至最低,产量确更高。
通常情况下,在欧洲,刀片会产生10mm的缺口,而中国有12mm。
并且相对于中国12-15mm的切面的切口,欧洲只有10-12mm的切口。
在实现生产最大化材料处理和优化加工时间也是关键,厚片的切据被自动转移到自动的二次加工。
在这样精确的切割率下,对于金刚钻工具的要求是很高的,在程序控制下,型号和尺寸与中国的标准下是有很大不同的。
由于切割率相对高很多,最通常的尺寸是30-50。
切割率高,意味着工具的负载量也高,金刚钻的性质也会不一样!金刚钻的要求一般都是统一的,强大,块状颗粒,这是使在长时间的高负荷下,保持高产量。
金刚钻的工业化运用一个程序一般需要50至70美网。
在这样的切割频率下,工具的负载量是比较低的。
而欧洲这样的程序下金刚钻的模型是完全不一样的!在我国,在这样的程序下,普遍金刚钻工具在非常自由的切割条件下,产品是不规则的易碎的微粒!在欧洲因为各种因素,情况是不同的。
因为欧洲的生活水平远高于我国,因此,他们的劳动力成本也要高。
为了使欧洲最大的石材生产商保持竞争力,他们必须要把注意力从原材料转移到生产的有效输出和最大化输出。
这就要求产品从原材料到成品的生产过程中尽可能减小能源的耗费和不必要的浪费。
该方法需要机床技术能够高速运作和先进的加工,可进行可靠的长时间持续的,无人值守操作。
在20世纪90年代,在机械和金刚石工具技术方面有很大的发展,使产量增加和降低生产成本。
如果我们对比一下欧洲和中国生产标准,我们可以看到在机器和工具的生产方面,中欧存在很大的差距。
在欧洲,制造这些瓷砖几乎是完全自动的,因为高效率的机械设计和自动处理设施。
最新一代的锯床这种应用能够使用主轴高达80分直径锯片。
机器和工具的设计,在达到下列的参数下,切割率是可以更快的。
•表面速度:- 25 – 35m / s•切削深度:-1mm•大桥速度:- 17m/min•切割速度:- IPOcm/5min或1m/h每个刀片•机输出:- 640m/5day(8小时每天)在这样的条件下,生产浪费减至最低,产量确更高。
通常情况下,在欧洲,刀片会产生10mm的缺口,而中国有12mm。
并且相对于中国12-15mm的切面的切口,欧洲只有10-12mm的切口。
在实现生产最大化材料处理和优化加工时间也是关键,厚片的切据被自动转移到自动的二次加工。
在这样精确的切割率下,对于金刚钻工具的要求是很高的,在程序控制下,型号和尺寸与中国的标准下是有很大不同的。
由于切割率相对高很多,最通常的尺寸是30-50。
切割率高,意味着工具的负载量也高,金刚钻的性质也会不一样!金刚钻的要求一般都是统一的,强大,块状颗粒,这是使在长时间的高负荷下,保持高产量。
Development of polymer-based sensors for integration into a wireless data acquisition system suitable for monitoring environmental and physiological processesBiomolecular Engineering Volume 23, Issue 5, October 2006, Pages 253-257AbstractIn this work, the pressure sensing properties of polyethylene (PE) and polyvinylidene fluoride (PVDF) polymer films were evaluated by integrating them with a wireless data acquisition system. Each device was connected to an integrated interface circuit, which includes a capacitance to frequency converter (C/F) and an internal voltage regulator to suppress supply voltage fluctuations on the transponder side. The system was tested under hydrostatic pressures ranging from 0 to 17 kPa. Results show PE to be the more sensitive to pressure changes, indicating that it is useful for the accurate measurement of pressure over a small range. On the other hand PVDF devices could be used for measurement over a wider range and should be considered due to the low hysteresis and good repeatability displayed during testing. It is thought that this arrangement could form the basis of a cost-effective wireless monitoring system for the evaluation of environmental or physiological processes.Keywords: Pressure; Thick film; Polymers; Sensor; Wireless1. IntroductionIn many professions and industries, the ability to make measurements in difficult to reach or dangerous environments without risking the health of an individual is now a necessity. A way of wirelessly transmitting data from the sensor, which is at the point of interest, to a remote receiver is required. Using this approach, sensors can be implanted in a difficult to reach or harsh environment and left there for a period of time. Sensors designed to measure any number of parameters including pressure, conductivity and pH could be used (Barrie, 1992, Astaras, 2002and Flick and Orglmeister, 2000). Data transfer is typically achieved using radio frequencies to send information to a receiver, which is remote from the area of interest.Apart from industrial and environmental applications, these acquisition systems could revolutionise the healthcare system in a number of areas. They could find applications in the treatment of patients which have experienced extreme traumas by monitoring critical parameters such as intra-cranial pressure (Flick and Orglmeister, 2000). However, in a more routine setting they could also be used to make long term measurements of biological fluid pressure for clinical studies in several areas, such as cardiology, pulmonology and gastroenterology (Yang et al., 2003). In the future, it may even be possible to monitor patients while they reside in their home or continue to work (Budinger, 2003).With these applications in mind, a wireless data acquisition system, including a capacitance to frequency converter (C/F) and an internal voltage regulator to provide a stable operation has been developed. The circuitry was developed to minimise power consumption, as power will not be randomly available in the test environment. The system was developed specifically for the measurement of pressure. Two capacitive structures were formed using polyethylene (PE) and polyvinylidene fluoride (PVDF) for the sensing layer. These materials were chosen for their biocompatibleand mechanical properties. Capacitive structures are preferred as they lead to lower power consumption and higher sensitivity than their piezoelectric counterparts (Puers, 1993).PVDF is a low-density semi-crystalline material, consisting of longrepeating chains of CF2CH2molecules. The crystalline regionconsists of a number of polymorphs, of which the α- and β-phase are most common. The β-phase is piezoelectric and has many advantages including its mechanical strength, wide dynamic range, flexibility and ease of fabrication (Payne and Chen, 1990). Poled PVDF films have been employed in the development of devices, which can be used in a wide range of applications, for example, providing robots with tactile sensors and the measurement of explosive forces (Payne et al., 1990and Bauer, 1999). In a medical context, poled PVDF films have been popular in the development of plantar pressure-measurement systems, where their flexibility and the ease with which electrode patterns can be attached has been a particular advantage (Lee and Sung, 1999). Micromachined devices using PVDF as a flexible element in the system have also been developed for use in an endoscopic grasper because of its high force sensitivity, large dynamic range and good linearity (Dargahi et al., 1998).Polyethylene is a cost effective and versatile semi-crystalline polymerconsisting of repeating CH2CH2units. The most common forms arelow-density polyethylene (LDPE) and high-density polyethylene (HDPE), where the density is related to the degree of chain branching. It is a material which is useful in pressure-sensing applications and has been popular for use in the development of flexible electronics (Harsanyi, 1995 and Domenech et al., 2005). PE is particularly popular in the fabrication of polymer/carbon-black composites for pressure measurement (Zheng et al., 1999 and Xu et al., 2005). Furthermore, polyethylene terephtalate (PET)has been identified as an electret material with possible dynamic pressure sensing applications (Paajanen et al., 2000).In this work, both PE and PVDF films were formed into a sandwich capacitor, which was then subjected to changing hydrostatic pressures. The films deformed under pressure and the resulting change in capacitance was transmitted wirelessly through the liquid to an external receiver, which converts the signal to a corresponding voltage.2. Experimental procedureThe sensing layers were in the form of films with thickness of approximately 100 μm. The PVDF film has a dominant β-phase and was purchased from Precision Acoustics Ltd. The LDPE film was supplied from Goodfellow Cambridge Ltd. The Young's modulus of each material is an indication of how likely the material is to deform under applied pressure and is quoted to be 8.3 GPa and 0.1–0.3 GPa for PVDF and PE, respectively. To form the capacitors, DuPont 4929 silver paste was deposited using a DEK RS 1202 automatic screen-printer to form electrodes measuring15 mm × 10 mm. The sensor structure is shown in Fig. 1. This approach was used as difficulties in depositing other electrode materials on PVDF have been recorded (Payne and Chen, 1990). After deposition, the electrodes were dried in air and cured at 100 °C for 30 min. The electrical properties of each device were measured, from 1 Hz to 1 MHz, using a Solatron S1 1260 Impedance Gain/Phase Analyser.Fig. 1. Structure of the PVDF and PE capacitor.To evaluate the performance of each material under pressure, capacitors were individually connected to the interface and transmitter circuit. The sensor was protected using a thin, flexible waterproof membrane. The circuit was contained in a weatherproof housing. This was a rigid structure of dimensions 54 × 59 mm2 and was necessary to protect the electronics from the liquid environment. To connect the sensor to the interface an opening was drilled into the housing and the connections were made waterproof.The change in capacitance with increasing depth in a liquid environment was then recorded.The pressure in this case ranged from 0 to 17 kPa. The change in capacitance was converted to a frequency, which was wirelessly transmitted to an external receiver. The transmitter and receiver are battery powered. A comparison of the power requirements, this circuit (marked with an asterisk) is compared to other standard interface circuits is shown in Table 1. A block diagram of the transmitter and receiver system can be seen in Fig. 2.Table 1.Power consumption for sensor interface circuitsThe main element of the sensor interface circuit is an integrated capacitance to frequency converter, which is used to link the sensoris the sensor capacitance,converted to voltage levels using phase locked loop (PLL) unit. This IC is a micro-power device since it typically draws 20 μA. The relationship between the frequency (f) and the voltage (V) has been measured to bef=V×13.1kHz/V (2)The value of 13.1 kHz/V was found by measuring the slope of the change in frequency with voltage for the voltage-controlled oscillator as shown in Fig. 3. It should be noted that while the PLL unit reduces power supply, it creates a non-linear output signal. Therefore the sensor response will appear to be non-linear.Fig. 3. Measured F/V characteristics of the VCO.Finally, a Lloyd Instruments LR50k was used to evaluate the sensitivity of the PVDF material over a wider pressure range. The LR50k is commonly used to place materials under tension or compression. In this work, it was used in compression mode, increasing the load on the capacitor over time. The change in sensor output was measured using a HP 4192 A LF Impedance Analyser at a frequency of 100 kHz. The capacitor was repeatedly tested in the range 0–560 kPa.3. Results and discussionWhen parallel plate capacitors, such as those formed in this study, are placed under pressure, the thickness of the sensing layer changes, resulting in an alteration of the distance, d, between the electrodes or plates. When the pressure is applied uniformly, there is a correspondingly uniform change in d, which leads to a change in the overall capacitance, according to Eq. (3)(3)where, C is the capacitance, r, is the relative permittivity of the , is the permittivity of free space and A is the area of dielectric,othe capacitor plates. The capacitance was found to be 40 pF and 140 pF for the PE and PVDF sensors, respectively. The relative permittivity was measured to be 3.45 for PE and 9.27 for PVDF at a frequency of 1 MHz. The capacitance of both materials showed a high stability over a wide range of frequencies, as shown in Fig. 4, making them well suited for integration into the wireless data acquisition system. Previous work on thick film capacitors using a PZT and PVDF dielectric layer have shown that device sensitivity is affected by operating frequency (Arshak et al., 2000). The differences are attributed to changes in dissipation factor. The PE sensor showed a stable response, however there is some variation the capacitance of the PVDF sensor at higher frequencies. Therefore, operating frequency could be used to optimize the sensor response.Fig. 4. Variation of capacitance with frequency for PE and PVDF devices.Fig. 5shows the response of the PE and PVDF sensors to pressure in the range 0–17 kPa. It was observed that PE shows a higher sensitivity to pressure changes than the PVDF film. The change in voltage is related to the capacitance change, which is a direct result of deformation of the dielectric layer under pressure. For the PE sensor, the voltage changes by 20 mV over the entire range. For the PVDF sensor the change is 5 mV. The relationship between capacitance and voltage is shown in Eqs. (1)and (2). Therefore, it can be seen from the results that PE sensors show the highest sensitivity, and are well suited to pressure measurement over the range tested. On the other hand PVDF devices may be more useful for measurements over larger ranges. For example, shock sensors based on PVDF are used to measure impact pressures up to 12 GPa (Bauer, 1999).Fig. 5. Change in voltage with pressure in the range 0–17 kPa for the PE and PVDF sensors.In order to investigate the behaviour of the PVDF over a large pressure range, it was tested using an LR50k and the results are shown in Fig. 6. It can be seen that the material showed a high sensitivity, particularly for pressures up to 100 kPa. It is thought that the dissimilarity in Young's modulus can explain their different behaviour under pressure. PVDF is a tougher, more resilient material that PE and so it requires higher pressures, to achieve a measurable change in capacitance. Alternatively, PE will deform more easily, resulting in larger changes in capacitance over a reduced pressure range.Fig. 6. Relative change in capacitance for PVDF sensors, tested using a Lloyd Instruments LR50k.The maximum difference between loading and unloading cycles was measured and expressed as a percentage of the full-scale deviation in order to calculate the hysteresis. Values ranging from 6 to 30% have previously been calculated for polymer thick film devices (Arshak et al., 1995 and Arshak et al., 2000). In this work, the hysteresis was calculated to be 5% and 6% for the PE and PVDF sensors, respectively, as shown in Fig.7. This corresponds well with the values quoted above.Fig. 7. Hysteresis of (a) the PE sensor and (b) the PVDF sensor as measured for one loading and unloading cycle.Each device was also subjected to repeated cycling, in order to establish its repeatability (the maximum difference between output readings as determined by two calibrating cycles). Five cycles are shown for PE in Fig. 8(a) and PVDF in Fig. 8 (b). The repeatability was calculated to be 10% and 6% for PE and PVDF, respectively. This can be attributed to movement of the polymer chains while they are under pressure (Arshak et al., 1995). The more rigid nature of the PVDF can explain the lower percentage repeatability, as it does not suffer the same degree of slippage.Fig. 8. Repeatability of (a) the PE sensor and (b) the PVDF sensor as measured for five loading cycles.From the results shown above, it can be seen that both PE and PVDF have shown a good sensitivity to pressure. The measured levels of hysteresis and repeatability are similar to that previously measured for polymer devices (Arshak et al., 1995and Arshak et al., 2000). PVDF is best suited to the measurement of pressure in the range 0–100 kPa. PE could also be used over this range, but it is expected that because of its lower Young's modulus, the sensor would experience a high level of hysteresis and slippage of the polymer chains during its operation. However, for medical purposes, it is not likely, that measurements over a pressure of 40 kPa will be required. In this respect, PE is more suited for the measurement of physiological processes.4. ConclusionIn this work, the pressure sensing properties of sandwich capacitors based on PE and PVDF were evaluated using a specially constructed data acquisition system. It was seen that each material displayed a high sensitivity to pressure changes in the range 0–17 kPa. It was found that the PE sensors were the most sensitive, but each device displayed low hysteresis and repeatability. It can be concluded that PE is the most sensitive to pressures over a small range, however PVDF could find applications in systems where pressures measurements over a large range are required. Further evidence for this was found by testing the PVDF samples using an LR50k where they showed a high sensitivity to pressures from 0 to 100 kPa.AcknowledgementsThis research was supported by the Enterprise Ireland Commercialization Fund 2003, under the technology development phase, as part of the MIAPS project, reference No. CFTD/03/425. Funding was also received from theIrish Research Council for Science, Engineering and Technology: funded by the National Development Plan.ReferencesArshak et al., 2000K.I. Arshak, D. McDonagh and M.A. Durcan, Development of new capacitive strain sensors based on thick film polymer and cermet technologies., Sens. Acutators A-Phys.79(2000), pp. 102–114. Article | PDF (662 K) | View Record in Scopus | Cited By in Scopus (27) Arshak et al., 1995K.I. Arshak, A.K. Ray, C.A. Hogarth, D.G. Collins and F. Ansari, An analysis of polymeric thick-film resistors as pressure sensors,Sens. Acutators A-Phys.49 (1995), pp. 41–45. Abstract | PDF (346 K) | View Record in Scopus | Cited By in Scopus (18)Astaras, 2002Astaras, A., Ahmadian, M., Aydin, N., Cui, L., Johannessen, E., Tang, T.-B., Wang, L., Arslan, T., Beaumont, S.P., Flynn, B.W., Murray, A.F., Reid, S.W., Yam, P., Cooper, J.M., Cumming, R.S., 2002. A miniature integrated electronics sensor capsule for real-time monitoring of the gastrointestinal tract (IDEAS). IEEE ICBME conference: The Bio-Era: New Challenges, New Frontiers, Singapore.Barrie, 1992 S.A. Barrie, A Textbook of Natural Medicine: Heidelberg pH Capsule Gastric Analysis, Churchill Livingstone, New York (1992). Bauer, 1999 Bauer, F., 1999. Advances in Piezoelectric PVDF Shock Compression Sensors. 10th International Symposium on Electrets, 1999, ISE 10, Delphi, Greece, 647–650.。
南京理工大学毕业设计(论文)外文资料翻译教学点: 南京信息职业技术学院专业:电子信息工程姓名:陈洁学号:014910253034外文出处:《Pci System Architecture 》(用外文写)附件: 1.外文资料翻译译文;2。
外文原文。
附件1:外文资料翻译译文64位PCI扩展1.64位数据传送和64位寻址:独立的能力PCI规范给出了允许64位总线主设备与64位目标实现64位数据传送的机理。
在传送的开始,如果回应目标是一个64位或32位设备,64位总线设备会自动识别.如果它是64位设备,达到8个字节(一个4字)可以在每个数据段中传送。
假定是一串0等待状态数据段。
在33MHz总线速率上可以每秒264兆字节获取(8字节/传送*33百万传送字/秒),在66MHz总线上可以528M字节/秒获取.如果回应目标是32位设备,总线主设备会自动识别并且在下部4位数据通道上(AD[31::00])引导,所以数据指向或来自目标。
规范也定义了64位存储器寻址功能。
此功能只用于寻址驻留在4GB地址边界以上的存储器目标。
32位和64位总线主设备都可以实现64位寻址。
此外,对64位寻址反映的存储器目标(驻留在4GB地址边界上)可以看作32位或64位目标来实现。
注意64位寻址和64位数据传送功能是两种特性,各自独立并且严格区分开来是非常重要的。
一个设备可以支持一种、另一种、都支持或都不支持。
2.64位扩展信号为了支持64位数据传送功能,PCI总线另有39个引脚。
●REQ64#被64位总线主设备有效表明它想执行64位数据传送操作.REQ64#与FRAME#信号具有相同的时序和间隔。
REQ64#信号必须由系统主板上的上拉电阻来支持.当32位总线主设备进行传送时,REQ64#不能又漂移。
●ACK64#被目标有效以回应被主设备有效的REQ64#(如果目标支持64位数据传送),ACK64#与DEVSEL#具有相同的时序和间隔(但是直到REQ64#被主设备有效,ACK64#才可被有效).像REQ64#一样,ACK64#信号线也必须由系统主板上的上拉电阻来支持。
Design and Characterization of Single Photon APD Detector forQKD ApplicationAbstractModeling and design of a single photon detector and its various characteristics are presented. It is a type of avalanche photo diode (APD) designed to suit the requirements of a Quantum Key Distribution (QKD) detection system. The device is modeled to operate in a gated mode at liquid nitrogen temperature for minimum noise and maximum gain. Different types of APDs are compared for best performance. The APD is part of an optical communication link, which is a private channel to transmit the key signal. The encrypted message is sent via a public channel. The optical link operates at a wavelength of 1.55μm. The design is based on InGaAs with a quantum efficiency of more than 75% and a multiplication factor of 1000. The calculated dark current is below 10-12A with an overall signal to noise ratio better than 18dB. The device sensitivity is better than -40dBm, which is more than an order of magnitude higher than the dark current, corresponding to a detection sensitivity of two photons in pico-second pulses.I. INTRODUCTIONPhoton detectors sensitive to extremely low light levels are needed in a variety of applications. It was not possible to introduce these devices commercially several years ago because of the stringent requirements of QKD. Research efforts however resulted in photon detectors with reasonably good performance characteristics. The objective here is to model a single photon detector of high sensitivity, suitable for a QKD system. The detector is basically an APD, which needs cooling to very low temperature (77K) for the dark current to be low. The wavelength of interest is 1.55μm. Different applications may impose different requirements, and hence the dependence of the various parameters on wavelength, temperature, responsivity, dark current, noise etc, are modeled. Comparison of the results from calculations based on a suitable model provides amenable grounds to determine the suitability of each type of APD for a specific application.Attacks on communication systems in recent years have become a main concern accompanying the technological advances. The measures and counter measures against attacks have driven research effort towards security techniques that aim at absolute infallibility. Quantum Mechanics is considered one of the answers, due to inherent physical phenomena. QKD systems which depend on entangled pairs orpolarization states will inevitably require the usage of APDs in photon detection systems. How suitable these detectors may be, depends on their ability to detect low light level signals, in other words “photon counting”. It is therefore anticipated that the application of high security systems will be in high demand in a variety of fields such as banking sector, military, medical care, e-commerce, e-government etc.Ⅱ. AV ALANCHE PHOTO DIODEA. Structure of the APDFig. 1 shows a schematic diagram of the structure of an APD. The APD is a photodiode with a built-in amplification mechanism. The applied reverse potential difference causes accelerates photo-generated carriers to very high speeds so that a transfer of momentum occurs upon collisions, which liberates other electrons. Secondary electrons are accelerated in turn and the result is an avalanche process. The photo generated carriers traverse the high electric field region causing further ionization by releasing bound electrons in the valence band upon collision. This carrier generation mechanism is known as impact ionization. When carriers collide with the crystal lattice, they lose some energy to the crystal. If the kinetic energy of a carrier is greater than the band-gap, the collision will free a bound electron. The free electrons and holes so created also acquire enough energy to cause further impact ionization. The result is an avalanche, where the number of free carriers grows exponentially as the process continues.The number of ionization collisions per unit length for holes and electrons is designated ionization coefficients αn and αp, respectively. The type of materials and their band structures are responsible for the variation in αn and αp. Ionization coefficients also depend on the applied electric field according tothe following relationship:,exp[]n p b a Eαα=- (1) For αn = αp = α, the multiplication factor, M takes the form11aW M -= (2)W is the width of the depletion region. It can be observed that M tends to ∞ when αW →1, whichsignifies the condition for junction breakdown. Therefore, the high values of M can be obtained whenthe APD is biased close to the breakdown region.The thickness of the multiplication region for M = 1000, has been calculated and compared withthose found by other workers and the results are shown in Table 1. The layer thickness for undoped InPis 10μm, for a substrate thickness of 100μm .The photon-generated electron-hole pairs in the absorption layer are accelerated under theinfluence of an electric field of 3.105V/cm. The acceleration process is constantly interrupted by randomcollisions with the lattice. The two competing processes will continue until eventually an averagesaturation velocity is reached. Secondary electron-hole pairs are generated at any time during theprocess, when they acquire energy larger than the band gap Eg. The electrons are then accelerated andmay cause further impact ionization.Impact ionization of holes due to bound electrons is not as effective as that due to free electrons.Hence, most of the ionization is achieved by free electrons. The avalanche process then proceedsprincipally from the p to the n side of the device. It terminates after a certain time, when the electronsarrive at the n side of the depletion layer. Holes moving to the left create electrons that move to the right,which in turn generate further holes moving to the left in a possibly unending circulation. Although this feedback process increases the gain of the device, it is nevertheless undesirable for several reasons. First, it is time consuming and reduces the device bandwidth. Second, it is a random process and therefore increases the noise in the device. Third, it is unstable, which may cause avalanche breakdown.It may be desirable to fabricate APDs from materials that permit impact ionization by only one type of carriers either electrons or holes. Photo detector materials generally exhibit different ionization rates for electrons and holes. The ratio ofthe two ionization rates k = βi/αi is a measure of the photodiode performance. If for example, electrons have higher ionization coefficient, optimal behavior is achieved by injecting electrons of photo-carrier pairs at the p-type edge of the depletion layer and by using a material with k value as small as possible. If holes are injected, they should be injected at the n-type edge of the depletion layer and k should be as large as possible. Single-carrier multiplication is achieved ideally, when k = 0 with electrons or with k = ∞for holes.B.Geiger ModeGeiger mode (GM) operation means that the diode is operated slightly above the breakdown threshold voltage, where a single electron–hole pair can trigger a strong avalanche. In the case of such an event, the electronics reduce the diode voltage to below the threshold value for a short time called “dead time”, during which the avalanche is stopped and the detector is made ready to detect the next batch of photons. GM operation is one of the basic of Quantum Counting techniques when utilizing an avalanche process (APD) that increases the detector efficiency significantly.There are a number of parameters related to Geiger mode. The general idea however is to temporarily disturb the equilibrium inside the APD.The Geiger mode is placing the APD in a gated regime and the bias is raised above the breakdownvoltage for a short period of time. Fig. 2 shows the parameters characterizing the Geiger operation. The rise and fall times of the edges are neglected because they are made fast. Detection of single photons occurs during the gate window.作者:Khalid A. S. Al-Khateeb, Nazmus Shaker Nafi, Khalid Hasan国籍:美国出处:Computer and Communication Engineering (ICCCE), 2010 International Conference on 11-12 May 2010用于量子密钥的单光子APD探测器设计摘要本文提出的是单光子探测器及其各种特性的建模与设计。
Bid Compensation Decision Model for Projectswith Costly Bid PreparationS.Ping Ho,A.M.ASCE 1Abstract:For projects with high bid preparation cost,it is often suggested that the owner should consider paying bid compensation to the most highly ranked unsuccessful bidders to stimulate extra effort or inputs in bid preparation.Whereas the underlying idea of using bid compensation is intuitively sound,there is no theoretical basis or empirical evidence for such suggestion.Because costly bid preparation often implies a larger project scale,the issue of bid compensation strategy is important to practitioners and an interest of study.This paper aims to study the impacts of bid compensation and to develop appropriate bid compensation strategies.Game theory is applied to analyze the behavioral dynamics between competing bidders and project owners.A bid compensation model based on game theoretic analysis is developed in this study.The model provides equilibrium solutions under bid compensation,quantitative formula,and quali-tative implications for the formation of bid compensation strategies.DOI:10.1061/(ASCE )0733-9364(2005)131:2(151)CE Database subject headings:Bids;Project management;Contracts;Decision making;Design/build;Build/Operate/Transfer;Construction industry .IntroductionAn often seen suggestion in practice for projects with high bid preparation cost is that the owner should consider paying bid compensation,also called a stipend or honorarium,to the unsuc-cessful bidders.For example,according to the Design–build Manual of Practice Document Number 201by Design–Build In-stitute of America (DBIA )(1996a ),it is suggested that that “the owner should consider paying a stipend or honorarium to the unsuccessful proposers”because “excessive submittal require-ments without some compensation is abusive to the design–build industry and discourages quality teams from participating.”In another publication by DBIA (1995),it is also stated that “it is strongly recommended that honorariums be offered to the unsuc-cessful proposers”and that “the provision of reasonable compen-sation will encourage the more sought-after design–build teams to apply and,if short listed,to make an extra effort in the prepara-tion of their proposal.”Whereas bid preparation costs depend on project scale,delivery method,and other factors,the cost of pre-paring a proposal is often relatively high in some particular project delivery schemes,such as design–build or build–operate–transfer (BOT )contracting.Plus,costly bid preparation often im-plying a large project scale,the issue of bid compensation strat-egy should be important to practitioners and of great interest of study.Existing research on the procurement process in constructionhas addressed the selection of projects that are appropriate for certain project delivery methods (Molenaar and Songer 1998;Molenaar and Gransberg 2001),the design–build project procure-ment processes (Songer et al.1994;Gransberg and Senadheera 1999;Palaneeswaran and Kumaraswamy 2000),and the BOT project procurement process (United Nations Industrial Develop-ment Organization 1996).However,the bid compensation strat-egy for projects with a relatively high bid preparation cost has not been studied.Among the issues over the bidder’s response to the owner’s procurement or bid compensation strategy,it is in own-er’s interest to understand how the owner can stimulate high-quality inputs or extra effort from the bidder during bid prepara-tion.Whereas the argument for using bid compensation is intuitively sound,there is no theoretical basis or empirical evi-dence for such an argument.Therefore,it is crucial to study under what conditions the bid compensation is effective,and how much compensation is adequate with respect to different bidding situa-tions.This paper focuses on theoretically studying the impacts of bid compensation and tries to develop appropriate compensation strategies for projects with a costly bid preparation.Game theory will be applied to analyze the behavioral dynamics between com-peting bidders.Based on the game theoretic analysis and numeric trials,a bid compensation model is developed.The model pro-vides a quantitative framework,as well as qualitative implica-tions,on bid compensation strategies.Research Methodology:Game TheoryGame theory can be defined as “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers”(Myerson 1991).Among economic theories,game theory has been successfully applied to many important issues such as negotiations,finance,and imperfect markets.Game theory has also been applied to construction management in two areas.Ho (2001)applied game theory to analyze the information asymme-try problem during the procurement of a BOT project and its1Assistant Professor,Dept.of Civil Engineering,National Taiwan Univ.,Taipei 10617,Taiwan.E-mail:spingho@.twNote.Discussion open until July 1,2005.Separate discussions must be submitted for individual papers.To extend the closing date by one month,a written request must be filed with the ASCE Managing Editor.The manuscript for this paper was submitted for review and possible publication on March 5,2003;approved on March 1,2004.This paper is part of the Journal of Construction Engineering and Management ,V ol.131,No.2,February 1,2005.©ASCE,ISSN 0733-9364/2005/2-151–159/$25.00.D o w n l o a d e d f r o m a s c e l i b r a r y .o r g b y N A N J I N G U N I VE R S I T Y OF o n 01/06/14. C o p y r i g h t A S C E . F o r p e r s o n a l u s e o n l y ; a l l r i g h t s r e s e r v e d .implication in project financing and government policy.Ho and Liu (2004)develop a game theoretic model for analyzing the behavioral dynamics of builders and owners in construction claims.In competitive bidding,the strategic interactions among competing bidders and that between bidders and owners are com-mon,and thus game theory is a natural tool to analyze the prob-lem of concern.A well-known example of a game is the “prisoner’s dilemma”shown in Fig.1.Two suspects are arrested and held in separate cells.If both of them confess,then they will be sentenced to jail for 6years.If neither confesses,each will be sentenced for only 1year.However,if one of them confesses and the other does not,then the honest one will be rewarded by being released (in jail for 0year )and the other will be punished for 9years in jail.Note that in each cell,the first number represents player No.1’s payoff and the second one represents player No.2’s.The prisoner’s dilemma is called a “static game,”in which they act simultaneously;i.e.,each player does not know the other player’s decision before the player makes the decision.If the payoff matrix shown in Fig.1is known to all players,then the payoff matrix is a “common knowledge”to all players and this game is called a game of “complete information.”Note that the players of a game are assumed to be rational;i.e.,to maximize their payoffs.To answer what each prisoner will play/behave in this game,we will introduce the concept of “Nash equilibrium ,”one of the most important concepts in game theory.Nash equilibrium is a set of actions that will be chosen by each player.In a Nash equilib-rium,each player’s strategy should be the best response to the other player’s strategy,and no player wants to deviate from the equilibrium solution.Thus,the equilibrium or solution is “strate-gically stable”or “self-enforcing”(Gibbons 1992).Conversely,a nonequilibrium solution is not stable since at least one of the players can be better off by deviating from the nonequilibrium solution.In the prisoner’s dilemma,only the (confess,confess )solution where both players choose to confess,satisfies the stabil-ity test or requirement of Nash equilibrium.Note that although the (not confess,not confess )solution seems better off for both players compared to Nash equilibrium;however,this solution is unstable since either player can obtain extra benefit by deviating from this solution.Interested readers can refer to Gibbons (1992),Fudenberg and Tirole (1992),and Myerson (1991).Bid Compensation ModelIn this section,the bid compensation model is developed on the basis of game theoretic analysis.The model could help the ownerform bid compensation strategies under various competition situ-ations and project characteristics.Illustrative examples with nu-merical results are given when necessary to show how the model can be used in various scenarios.Assumptions and Model SetupTo perform a game theoretic study,it is critical to make necessary simplifications so that one can focus on the issues of concern and obtain insightful results.Then,the setup of a model will follow.The assumptions made in this model are summarized as follows.Note that these assumptions can be relaxed in future studies for more general purposes.1.Average bidders:The bidders are equally good,in terms oftheir technical and managerial capabilities.Since the design–build and BOT focus on quality issues,the prequalification process imposed during procurement reduces the variation of the quality of bidders.As a result,it is not unreasonable to make the “average bidders”assumption.plete information:If all players consider each other tobe an average bidder as suggested in the first assumption,it is natural to assume that the payoffs of each player in each potential solution are known to all players.3.Bid compensation for the second best bidder:Since DBIA’s(1996b )manual,document number 103,suggests that “the stipend is paid only to the most highly ranked unsuccessful offerors to prevent proposals being submitted simply to ob-tain a stipend,”we shall assume that the bid compensation will be offered to the second best bidder.4.Two levels of efforts:It is assumed that there are two levelsof efforts in preparing a proposal,high and average,denoted by H and A ,respectively.The effort A is defined as the level of effort that does not incur extra cost to improve quality.Contrarily,the effort H is defined as the level of effort that will incur extra cost,denoted as E ,to improve the quality of a proposal,where the improvement is detectable by an effec-tive proposal evaluation system.Typically,the standard of quality would be transformed to the evaluation criteria and their respective weights specified in the Request for Pro-posal.5.Fixed amount of bid compensation,S :The fixed amount canbe expressed by a certain percentage of the average profit,denoted as P ,assumed during the procurement by an average bidder.6.Absorption of extra cost,E :For convenience,it is assumedthat E will not be included in the bid price so that the high effort bidder will win the contract under the price–quality competition,such as best-value approach.This assumption simplifies the tradeoff between quality improvement and bid price increase.Two-Bidder GameIn this game,there are only two qualified bidders.The possible payoffs for each bidder in the game are shown in a normal form in Fig.2.If both bidders choose “H ,”denoted by ͑H ,H ͒,both bidders will have a 50%probability of wining the contract,and at the same time,have another 50%probability of losing the con-tract but being rewarded with the bid compensation,S .As a re-sult,the expected payoffs for the bidders in ͑H ,H ͒solution are ͑S /2+P /2−E ,S /2+P /2−E ͒.Note that the computation of the expected payoff is based on the assumption of the average bidder.Similarly,if the bidders choose ͑A ,A ͒,the expected payoffswillFig.1.Prisoner’s dilemmaD o w n l o a d e d f r o m a s c e l i b r a r y .o r g b y N A N J I N G U N I VE R S I T Y OF o n 01/06/14. C o p y r i g h t A S C E . F o r p e r s o n a l u s e o n l y ; a l l r i g h t s r e s e r v e d .be ͑S /2+P /2,S /2+P /2͒.If the bidders choose ͑H ,A ͒,bidder No.1will have a 100%probability of winning the contract,and thus the expected payoffs are ͑P −E ,S ͒.Similarly,if the bidders choose ͑A ,H ͒,the expected payoffs will be ͑S ,P −E ͒.Payoffs of an n -bidder game can be obtained by the same reasoning.Nash EquilibriumSince the payoffs in each equilibrium are expressed as functions of S ,P ,and E ,instead of a particular number,the model will focus on the conditions for each possible Nash equilibrium of the game.Here,the approach to solving for Nash equilibrium is to find conditions that ensure the stability or self-enforcing require-ment of Nash equilibrium.This technique will be applied throughout this paper.First,check the payoffs of ͑H ,H ͒solution.For bidder No.1or 2not to deviate from this solution,we must haveS /2+P /2−E ϾS →S ϽP −2E͑1͒Therefore,condition (1)guarantees ͑H ,H ͒to be a Nash equilib-rium.Second,check the payoffs of ͑A ,A ͒solution.For bidder No.1or 2not to deviate from ͑A ,A ͒,condition (2)must be satisfiedS /2+P /2ϾP −E →S ϾP −2E͑2͒Thus,condition (2)guarantees ͑A ,A ͒to be a Nash equilibrium.Note that the condition “S =P −2E ”will be ignored since the con-dition can become (1)or (2)by adding or subtracting an infinitely small positive number.Thus,since S must satisfy either condition (1)or condition (2),either ͑H ,H ͒or ͑A ,A ͒must be a unique Nash equilibrium.Third,check the payoffs of ͑H ,A ͒solution.For bid-der No.1not to deviate from H to A ,we must have P −E ϾS /2+P /2;i.e.,S ϽP −2E .For bidder No.2not to deviate from A to H ,we must have S ϾS /2+P /2−E ;i.e.,S ϾP −2E .Since S cannot be greater than and less than P −2E at the same time,͑H ,A ͒solution cannot exist.Similarly,͑A ,H ͒solution cannot exist either.This also confirms the previous conclusion that either ͑H ,H ͒or ͑A ,A ͒must be a unique Nash equilibrium.Impacts of Bid CompensationBid compensation is designed to serve as an incentive to induce bidders to make high effort.Therefore,the concerns of bid com-pensation strategy should focus on whether S can induce high effort and how effective it is.According to the equilibrium solu-tions,the bid compensation decision should depend on the mag-nitude of P −2E or the relative magnitude of E compared to P .If E is relatively small such that P Ͼ2E ,then P −2E will be positive and condition (1)will be satisfied even when S =0.This means that bid compensation is not an incentive for high effort when the extra cost of high effort is relatively low.Moreover,surprisingly,S can be damaging when S is high enough such that S ϾP −2E .On the other hand,if E is relatively large so that P −2E is negative,then condition (2)will always be satisfied since S can-not be negative.In this case,͑A ,A ͒will be a unique Nash equi-librium.In other words,when E is relatively large,it is not in the bidder’s interest to incur extra cost for improving the quality of proposal,and therefore,S cannot provide any incentives for high effort.To summarize,when E is relatively low,it is in the bidder’s interest to make high effort even if there is no bid compensation.When E is relatively high,the bidder will be better off by making average effort.In other words,bid compensation cannot promote extra effort in a two-bidder game,and ironically,bid compensa-tion may discourage high effort if the compensation is too much.Thus,in the two-bidder procurement,the owner should not use bid compensation as an incentive to induce high effort.Three-Bidder GameNash EquilibriumFig.3shows all the combinations of actions and their respective payoffs in a three-bidder game.Similar to the two-bidder game,here the Nash equilibrium can be solved by ensuring the stability of the solution.For equilibrium ͑H ,H ,H ͒,condition (3)must be satisfied for stability requirementS /3+P /3−E Ͼ0→S Ͼ3E −P͑3͒For equilibrium ͑A ,A ,A ͒,condition (4)must be satisfied so that no one has any incentives to choose HS /3+P /3ϾP −E →S Ͼ2P −3E͑4͒In a three-bidder game,it is possible that S will satisfy conditions (3)and (4)at the same time.This is different from the two-bidder game,where S can only satisfy either condition (1)or (2).Thus,there will be two pure strategy Nash equilibria when S satisfies conditions (3)and (4).However,since the payoff of ͑A ,A ,A ͒,S /3+P /3,is greater than the payoff of ͑H ,H ,H ͒,S /3+P /3−E ,for all bidders,the bidder will choose ͑A ,A ,A ͒eventually,pro-vided that a consensus between bidders of making effort A can be reached.The process of reaching such consensus is called “cheap talk,”where the agreement is beneficial to all players,and no player will want to deviate from such an agreement.In the design–build or BOT procurement,it is reasonable to believe that cheap talk can occur.Therefore,as long as condition (4)is satis-fied,͑A ,A ,A ͒will be a unique Nash equilibrium.An important implication is that the cheap talk condition must not be satisfied for any equilibrium solution other than ͑A ,A ,A ͒.In other words,condition (5)must be satisfied for all equilibrium solution except ͑A ,A ,A͒Fig.2.Two-biddergameFig.3.Three-bidder gameD o w n l o a d e d f r o m a s c e l i b r a r y .o r g b y N A N J I N G U N I VE R S I T Y OF o n 01/06/14. C o p y r i g h t A S C E . F o r p e r s o n a l u s e o n l y ; a l l r i g h t s r e s e r v e d .S Ͻ2P −3E ͑5͒Following this result,for ͑H ,H ,H ͒to be unique,conditions (3)and (5)must be satisfied;i.e.,we must have3E −P ϽS Ͻ2P −3E͑6͒Note that by definition S is a non-negative number;thus,if one cannot find a non-negative number to satisfy the equilibrium con-dition,then the respective equilibrium does not exist and the equi-librium condition will be marked as “N/A”in the illustrative fig-ures and tables.Next,check the solution where two bidders make high efforts and one bidder makes average effort,e.g.,͑H ,H ,A ͒.The ex-pected payoffs for ͑H ,H ,A ͒are ͑S /2+P /2−E ,S /2+P /2−E ,0͒.For ͑H ,H ,A ͒to be a Nash equilibrium,S /3+P /3−E Ͻ0must be satisfied so that the bidder with average effort will not deviate from A to H ,S /2+P /2−E ϾS /2must be satisfied so that the bidder with high effort will not deviate from H to A ,and condi-tion (5)must be satisfied as argued previously.The three condi-tions can be rewritten asS Ͻmin ͓3E −P ,2P −3E ͔andP −2E Ͼ0͑7͒Note that because of the average bidder assumption,if ͑H ,H ,A ͒is a Nash equilibrium,then ͑H ,A ,H ͒and ͑A ,H ,H ͒will also be the Nash equilibria.The three Nash equilibria will constitute a so-called mixed strategy Nash equilibrium,denoted by 2H +1A ,where each bidder randomizes actions between H and A with certain probabilities.The concept of mixed strategy Nash equilib-rium shall be explained in more detail in next section.Similarly,we can obtain the requirements for solution 1H +2A ,condition (5)and S /2+P /2−E ϽS /2must be satisfied.The requirements can be reorganized asS Ͻ2P −3EandP −2E Ͻ0͑8͒Note that the conflicting relationship between “P −2E Ͼ0”in condition (7)and “P −2E Ͻ0”in condition (8)seems to show that the two types of Nash equilibria are exclusive.Nevertheless,the only difference between 2H +1A and 1H +2A is that the bidder in 2H +1A equilibrium has a higher probability of playing H ,whereas the bidder in 1H +2A also mixes actions H and A but with lower probability of playing H .From this perspective,the difference between 2H +1A and 1H +2A is not very distinctive.In other words,one should not consider,for example,2H +1A ,to be two bidders playing H and one bidder playing A ;instead,one should consider each bidder to be playing H with higher probabil-ity.Similarly,1H +2A means that the bidder has a lower probabil-ity of playing H ,compared to 2H +1A .Illustrative Example:Effectiveness of Bid Compensation The equilibrium conditions for a three-bidder game is numerically illustrated and shown in Table 1,where P is arbitrarily assumed as 10%for numerical computation purposes and E varies to rep-resent different costs for higher efforts.The “*”in Table 1indi-cates that the zero compensation is the best strategy;i.e.,bid compensation is ineffective in terms of stimulating extra effort.According to the numerical results,Table 1shows that bid com-pensation can promote higher effort only when E is within the range of P /3ϽE ϽP /2,where zero compensation is not neces-sarily the best strategy.The question is that whether it is benefi-cial to the owner by incurring the cost of bid compensation when P /3ϽE ϽP /2.The answer to this question lies in the concept and definition of the mix strategy Nash equilibrium,2H +1A ,as explained previously.Since 2H +1A indicates that each bidderwill play H with significantly higher probability,2H +1A may already be good enough,knowing that we only need one bidder out of three to actually play H .We shall elaborate on this concept later in a more general setting.As a result,if the 2H +1A equilib-rium is good enough,the use of bid compensation in a three-bidder game will not be recommended.Four-Bidder Game and n-Bidder GameNash Equilibrium of Four-Bidder GameThe equilibrium of the four-bidder procurement can also be ob-tained.As the number of bidders increases,the number of poten-tial equilibria increases as well.Due to the length limitation,we shall only show the major equilibria and their conditions,which are derived following the same technique applied previously.The condition for pure strategy equilibrium 4H ,is4E −P ϽS Ͻ3P −4E͑9͒The condition for another pure strategy equilibrium,4A ,isS Ͼ3P −4E͑10͒Other potential equilibria are mainly mixed strategies,such as 3H +1A ,2H +2A ,and 1H +3A ,where the numeric number asso-ciated with H or A represents the number of bidders with effort H or A in a equilibrium.The condition for the 3H +1A equilibrium is3E −P ϽS Ͻmin ͓4E −P ,3P −4E ͔͑11͒For the 2H +2A equilibrium the condition is6E −3P ϽS Ͻmin ͓3E −P ,3P −4E ͔͑12͒The condition for the 1H +3A equilibrium isS Ͻmin ͓6E −3P ,3P −4E ͔͑13͒Illustrative Example of Four-Bidder GameTable 2numerically illustrates the impacts of bid compensation on the four-bidder procurement under different relative magni-tudes of E .When E is very small,bid compensation is not needed for promoting effort H .However,when E grows gradually,bid compensation becomes more effective.As E grows to a larger magnitude,greater than P /2,the 4H equilibrium would become impossible,no matter how large S is.In fact,if S is too large,bidders will be encouraged to take effort A .When E is extremely large,e.g.,E Ͼ0.6P ,the best strategy is to set S =0.The “*”in Table 2also indicates the cases that bid compensation is ineffec-Table pensation Impacts on a Three-Bidder GameEquilibriumE ;P =10%3H 2H +1A 1H +2A 3A E ϽP /3e.g.,E =2%S Ͻ14%*N/A N/N 14%ϽS P /3ϽE ϽP /2e.g.,E =4%2%ϽS Ͻ8%S Ͻ2%N/A 8%ϽS P /2ϽE Ͻ͑2/3͒P e.g.,E =5.5%N/AN/AS Ͻ3.5%*3.5%ϽS͑2/3͒P ϽEe.g.,E =7%N/A N/A N/A Always*Note:*denotes that zero compensation is the best strategy;and N/A =the respective equilibrium does not exist.D o w n l o a d e d f r o m a s c e l i b r a r y .o r g b y N A N J I N G U N I VE R S I T Y OF o n 01/06/14. C o p y r i g h t A S C E . F o r p e r s o n a l u s e o n l y ; a l l r i g h t s r e s e r v e d .tive.To conclude,in a four-bidder procurement,bid compensation is not effective when E is relatively small or large.Again,similar to the three-bidder game,when bid compensation becomes more effective,it does not mean that offering bid compensation is the best strategy,since more variables need to be considered.Further analysis shall be performed later.Nash Equilibrium of n -Bidder GameIt is desirable to generalize our model to the n -bidder game,al-though only very limited qualified bidders will be involved in most design–build or BOT procurements,since for other project delivery methods it is possible to have many bidders.Interested readers can follow the numerical illustrations for three-and four-bidder games to obtain the numerical solutions of n -bidder game.Here,only analytical equilibrium solutions will be solved.For “nA ”to be the Nash equilibrium,we must have P −E ϽS /n +P /n for bidder A not to deviate.In other words,condition (14)must be satisfiedS Ͼ͑n −1͒P −nE͑14͒Note that condition (14)can be rewritten as S Ͼn ͑P −E ͒−P ,which implies that it is not likely for nA to be the Nash equilib-rium when there are many bidders,unless E is very close to or larger than P .Similar to previous analysis,for “nH ”to be the equilibrium,we must have S /n +P /n −E Ͼ0for stability requirement,and condition (15)for excluding the possibility of cheap talk or nA equilibrium.The condition for the nH equilibrium can be reorga-nized as condition (16).S Ͻ͑n −1͒P −nE ͑15͒nE −P ϽS Ͻ͑n −1͒P −nE͑16͒Note that if E ϽP /n ,condition (16)will always be satisfied and nH will be a unique equilibrium even when S =0.In other words,nH will not be the Nash equilibrium when there are many bidders,unless E is extremely small,i.e.,E ϽP /n .For “aH +͑n −a ͒A ,where 2Ͻa Ͻn ”to be the equilibrium so-lution,we must have S /a +P /a −E Ͼ0for bidder H not to devi-ate,S /͑a +1͒+P /͑a +1͒−E Ͻ0for bidder A not to deviate,and condition (15).These requirements can be rewritten asaE −P ϽS Ͻmin ͓͑a +1͒E −P ,͑n −1͒P −nE ͔͑17͒Similarly,for “2H +͑n −2͒A ,”the stability requirements for bidder H and A are S /͑n −1͒ϽS /2+P /2−E and S /3+P /3−E Ͻ0,re-spectively,and thus the equilibrium condition can be written as ͓͑n −1͒/͑n −3͔͒͑2E −P ͒ϽS Ͻmin ͓3E −P ,͑n −1͒P −nE ͔͑18͒For the “1H +͑n −1͒A ”equilibrium,we must haveS Ͻmin ͕͓͑n −1͒/͑n −3͔͒͑2E −P ͒,͑n −1͒P −nE ͖͑19͒An interesting question is:“What conditions would warrant that the only possible equilibrium of the game is either “1H +͑n −1͒A ”or nA ,no matter how large S is?”A logical response to the question is:when equilibria “aH +͑n −a ͒A ,where a Ͼ2”and equilibrium 2H +͑n −2͒A are not possible solutions.Thus,a suf-ficient condition here is that for any S Ͼ͓͑n −1͒/͑n −3͔͒͑2E −P ͒,the “S Ͻ͑n −1͒P −nE ”is not satisfied.This can be guaranteed if we have͑n −1͒P −nE Ͻ͓͑n −1͒/͑n −3͔͒͑2E −P ͒→E Ͼ͓͑n −1͒/͑n +1͔͒P͑20͒Conditions (19)and (20)show that when E is greater than ͓͑n −1͒/͑n +1͔͒P ,the only possible equilibrium of the game is either 1H +͑n −1͒A or nA ,no matter how large S is.Two important practical implications can be drawn from this finding.First,when n is small in a design–build contract,it is not unusual that E will be greater than ͓͑n −1͒/͑n +1͔͒P ,and in that case,bid compensa-tion cannot help to promote higher effort.For example,for a three-bidder procurement,bid compensation will not be effective when E is greater than ͑2/4͒P .Second,when the number of bidders increases,bid compensation will become more effective since it will be more unlikely that E is greater than ͓͑n −1͒/͑n +1͔͒P .The two implications confirm the previous analyses of two-,three-,and four-bidder game.After the game equilibria and the effective range of bid compensation have been solved,the next important task is to develop the bid compensation strategy with respect to various procurement situations.Table pensation Impacts on a Four-Bidder GameEquilibriumE ;P =10%4H 3H +1A 2H +2A 1H +3A 4A E ϽP /4e.g.,E =2%S Ͻ22%*N/A N/A N/A S Ͼ22%P /4ϽE ϽP /3e.g.,E =3%2%ϽS Ͻ18%S Ͻ2%N/A N/A S Ͼ18%P /3ϽE ϽP /2e.g.,E =4%6%ϽS Ͻ14%2%ϽS Ͻ6%S Ͻ2%N/A S Ͼ14%P /2ϽE Ͻ͑3/5͒P e.g.,E =5.5%N/A 6.5%ϽS Ͻ8%3%ϽS Ͻ6.5%S Ͻ3%S Ͼ8%͑3/5͒P ϽE Ͻ͑3/4͒P e.g.,E =6.5%N/AN/AN/AS Ͻ4%*S Ͼ4%͑3/4͒P ϽEe.g.,E =8%N/A N/A N/A N/AAlways*Note:*denotes that zero compensation is the best strategy;and N/A=respective equilibrium does not exist.D o w n l o a d e d f r o m a s c e l i b r a r y .o r g b y N A N J I N G U N I VE R S I T Y OF o n 01/06/14. C o p y r i g h t A S C E . F o r p e r s o n a l u s e o n l y ; a l l r i g h t s r e s e r v e d .。
毕业设计外文文献翻译Graduation design of foreign literature translation 700 words Title: The Impact of Artificial Intelligence on the Job Market Abstract:With the rapid development of artificial intelligence (AI), concerns arise about its impact on the job market. This paper explores the potential effects of AI on various industries, including healthcare, manufacturing, and transportation, and the implications for employment. The findings suggest that while AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. The paper concludes with a discussion on the importance of upskilling and retraining for workers to adapt to the changing job market.1. IntroductionArtificial intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. AI has made significant advancements in recent years, with applications in various industries, such as healthcare, manufacturing, and transportation. As AI technology continues to evolve, concerns arise about its impact on the job market. This paper aims to explore the potential effects of AI on employment and discuss the implications for workers.2. Potential Effects of AI on the Job Market2.1 Automation of Repetitive TasksOne of the major impacts of AI on the job market is the automation of repetitive tasks. AI systems can perform tasks faster and moreaccurately than humans, particularly in industries that involve routine and predictable tasks, such as manufacturing and data entry. This automation has the potential to increase productivity and efficiency, but also poses a risk to jobs that can be easily replicated by AI.2.2 Job DisplacementAnother potential effect of AI on the job market is job displacement. As AI systems become more sophisticated and capable of performing complex tasks, there is a possibility that workers may be replaced by machines. This is particularly evident in industries such as transportation, where autonomous vehicles may replace human drivers, and customer service, where chatbots can handle customer inquiries. While job displacement may lead to short-term unemployment, it also creates opportunities for new jobs in industries related to AI.2.3 Shifting Job RequirementsWith the introduction of AI, job requirements are expected to shift. While AI may automate certain tasks, it also creates a demand for workers with the knowledge and skills to develop and maintain AI systems. This shift in job requirements may require workers to adapt and learn new skills to remain competitive in the job market.3. Implications for EmploymentThe impact of AI on employment is complex and multifaceted. On one hand, AI has the potential to increase productivity, create new jobs, and improve overall economic growth. On the other hand, it may lead to job displacement and a shift in job requirements. To mitigate the negative effects of AI on employment, it is essentialfor workers to upskill and retrain themselves to meet the changing demands of the job market.4. ConclusionIn conclusion, the rapid development of AI has significant implications for the job market. While AI has the potential to automate repetitive tasks and increase productivity, it may also lead to job displacement and a shift in job requirements. To adapt to the changing job market, workers should focus on upskilling and continuous learning to remain competitive. Overall, the impact of AI on employment will depend on how it is integrated into various industries and how workers and policymakers respond to these changes.。
外文翻译毕业设计题目:基于APD的弱光信号探测系统的设计及其相关特性研究原文1:Design and Characterization of Single Photon APD Detector for QKD Application译文1:用于量子密钥的单光子APD探测器设计原文2:High Performance 10 Gb/s PIN and APD Optical Receivers译文2:10 Gb / s的高性能PIN和APD光接收器Design and Characterization of Single Photon APD Detector forQKD ApplicationAbstractModeling and design of a single photon detector and its various characteristics are presented. It is a type of avalanche photo diode (APD) designed to suit the requirements of a Quantum Key Distribution (QKD) detection system. The device is modeled to operate in a gated mode at liquid nitrogen temperature for minimum noise and maximum gain. Different types of APDs are compared for best performance. The APD is part of an optical communication link, which is a private channel to transmit the key signal. The encrypted message is sent via a public channel. The optical link operates at a wavelength of 1.55μm. The design is based on InGaAs with a quantum efficiency of more than 75% and a multiplication factor of 1000. The calculated dark current is below 10-12A with an overall signal to noise ratio better than 18dB. The device sensitivity is better than -40dBm, which is more than an order of magnitude higher than the dark current, corresponding to a detection sensitivity of two photons in pico-second pulses.I. INTRODUCTIONPhoton detectors sensitive to extremely low light levels are needed in a variety of applications. It was not possible to introduce these devices commercially several years ago because of the stringent requirements of QKD. Research efforts however resulted in photon detectors with reasonably good performance characteristics. The objective here is to model a single photon detector of high sensitivity, suitable for a QKD system. The detector is basically an APD, which needs cooling to very low temperature (77K) for the dark current to be low. The wavelength of interest is 1.55μm. Different applications may impose different requirements, and hence the dependence of the various parameters on wavelength, temperature, responsivity, dark current, noise etc, are modeled. Comparison of the results from calculations based on a suitable model provides amenable grounds to determine the suitability of each type of APD for a specific application.Attacks on communication systems in recent years have become a main concern accompanying the technological advances. The measures and counter measures against attacks have driven research effort towards security techniques that aim at absolute infallibility. Quantum Mechanics is considered one of the answers, due to inherent physical phenomena. QKD systems which depend on entangled pairs orpolarization states will inevitably require the usage of APDs in photon detection systems. How suitable these detectors may be, depends on their ability to detect low light level signals, in other words “photon coun ting”. It is therefore anticipated that the application of high security systems will be in high demand in a variety of fields such as banking sector, military, medical care, e-commerce, e-government etc.Ⅱ. AV ALANCHE PHOTO DIODEA. Structure of the APDFig. 1 shows a schematic diagram of the structure of an APD. The APD is a photodiode with a built-in amplification mechanism. The applied reverse potential difference causes accelerates photo-generated carriers to very high speeds so that a transfer of momentum occurs upon collisions, which liberates other electrons. Secondary electrons are accelerated in turn and the result is an avalanche process. The photo generated carriers traverse the high electric field region causing further ionization by releasing bound electrons in the valence band upon collision. This carrier generation mechanism is known as impact ionization. When carriers collide with the crystal lattice, they lose some energy to the crystal. If the kinetic energy of a carrier is greater than the band-gap, the collision will free a bound electron. The free electrons and holes so created also acquire enough energy to cause further impact ionization. The result is an avalanche, where the number of free carriers grows exponentially as the process continues.The number of ionization collisions per unit length for holes and electrons is designated ionization coefficients αn and αp, respectively. The type of materials and their band structures are responsible for the variation in αn and αp. Ionization coefficients also depend on the applied electric field according tothe following relationship:,exp[]n p b a Eαα=- (1) For αn = αp = α, the multiplication factor, M takes the form11aW M -= (2)W is the width of the depletion region. It can be observed that M tends to ∞ when αW →1, whichsignifies the condition for junction breakdown. Therefore, the high values of M can be obtained whenthe APD is biased close to the breakdown region.The thickness of the multiplication region for M = 1000, has been calculated and compared withthose found by other workers and the results are shown in Table 1. The layer thickness for undoped InPis 10μm, for a substrate thickness of 100μm .The photon-generated electron-hole pairs in the absorption layer are accelerated under theinfluence of an electric field of 3.105V/cm. The acceleration process is constantly interrupted by randomcollisions with the lattice. The two competing processes will continue until eventually an averagesaturation velocity is reached. Secondary electron-hole pairs are generated at any time during theprocess, when they acquire energy larger than the band gap Eg. The electrons are then accelerated andmay cause further impact ionization.Impact ionization of holes due to bound electrons is not as effective as that due to free electrons.Hence, most of the ionization is achieved by free electrons. The avalanche process then proceedsprincipally from the p to the n side of the device. It terminates after a certain time, when the electronsarrive at the n side of the depletion layer. Holes moving to the left create electrons that move to the right,which in turn generate further holes moving to the left in a possibly unending circulation. Although this feedback process increases the gain of the device, it is nevertheless undesirable for several reasons. First, it is time consuming and reduces the device bandwidth. Second, it is a random process and therefore increases the noise in the device. Third, it is unstable, which may cause avalanche breakdown.It may be desirable to fabricate APDs from materials that permit impact ionization by only one type of carriers either electrons or holes. Photo detector materials generally exhibit different ionization rates for electrons and holes. The ratio ofthe two ionization rates k = βi/αi is a measure of the photodiode performance. If for example, electrons have higher ionization coefficient, optimal behavior is achieved by injecting electrons of photo-carrier pairs at the p-type edge of the depletion layer and by using a material with k value as small as possible. If holes are injected, they should be injected at the n-type edge of the depletion layer and k should be as large as possible. Single-carrier multiplication is achieved ideally, when k = 0 with electrons or with k = ∞for holes.B.Geiger ModeGeiger mode (GM) operation means that the diode is operated slightly above the breakdown threshold voltage, where a single electron–hole pair can trigger a strong avalanche. In the case of such an event, the electronics reduce the diode voltage to below the threshold value for a short time called “dead time”, during which the avalanche is stopped and the detector is made ready to detect the next batch of photons. GM operation is one of the basic of Quantum Counting techniques when utilizing an avalanche process (APD) that increases the detector efficiency significantly.There are a number of parameters related to Geiger mode. The general idea however is to temporarily disturb the equilibrium inside the APD.The Geiger mode is placing the APD in a gated regime and the bias is raised above the breakdownvoltage for a short period of time. Fig. 2 shows the parameters characterizing the Geiger operation. The rise and fall times of the edges are neglected because they are made fast. Detection of single photons occurs during the gate window.作者:Khalid A. S. Al-Khateeb, Nazmus Shaker Nafi, Khalid Hasan国籍:美国出处:Computer and Communication Engineering (ICCCE), 2010 International Conference on 11-12 May 2010用于量子密钥的单光子APD探测器设计摘要本文提出的是单光子探测器及其各种特性的建模与设计。