Chongqing Jiaotong University Graduation Design: Chinese-English Translation
English Translation: PART I  Optical Fiber Technology With Various Access

1  The Mainstream of Optical Networks

1.1  Optical fiber technology

Optical fiber production technology is now mature and fibers are manufactured in large volumes. Single-mode fiber with a zero-dispersion wavelength of λ0 = 1.3 μm is in wide use today, while single-mode fiber with a zero-dispersion wavelength of λ0 = 1.55 μm has been developed and has entered practical service. Attenuation at the 1.55 μm wavelength is very low, about 0.22 dB/km, which makes this fiber better suited to long-distance, high-capacity transmission; it is the preferred medium for long-haul backbone links.

At present, to meet the development requirements of different lines and local networks, new fiber types have been specified, including non-dispersion-shifted fiber, low-dispersion-slope fiber, large-effective-area fiber and low-water-peak fiber.

Researchers studying long-wavelength transmission hold that, in theory, repeaterless transmission distances of several thousand kilometers are achievable, but this work is still at the theoretical stage.
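To put the 0.22 dB/km attenuation figure in perspective, the short calculation below (not part of the original text) estimates how long a 1.55 μm span could run before amplification. The launch power, receiver sensitivity and margin are assumed illustrative values, and dispersion, splice and connector losses are ignored.

# Rough span-length estimate from a simple loss budget (illustrative numbers only).
attenuation_db_per_km = 0.22   # typical 1.55 um single-mode fiber loss, from the text
tx_power_dbm = 3.0             # assumed transmitter launch power
rx_sensitivity_dbm = -27.0     # assumed receiver sensitivity
margin_db = 3.0                # assumed system margin

loss_budget_db = tx_power_dbm - rx_sensitivity_dbm - margin_db
max_span_km = loss_budget_db / attenuation_db_per_km
print(f"loss budget: {loss_budget_db:.1f} dB -> max unamplified span ≈ {max_span_km:.0f} km")

With these assumed values the budget works out to about 27 dB, or roughly 120 km of fiber, which is why such fiber dominates long-haul backbones.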
1.2  Optical fiber amplifiers

The erbium-doped fiber amplifier (EDFA), operating in the 1550 nm window, can serve as a repeater for digital, analog and coherent optical communication at various transmission rates, and can also amplify optical signals at specific wavelengths.

When a fiber network is upgraded from analog to digital signals or from low to high bit rates, or when the system is expanded with optical multiplexing, the EDFA circuits and equipment do not have to be changed.

An EDFA can be used as a preamplifier at the optical receiver, as a booster amplifier at the optical transmitter, and as an amplifier compensating for loss along the optical path.

1.3  Broadband access

A variety of broadband access solutions are offered to business and residential customers in different environments. An access system performs three major functions: high-speed transmission, multiplexing/routing, and network extension.

ADSL, currently the mainstream access technology, can economically carry several megabits per second over twisted-pair copper wire; it supports traditional voice service as well as data-oriented Internet access. At the central-office end, the ADSL access multiplexer routes data traffic onto a packet network, while voice traffic is delivered to the PSTN, ISDN or other networks.

Cable modems provide high-speed data communication over HFC networks, dividing the coaxial-cable bandwidth into upstream and downstream channels; they can offer video on demand, online entertainment, Internet access and other services, while also carrying PSTN traffic.

Fixed wireless access systems use many advanced technologies such as smart antennas and sophisticated receivers. Wireless access is an innovative way of reaching subscribers, but it remains the least settled of the access technologies, and its practical application still requires further exploration.
Anti-Aircraft Fire Control and the Development of Integrated Systems at Sperry

The dawn of the electrical age brought new types of control systems. Able to transmit data between distributed components and effect action at a distance, these systems employed feedback devices as well as human beings to close control loops at every level. By the time theories of feedback and stability began to become practical for engineers in the 1930s, a tradition of remote and automatic control engineering had developed that built distributed control systems with centralized information processors. These two strands of technology, control theory and control systems, came together to produce the large-scale integrated systems typical of World War II and after.

Elmer Ambrose Sperry (1860-1930) and the company he founded, the Sperry Gyroscope Company, led the engineering of control systems between 1910 and 1940. Sperry and his engineers built distributed data transmission systems that laid the foundations of today's command and control systems. Sperry's fire control systems included more than governors or stabilizers; they consisted of distributed sensors, data transmitters, central processors, and outputs that drove machinery. This article tells the story of Sperry's involvement in anti-aircraft fire control between the world wars and shows how an industrial firm conceived of control systems before the common use of control theory. In the 1930s the task of fire control became progressively more automated, as Sperry engineers gradually replaced human operators with automatic devices. Feedback, human interface, and system integration posed challenging problems for fire control engineers during this period. By the end of the decade these problems would become critical as the country struggled to build up its technology to meet the demands of an impending war.

Anti-Aircraft Artillery Fire Control

Before World War I, developments in ship design, guns, and armor drove the need for improved fire control on Navy ships. By 1920, similar forces were at work in the air: wartime experiences and postwar developments in aerial bombing created the need for sophisticated fire control for anti-aircraft artillery. Shooting an airplane out of the sky is essentially a problem of "leading" the target. As aircraft developed rapidly in the twenties, their increased speed and altitude rapidly pushed the task of computing the lead out of the range of human reaction and calculation. Fire control equipment for anti-aircraft guns was a means of technologically aiding human operators to accomplish a task beyond their natural capabilities.

During the first world war, anti-aircraft fire control had undergone some preliminary development. Elmer Sperry, as chairman of the Aviation Committee of the Naval Consulting Board, developed two instruments for this problem: a goniometer, a range-finder, and a pretelemeter, a fire director or calculator. Neither, however, was widely used in the field.

When the war ended in 1918 the Army undertook virtually no new development in anti-aircraft fire control for five to seven years. In the mid-1920s, however, the Army began to develop individual components for anti-aircraft equipment including stereoscopic height-finders, searchlights, and sound location equipment. The Sperry Company was involved in the latter two efforts. About this time Maj. Thomas Wilson, at the Frankford Arsenal in Philadelphia, began developing a central computer for fire control data, loosely based on the system of "director firing" that had developed in naval gunnery. Wilson's device resembled earlier fire control calculators, accepting data as input from sensing components, performing calculations to predict the future location of the target, and producing direction information to the guns.

Integration and Data Transmission

Still, the components of an anti-aircraft battery remained independent, tied together only by telephone. As Preston R. Bassett, chief engineer and later president of the Sperry Company, recalled, "no sooner, however, did the components get to the point of functioning satisfactorily within themselves, than the problem of properly transmitting the information from one to the other came to be of prime importance." Tactical and terrain considerations often required that different fire control elements be separated by up to several hundred feet. Observers telephoned their data to an officer, who manually entered it into the central computer, read off the results, and telephoned them to the gun installations. This communication system introduced both a time delay and the opportunity for error. The components needed tighter integration, and such a system required automatic data communications.

In the 1920s the Sperry Gyroscope Company led the field in data communications. Its experience came from Elmer Sperry's most successful invention, a true-north-seeking gyro for ships. A significant feature of the Sperry Gyrocompass was its ability to transmit heading data from a single central gyro to repeaters located at a number of locations around the ship. The repeaters, essentially follow-up servos, connected to another follow-up, which tracked the motion of the gyro without interference. These data transmitters had attracted the interest of the Navy, which needed a stable heading reference and a system of data communication for its own fire control problems. In 1916, Sperry built a fire control system for the Navy which, although it placed minimal emphasis on automatic computing, was a sophisticated distributed data system. By 1920 Sperry had installed these systems on a number of U.S. battleships.

Because of the Sperry Company's experience with fire control in the Navy, as well as Elmer Sperry's earlier work with the goniometer and the pretelemeter, the Army approached the company for help with data transmission for anti-aircraft fire control. To Elmer Sperry, it looked like an easy problem: the calculations resembled those in a naval application, but the physical platform, unlike a ship at sea, anchored to the ground. Sperry engineers visited Wilson at the Frankford Arsenal in 1925, and Elmer Sperry followed up with a letter expressing his interest in working on the problem. He stressed his company's experience with naval problems, as well as its recent developments in bombsights, "work from the other end of the proposition." Bombsights had to incorporate numerous parameters of wind, groundspeed, airspeed, and ballistics, so an anti-aircraft gun director was in some ways a reciprocal bombsight. In fact, part of the reason anti-aircraft fire control equipment worked at all was that it assumed attacking bombers had to fly straight and level to line up their bombsights.

Elmer Sperry's interests were warmly received, and in 1925 and 1926 the Sperry Company built two data transmission systems for the Army's gun directors. The original director built at Frankford was designated T-1, or the "Wilson Director." The Army had purchased a Vickers director manufactured in England, but encouraged Wilson to design one that could be manufactured in this country. Sperry's two data transmission projects were to add automatic communications between the elements of both the Wilson and the Vickers systems (Vickers would eventually incorporate the Sperry system into its product). Wilson died in 1927, and the Sperry Company took over the entire director development from the Frankford Arsenal with a contract to build and deliver a director incorporating the best features of both the Wilson and Vickers systems. From 1927 to 1935, Sperry undertook a small but intensive development program in anti-aircraft systems. The company financed its engineering internally, selling directors in small quantities to the Army, mostly for evaluation, for only the actual cost of production. Of the nearly 10 models Sperry developed during this period, it never sold more than 12 of any model; the average order was five. The Sperry Company offset some development costs by sales to foreign governments, especially Russia, with the Army's approval.

The T-6 Director

Sperry's modified version of Wilson's director was designated T-4 in development. This model incorporated corrections for air density, super-elevation, and wind. Assembled and tested at Frankford in the fall of 1928, it had problems with backlash and reliability in its predicting mechanisms. Still, the Army found the T-4 promising and after testing returned it to Sperry for modification. The company changed the design for simpler manufacture, eliminated two operators, and improved reliability. In 1930 Sperry returned with the T-6, which tested successfully. By the end of 1931, the Army had ordered 12 of the units. The T-6 was standardized by the Army as the M-2 director.

Since the T-6 was the first anti-aircraft director to be put into production, as well as the first one the Army formally procured, it is instructive to examine its operation in detail. A technical memorandum dated 1930 explained the theory behind the T-6 calculations and how the equations were solved by the system. Although this publication lists no author, it probably was written by Earl W. Chafee, Sperry's director of fire control engineering. The director was a complex mechanical analog computer that connected four three-inch anti-aircraft guns and an altitude finder into an integrated system (see Fig. 1). Just as with Sperry's naval fire control system, the primary means of connection were "data transmitters," similar to those that connected gyrocompasses to repeaters aboard ship.

The director takes three primary inputs. Target altitude comes from a stereoscopic range finder. This device has two telescopes separated by a baseline of 12 feet; a single operator adjusts the angle between them to bring the two images into coincidence. Slant range, or the raw target distance, is then corrected to derive its altitude component. Two additional operators, each with a separate telescope, track the target, one for azimuth and one for elevation. Each sighting device has a data transmitter that measures angle or range and sends it to the computer.

The computer receives these data and incorporates manual adjustments for wind velocity, wind direction, muzzle velocity, air density, and other factors. The computer calculates three variables: azimuth, elevation, and a setting for the fuze. The latter, manually set before loading, determines the time after firing at which the shell will explode. Shells are not intended to hit the target plane directly but rather to explode near it, scattering fragments to destroy it.

The director performs two major calculations. First, prediction models the motion of the target and extrapolates its position to some time in the future. Prediction corresponds to "leading" the target. Second, the ballistic calculation figures how to make the shell arrive at the desired point in space at the future time and explode, solving for the azimuth and elevation of the gun and the setting on the fuze. This calculation corresponds to the traditional artilleryman's task of looking up data in a precalculated "firing table" and setting gun parameters accordingly. Ballistic calculation is simpler than prediction, so we will examine it first.

The T-6 director solves the ballistic problem by directly mechanizing the traditional method, employing a "mechanical firing table." Traditional firing tables printed on paper show solutions for a given angular height of the target, for a given horizontal range, and a number of other variables. The T-6 replaces the firing table with a Sperry "ballistic cam." A three-dimensionally machined, cone-shaped device, the ballistic cam or "pin follower" solves a pre-determined function. Two independent variables are input by the angular rotation of the cam and the longitudinal position of a pin that rests on top of the cam. As the pin moves up and down the length of the cam, and as the cam rotates, the height of the pin traces a function of two variables: the solution to the ballistics problem (or part of it). The T-6 director incorporates eight ballistic cams, each solving for a different component of the computation, including superelevation, time of flight, wind correction, muzzle velocity, and air density correction. Ballistic cams represented, in essence, the stored data of the mechanical computer. Later directors could be adapted to different guns simply by replacing the ballistic cams with a new set, machined according to different firing tables. The ballistic cams comprised a central component of Sperry's mechanical computing technology. The difficulty of their manufacture would prove a major limitation on the usefulness of Sperry directors.

The T-6 director performed its other computational function, prediction, in an innovative way as well. Though the target came into the system in polar coordinates (azimuth, elevation, and range), targets usually flew a constant trajectory (it was assumed) in rectangular coordinates, i.e., straight and level. Thus, it was simpler to extrapolate to the future in rectangular coordinates than in the polar system. So the Sperry director projected the movement of the target onto a horizontal plane, derived the velocity from changes in position, added a fixed time multiplied by the velocity to determine a future position, and then converted the solution back into polar coordinates. This method became known as the "plan prediction method" because of the representation of the data on a flat "plan" as viewed from above; it was commonly used through World War II.
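Read as an algorithm rather than a mechanism, the plan prediction method can be restated in a few lines of modern code. The sketch below only illustrates the steps just described (polar data projected onto a horizontal plan, velocity taken from successive positions, extrapolation by the shell's time of flight, conversion back to polar gun orders); it is not a reconstruction of the T-6's internals, and the time-of-flight function is an invented stand-in for the ballistic cams. The loop that re-estimates the time of flight anticipates the "cumulative cycle of correction" described in the next paragraph.

import math

def to_plan(azimuth_rad, elevation_rad, slant_range):
    """Project a polar observation onto the horizontal 'plan' plus altitude."""
    horizontal = slant_range * math.cos(elevation_rad)
    x = horizontal * math.sin(azimuth_rad)   # east
    y = horizontal * math.cos(azimuth_rad)   # north
    altitude = slant_range * math.sin(elevation_rad)
    return x, y, altitude

def to_polar(x, y, altitude):
    horizontal = math.hypot(x, y)
    azimuth = math.atan2(x, y)
    elevation = math.atan2(altitude, horizontal)
    slant_range = math.hypot(horizontal, altitude)
    return azimuth, elevation, slant_range

def time_of_flight(slant_range):
    # Hypothetical stand-in for the firing-table / ballistic-cam data:
    # assume an 800 m/s average shell speed.
    return slant_range / 800.0

def predict(obs1, obs2, dt):
    """obs1, obs2: (azimuth, elevation, slant_range) observations dt seconds apart."""
    x1, y1, alt1 = to_plan(*obs1)
    x2, y2, alt2 = to_plan(*obs2)
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt      # plan velocity; altitude assumed constant
    tof = time_of_flight(obs2[2])                # initial time-of-flight estimate
    for _ in range(5):                           # cumulative cycle of correction
        fx, fy = x2 + vx * tof, y2 + vy * tof    # extrapolated future position
        az, el, rng = to_polar(fx, fy, alt2)
        tof = time_of_flight(rng)                # ballistic stage feeds back a new estimate
    return az, el, tof                           # gun azimuth, elevation and fuze time

# Example: a target near 4,000 m slant range, tracked one second apart.
o1 = (math.radians(30.0), math.radians(25.0), 4000.0)
o2 = (math.radians(30.5), math.radians(25.2), 3980.0)
print(predict(o1, o2, 1.0))

In the director itself, each of these arithmetic steps was carried out by cams, differentials and variable-speed drives, with human operators closing the loops.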
In the plan prediction method, "the actual movement of the target is mechanically reproduced on a small scale within the Computer and the desired angles or speeds can be measured directly from the movements of these elements." Together, the ballistic and prediction calculations form a feedback loop. Operators enter an estimated "time of flight" for the shell when they first begin tracking. The predictor uses this estimate to perform its initial calculation, which feeds into the ballistic stage. The output of the ballistics calculation then feeds back an updated time-of-flight estimate, which the predictor uses to refine the initial estimate. Thus "a cumulative cycle of correction brings the predicted future position of the target up to the point indicated by the actual future time of flight."

A square box about four feet on each side (see Fig. 2), the T-6 director was mounted on a pedestal on which it could rotate. Three crew would sit on seats and one or two would stand on a step mounted to the machine. The remainder of the crew stood on a fixed platform; they would have had to shuffle around as the unit rotated. This was probably not a problem, as the rotation angles were small. The director's pedestal mounted on a trailer, on which data transmission cables and the range finder could be packed for transportation.

We have seen that the T-6 computer took only three inputs, elevation, azimuth, and altitude (range), and yet it required nine operators. These nine did not include the operation of the range finder, which was considered a separate instrument, but only those operating the director itself. What did these nine men do?

Human Servomechanisms

To the designers of the director, the operators functioned as "manual servomechanisms." One specification for the machine required "minimum dependence on 'human element.'" The Sperry Company explained, "All operations must be made as mechanical and foolproof as possible; training requirements must visualize the conditions existent under rapid mobilization." The lessons of World War I ring in this statement; even at the height of isolationism, with the country sliding into depression, design engineers understood the difficulty of raising large numbers of trained personnel in a national emergency. The designers not only thought the system should account for minimal training and high personnel turnover, they also considered the ability of operators to perform their duties under the stress of battle. Thus, nearly all the work for the crew was in a "follow-the-pointer" mode: each man concentrated on an instrument with two indicating dials, one the actual and one the desired value for a particular parameter. With a hand crank, he adjusted the parameter to match the two dials.

Still, it seems curious that the T-6 director required so many men to perform this follow-the-pointer input. When the external rangefinder transmitted its data to the computer, it appeared on a dial and an operator had to follow the pointer to actually input the data into the computing mechanism. The machine did not explicitly calculate velocities. Rather, two operators (one for X and one for Y) adjusted variable-speed drives until their rate dials matched that of a constant-speed motor. When the prediction computation was complete, an operator had to feed the result into the ballistic calculation mechanism. Finally, when the entire calculation cycle was completed, another operator had to follow the pointer to transmit azimuth to the gun crew, who in turn had to match the train and elevation of the gun to the pointer indications.

Human operators were the means of connecting "individual elements" into an integrated system. In one sense the men were impedance amplifiers, and hence quite similar to servomechanisms in other mechanical calculators of the time, especially Vannevar Bush's differential analyzer. The term "manual servomechanism" itself is an oxymoron: by the conventional definition, all servomechanisms are automatic. The very use of the term acknowledges the existence of an automatic technology that will eventually replace the manual method. With the T-6, this process was already underway. Though the director required nine operators, it had already eliminated two from the previous generation T-4. Servos replaced the operator who fed back superelevation data and the one who transmitted the fuze setting. Furthermore, in this early machine one man corresponded to one variable, and the machine's requirement for operators corresponded directly to the data flow of its computation. Thus the crew that operated the T-6 director was an exact reflection of the algorithm inside it.

Why, then, were only two of the variables automated? This partial, almost hesitating automation indicates there was more to the human servo-motors than Sperry wanted to acknowledge. As much as the company touted "their duties are purely mechanical and little skill or judgment is required on the part of the operators," men were still required to exercise some judgment, even if unconsciously. The data were noisy, and even an unskilled human eye could eliminate complications due to erroneous or corrupted data. The mechanisms themselves were rather delicate, and erroneous input data, especially if it indicated conditions that were not physically possible, could lock up or damage the mechanisms. The operators performed as integrators in both senses of the term: they integrated different elements into a system.

Later Sperry Directors

When Elmer Sperry died in 1930, his engineers were at work on a newer generation director, the T-8. This machine was intended to be lighter and more portable than earlier models, as well as less expensive and "procurable in quantities in case of emergency." The company still emphasized the need for unskilled men to operate the system in wartime, and their role as system integrators. The operators were "mechanical links in the apparatus, thereby making it possible to avoid mechanical complication which would be involved by the use of electrical or mechanical servo motors." Still, army field experience with the T-6 had shown that servo-motors were a viable way to reduce the number of operators and improve reliability, so the requirements for the T-8 specified that wherever possible "electrical shall be used to reduce the number of operators to a minimum." Thus the T-8 continued the process of automating fire control, and reduced the number of operators to four. Two men followed the target with telescopes, and only two were required for follow-the-pointer functions. The other follow-the-pointers had been replaced by follow-up servos fitted with magnetic brakes to eliminate hunting. Several experimental versions of the T-8 were built, and it was standardized by the Army as the M3 in 1934. Throughout the remainder of the '30s Sperry and the army fine-tuned the director system in the M3.

Succeeding M3 models automated further, replacing the follow-the-pointers for target velocity with a velocity follow-up which employed a ball-and-disc integrator. The M4 series, standardized in 1939, was similar to the M3 but abandoned the constant-altitude assumption and added an altitude predictor for gliding targets. The M7, standardized in 1941, was essentially similar to the M4 but added full power control to the guns for automatic pointing in elevation and azimuth. These later systems had eliminated errors. Automatic setters and loaders did not improve the situation because of reliability problems. At the start of World War II, the M7 was the primary anti-aircraft director available to the army.

The M7 was a highly developed and integrated system, optimized for reliability and ease of operation and maintenance. As a mechanical computer, it was an elegant, if intricate, device, weighing 850 pounds and including about 11,000 parts. The design of the M7 capitalized on the strength of the Sperry Company: manufacturing of precision mechanisms, especially ballistic cams. By the time the U.S. entered the second world war, however, these capabilities were a scarce resource, especially for high volumes. Production of the M7 by Sperry and Ford Motor Company as subcontractor was a "real choke" and could not keep up with production of the 90mm guns, well into 1942. The army had also adopted an English system, known as the "Kerrison Director" or M5, which was less accurate than the M7 but easier to manufacture. Sperry redesigned the M5 for high-volume production in 1940, but passed in 1941.

Conclusion: Human Beings as System Integrators

The Sperry directors we have examined here were transitional, experimental systems. Exactly for that reason, however, they allow us to peer inside the process of automation, to examine the displacement of human operators by servomechanisms while the process was still underway. Skilled as the Sperry Company was at data transmission, it only gradually became comfortable with the automatic communication of data between subsystems. Sperry could brag about the low skill levels required of the operators of the machine, but in 1930 it was unwilling to remove them completely from the process. Men were the glue that held integrated systems together.

As products, the Sperry Company's anti-aircraft gun directors were only partially successful. Still, we should judge a technological development program not only by the machines it produces but also by the knowledge it creates, and by how that knowledge contributes to future advances. Sperry's anti-aircraft directors of the 1930s were early examples of distributed control systems, technology that would assume critical importance in the following decades with the development of radar and digital computers. When building the more complex systems of later years, engineers at Bell Labs, MIT, and elsewhere would incorporate and build on the Sperry Company's experience, grappling with the engineering difficulties of feedback, control, and the augmentation of human capabilities by technological systems.
ORIGINAL PAPER

Eggshell crack detection based on acoustic response and support vector data description algorithm

Hao Lin · Jie-wen Zhao · Quan-sheng Chen · Jian-rong Cai · Ping Zhou

School of Food and Biological Engineering, Jiangsu University, 212013 Zhenjiang, People's Republic of China

Received: 21 May 2009 / Revised: 27 August 2009 / Accepted: 28 August 2009 / Published online: 22 September 2009
© Springer-Verlag 2009
Eur Food Res Technol (2009) 230:95–100, DOI 10.1007/s00217-009-1145-6

Abstract  A system based on acoustic resonance combined with pattern recognition was attempted for discriminating cracks in eggshell. Support vector data description (SVDD) was employed to solve the classification problem posed by the imbalanced number of training samples. The frequency band was between 1,000 and 8,000 Hz. A recursive least squares adaptive filter was used to process the response signal. The signal-to-noise ratio of the acoustic impulse response was remarkably enhanced. Five characteristic descriptors were extracted from the response frequency signals, and some parameters were optimized in building the model. Experimental results showed that under the same conditions SVDD achieved better performance than conventional classification methods. The SVDD model reached a crack detection level of 90% and a false rejection level of 10% in the prediction set. Based on the results, it can be concluded that the acoustic resonance system combined with SVDD has significant potential in the detection of cracked eggs.

Keywords  Eggshell · Crack · Detection · Acoustic resonance · Support vector data description

Introduction

In the egg industry, the presence of cracks in eggshells is one of the main defects of physical quality. Cracked eggs are very vulnerable to bacterial infections leading to health hazards [1]. This mostly results in significant economic loss to the egg industry. Recent research shows that it is possible to detect cracks in eggshells using acoustic response analysis [2–5]. Supervised pattern recognition models have also been employed to discriminate intact and cracked eggs [6]. In these previous researches, training of discrimination models needs a considerable amount of intact egg samples and also corresponding defective ones. However, it is more difficult to acquire sufficient naturally cracked egg samples than intact ones. Artificial infliction of cracks in eggs is time-consuming and wasteful. Moreover, artificially cracked eggs may not provide completely authentic information on naturally cracked ones. So, the traditional discrimination model shows poor performance when the numbers of samples from the two classes are seriously unbalanced, because the samples of the minority group cannot provide sufficient information to support the ultimate decision function.

Support vector data description (SVDD), which is inspired by the theory of the two-class support vector machine (SVM), is custom-tailored for one-class classification [7]. One-class classification is often used to deal with a two-class classification problem where each of the two classes has a special meaning [8]. The two classes in SVDD are the target class and the outlier class, respectively. The target class is assumed to be sampled well, and many (training) example objects are available. The outlier class can be sampled very sparsely, or can be totally absent. The basic idea of SVDD is to define a boundary around samples of the target class with a volume as small as possible [9]. SVDD has been used to solve the problem of unbalanced samples in the fields of machine fault diagnosis, intrusion detection in networks, recognition of handwritten digits, face recognition, etc. [10–13].

In this work, the SVDD algorithm was employed to solve the classification problem of eggs posed by the imbalanced number of samples. In addition, a recursive least squares (RLS) adaptive filter was used to enhance the signal-to-noise ratio. Some excitation resonant frequency characteristics of the signals were used as input vectors of the SVDD model to discriminate intact and cracked eggs.

Materials and methods

Samples preparation

All barn egg samples were collected naturally from a poultry farm and they were intensively reared. These eggs were at most 3 days old when they were measured. As many as 130 eggs with intact shells and 30 eggs with cracks were measured. The sizes of the eggs ranged from peewee to jumbo. Irregular eggs were not incorporated into the data analysis. The cracks, which were 10–40 mm long and less than 15 μm wide, were measured by a micrometer. Both intact and cracked samples were divided into two subsets. One of them, called the calibration set, was used to build a model, and the other one, called the prediction set, was used to test the robustness of the model. The calibration set contained 120 samples; the numbers of intact and cracked samples were 110 and 10, respectively. The remaining 40 samples constituted the prediction set, with 20 intact eggs and 20 cracked ones.

Experimental system

A system based on acoustic resonance was developed for the detection of cracks in eggshell. The system consists of a product support, a light exciting mechanism, a microphone, signal amplifiers, a personal computer (PC) and software to acquire and analyze the results. A schematic diagram of the system is presented in Fig. 1. A pair of rolls made of hard rubber was used to support the eggs, and the shape of the support was fitted to normal eggshell surfaces. The excitation set included an electromagnetic driver, an adjustable-voltage DC power supply and a light metallic stick. The total mass of the stick was 6 g, and its length 6 cm. The excitation force is an important factor that affects the magnitude and width of the pulse. The adjustable-voltage DC power supply was used to control the excitation force. Based on previous tests, the excitation voltage was set at 30 V. In this case, optimal signals were achieved without instrumentation overload. The impacting position was close to the crack in the cracked eggshells, which were placed randomly among the intact eggshells.

[Fig. 1 Eggshell crack measurement system based on acoustic resonance analysis]

Data acquisition and analysis

Response signals obtained from the microphone were amplified, filtered and captured by a 16-bit data acquisition card. The data acquisition program was compiled based on LabVIEW 8.2 software (National Instruments, USA), which allows fast acquisition and processing of the response signal. The sampling rate was 22.05 kHz. The time signal was transformed to a frequency signal by using a 512-point fast Fourier transformation (FFT). The linear frequency spectrum obtained was transformed to a power spectrum. A band-pass filter was used to preserve the information of the frequency band between 1,000 and 8,000 Hz, because the features of the response signals were legible in this frequency band and the signal-to-noise ratio here was also favorable.

Brief introduction of support vector data description (SVDD)

SVDD is inspired by the idea of SVM [14, 15]. It is a method of data domain description, also called one-class classification. The basic idea of SVDD is to envelop samples or objects within a high-dimensional space with a volume as small as possible by fitting a hypersphere around the samples. A sketch map of SVDD in two dimensions is shown in Fig. 2. By introducing kernels, this inflexible model becomes much more powerful and can give reliable results when a suitable kernel is used [16]. The problem of SVDD is to find the center a and radius R that give the minimum volume of the hypersphere containing all samples x_i. For a data set containing N normal data objects, when one or a few very remote objects are in it, a very large sphere is obtained, which will not represent the data very well. Therefore, we allow for some data points outside the sphere and introduce slack variables ξ_i. As a result, the minimization problem can be denoted in the following form:

min L(R) = R² + C Σ_{i=1}^{N} ξ_i
s.t.  ||x_i − a||² ≤ R² + ξ_i,  ξ_i ≥ 0  (i = 1, 2, ..., N),      (1)

where the variable C gives the trade-off between simplicity (volume of the sphere) and the number of errors (number of target objects rejected). The above problem is usually solved by introducing Lagrange multipliers and can be transformed into maximizing the following function L with respect to the Lagrange multipliers. For an object x, we define

f²(x) = ||x − a||² = (x·x) − 2 Σ_{i=1}^{N} α_i (x·x_i) + Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j (x_i·x_j).      (2)

The test object x is accepted when this distance is smaller than the radius. These objects are called the support objects of the description, or the SVs. Objects lying outside the sphere are also called bounded support vectors (BSVs). When a sphere is not always a good fit for the boundary of the data distribution, the inner product (x·y) is generalized by a kernel function k(x, y) = (φ(x)·φ(y)), where a mapping φ of the data to a new feature space is applied. With such a mapping, Eq. (2) then becomes

L = Σ_{i=1}^{N} α_i k(x_i, x_i) − Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j k(x_i, x_j)
s.t.  0 ≤ α_i ≤ C,  Σ_{i=1}^{N} α_i = 1,  and  a = Σ_i α_i φ(x_i).      (3)

In brief, SVDD first maps the data, which are not linearly separable, into a high-dimensional feature space and then describes the data by the maximal-margin hypersphere.
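As a rough, self-contained illustration of the one-class idea formulated in Eqs. (1)–(3), the short Python sketch below trains a Gaussian-kernel one-class classifier on intact-egg feature vectors only and then flags outliers. It uses scikit-learn's OneClassSVM, which with an RBF kernel is closely related to SVDD, rather than the dd_tools Matlab code used by the authors; the feature values and the gamma and nu settings are invented placeholders, not the paper's data or parameters.

import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder feature matrices: rows are five descriptors (X1..X5) per egg.
rng = np.random.default_rng(0)
intact_train = rng.normal(loc=1.0, scale=0.1, size=(110, 5))   # target class only
test_set = np.vstack([
    rng.normal(loc=1.0, scale=0.1, size=(20, 5)),               # "intact" test eggs
    rng.normal(loc=1.6, scale=0.3, size=(20, 5)),               # "cracked" test eggs
])

# RBF-kernel one-class model; nu plays a role similar to C (fraction of target
# objects allowed outside the boundary), gamma to the kernel width.
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.08)
model.fit(intact_train)

pred = model.predict(test_set)        # +1 = accepted as target, -1 = outlier
print("accepted as intact :", int((pred[:20] == 1).sum()), "/ 20")
print("rejected as cracked:", int((pred[20:] == -1).sum()), "/ 20")

In the paper itself, the Gaussian kernel width σ and the trade-off constant C of the SVDD model play the roles that gamma and nu play in this sketch.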
Software

All data-processing algorithms were implemented with the statistical software Matlab 7.1 (Mathworks, USA) under Windows XP. SVDD Matlab codes were downloaded from http://www-ict.ewi.tudelft.nl/~davidt/dd_tools.html free of charge.

Result and discussion

Response signals

Since the acoustic response was an instantaneous impulse, it was difficult to discriminate between the different response signals of cracked and intact eggs in the time domain. The time domain signals were transformed by FFT to frequency domain signals for the next analysis. Typical power spectra of an intact egg and a cracked egg are shown in Fig. 3, and the areas under the spectral envelope for the intact eggs were smaller than those of the cracked eggs. For the intact eggs, the peak frequencies were prominent, generally found in the middle of the band (3,500–5,000 Hz). In contrast, the peak frequencies of cracked eggs were disperse and not prominent.

[Fig. 3 Typical response frequency signals of eggs]

Adaptive RLS filtering

Since the detection of cracked eggshells is based on acoustic response measurement, it is easily interfered with by surrounding noise. This fact is reinforced by the highly damped behavior of agro-products [17]. Therefore, the response signal should be processed to remove noise before further analysis. Adaptive interference canceling is a standard approach to remove environmental noise [18, 19]. The RLS is a popular algorithm in the field of adaptive signal processing. In adaptive RLS filtering, the coefficients are adjusted from sample to sample to minimize the mean square error (MSE) between a measured noisy scalar signal and its modeled value from the filter [20, 21]. A scalar, real output signal y_k is measured at the discrete time k, in response to a set of scalar input signals x_k(i), i = 1, 2, ..., n, where n is an arbitrary number of filter taps. For this research, n is set to the number of degrees of freedom to ensure conformity of the resulting filter matrices. The input and output signals are related by the simple regression model

y_k = Σ_{i=0}^{n−1} w(i) · x_k(i) + e_k,      (4)

where e_k represents the measurement error and w(i) represents the proportion that is contained in the primary scalar signal y_k. The implementation of the RLS algorithm is optimized by exploiting the matrix inversion lemma and provides fast convergence and small error rates [22].

System identification of a 32-coefficient FIR filter combined with adaptive RLS filtering was used to process the signals. The forgetting factor was 1, and the vector of initial filter coefficients was 0. Figure 4 shows the frequency signals before and after adaptive RLS filtering.

[Fig. 4 Frequency signals before and after adaptive RLS filtering]

Variable selection

Based on the differences in the frequency domain response signals of intact and cracked eggs, five characteristic descriptors were extracted from the response frequency signals as the inputs of the discrimination model. These are shown in Table 1.

Table 1 Frequency characteristics selection and expression (low frequency band: 1,000–3,720 Hz; middle frequency band: 3,720–7,440 Hz)

Variable  Resonance frequency characteristic                     Expression
X1        Value of the area of amplitude                         X1 = Σ_{i=0}^{512} P_i
X2        Value of the standard deviation of amplitude           X2 = √( Σ_i (P_i − P̄)² / n )
X3        Value of the frequency band of maximum amplitude       X3 = Index_max(P_i)
X4        Mean of the top three frequency amplitude values       X4 = Max_{1:3}(P_i) / 3
X5        Ratio of amplitude values of the middle frequency      X5 = Σ_{i=201}^{400} P_i / Σ_{i=1}^{200} P_i
          band to the low frequency band

Parameter optimization in the SVDD model

The basic concept of SVDD is to map nonlinearly the original data X into a higher-dimensional feature space. The transformation into a higher-dimensional space is implemented by a kernel function [23]. So, the selection of the kernel function has a high influence on the performance of the SVDD model. Several kernel functions have been proposed for the SVDD classifier. Not all kernel functions are equally useful for the SVDD. It has been demonstrated that the Gaussian kernel results in a tighter description and gives a good performance under general smoothness assumptions [24]. Thus, the Gaussian kernel was adopted in this study.

To obtain a good performance, the regularization parameter C and the kernel width σ have to be optimized. Parameter C determines the trade-off between minimizing the training error and minimizing model complexity. By using the Gaussian kernel, the data description transforms from a solid hypersphere to a Parzen density estimator. An appropriate selection of the width parameter σ of the Gaussian kernel is important to the density estimation of the target objects.

There is no systematic methodology for the optimization of these parameters. In this study, the optimization procedure was carried out in two search steps. First, a comparatively large step length was used to search for optimal parameter values. Favorable results of the model were found with values of C between 0.005 and 0.1, and values of σ between 10 and 500. Therefore, a much smaller step length was employed for further searching of these parameters. In the second search step, 50 values of σ with a step of 10 (σ = 10, 20, ..., 500) and 20 values of C with a step of 0.005 (C = 0.005, 0.01, ..., 0.1) were tested simultaneously in building the model. Identification results of the SVDD model as influenced by the values of σ and C are shown in Fig. 5. The optimal model was achieved when σ was equal to 420 and C was equal to 0.085 or 0.09. Here, the identification rates of intact and cracked eggs were both 90% in the prediction set. Furthermore, it was found that the performance of the SVDD model could not be improved by a smaller search step.

[Fig. 5 Identification rates of SVDD models with different values of parameters σ and C]

Comparison of discrimination models

A conventional two-class linear discriminant analysis (LDA) model and an SVM model were used comparatively to classify intact and cracked eggs. The Gaussian kernel was recommended as the kernel function of the SVM model. Parameters of the SVM model were also optimized as in SVDD. Table 2 shows the optimal results from the three discrimination models in the prediction set. Identification rates of intact eggs were both 100% in the LDA and SVM models, but 50 and 35% for cracked eggs, respectively. In other words, at least 50% of the cracked eggs could not be identified by the conventional discrimination models. However, detection of cracked eggs is the task we focus on. The identification rates of intact and cracked eggs were both 90% in the SVDD model. Compared with the conventional two-class discrimination models, the SVDD model showed superior performance in the discrimination of cracked eggs.

Table 2 Comparison of results from three discrimination models (identification rates in the prediction set, %)

Model   Intact eggs   Cracked eggs
LDA     100           50
SVM     100           35
SVDD    90            90

LDA is a linear and parametric method with discriminating character. In terms of a set of discriminant functions, the classifier is said to assign an unknown example X to the corresponding class [25]. In the case of conventional LDA classification, the ultimate decision function is based on sufficient information support from two-class training samples. In general, such classification does not pay enough attention to the samples of the minority class in building the model. It is possible to obtain an inaccurate estimation of the centroid between the two classes. Conventional LDA classification always poorly describes the specific class with scarce training samples. Therefore, it is often impractical to solve the classification problem using a traditional LDA classifier in the case of an imbalanced number of training samples.

The basic concept of SVM is to map the original data X into a higher-dimensional feature space and find the 'optimal' hyperplane boundary to separate the two classes [26]. In SVM classification, the 'optimal' boundary is defined as the most distant hyperplane from both sets, which is also called the 'middle point' between the classification sets. This boundary is expected to be the optimal classification of the sets, since it is the best isolated from the two sets [27]. The margin is the minimal distance from the separating hyperplane to the closest data points [28]. In general, when the information support from both the positive and negative training sets is sufficient and equal, an appropriate separating hyperplane can be obtained. However, when the samples from one class are insufficient to support the separating hyperplane, the hyperplane ends up excessively close to this class. As a result, most of the unknown sets may be recognized as the other class. Therefore, compared with the other discrimination models, SVM showed the poorest performance in discriminating cracked eggs.

Differing from the conventional classification-based approach, SVDD is an approach for one-class classification. It focuses mainly on normal or target objects. SVDD can handle cases with only a few outlier objects. The advantage of SVDD is that the target class can be either one of the two training classes. The selection of the target class depends on the reliability of the information provided by the training samples. In general, the class containing more samples may provide sufficient information, and it can be selected as the target class [29]. Furthermore, SVDD can adapt to the real shape of the samples and find a flexible boundary with a minimum volume by introducing a kernel function. The boundary is described by a few training objects, the support vectors. It is possible to replace normal inner products with kernel functions and obtain more flexible data descriptions [30]. The width parameter σ can be set to give the desired number of support vectors. In addition, extra data in the form of outlier objects can be helpful to improve the performance of the SVDD model.

Conclusions

Detection of cracks in eggshell based on acoustic impulse resonance was attempted in this work. The SVDD method was employed for solving the classification problem where the samples of cracked eggs were not sufficient. The results indicated that detection of cracks in eggshell based on acoustic impulse resonance was feasible, and the SVDD model showed superior performance in contrast to the conventional two-class discrimination models. It can be concluded that SVDD is an excellent method for classification problems with imbalanced sample numbers. It is a promising method that uses the acoustic resonance technique combined with SVDD to detect cracked eggs. Some related ideas would be attempted for further improvement of the performance of the SVDD model in our future work, such as the following: (1) introduce new kernel functions, which can help to obtain a more flexible boundary; (2) try more methods for the selection of parameters to obtain the optimal ones, since the parameters of kernel functions are closely related to the tightness of the constructed boundary and the target rejection rate, and appropriate parameters are important to improve the performance of SVDD models; (3) investigate the contribution of abnormal targets to the calibration model and develop a robust model, which has an excellent ability to deal with abnormal targets.

Acknowledgments  This work is a part of the National Key Technology R&D Program of China (Grant No. 2006BAD11A12). We are grateful to the Web site http://www-ict.ewi.tudelft.nl/~davidt/dd_tools.html, where we downloaded the SVDD Matlab codes free of charge.

References

1. Lin J, Puri VM, Anantheswaran RC (1995) Trans ASAE 38(6):1769–1776
2. Cho HK, Choi WK, Paek JK (2000) Trans ASAE 43(6):1921–1926
3. De Ketelaere B, Coucke P, De Baerdemaeker J (2000) J Agr Eng Res 76:157–163
4. Coucke P, De Ketelaere B, De Baerdemaeker J (2003) J Sound Vib 266:711–721
5. Wang J, Jiang RS (2005) Eur Food Res Technol 221:214–220
6. Jindal VK, Sritham E (2003) ASAE Annual International Meeting, USA
7. Tax DMJ, Duin RPW (1999) Pattern Recognit Lett 20:1191–1199
8. Pan Y, Chen J, Guo L (2009) Mech Syst Signal Process 23:669–681
9. Lee SW, Park JY, Lee SW (2006) Pattern Recognit 39:1809–1812
10. Podsiadlo P, Stachowiak GW (2006) Tribol Int 39:1624–1633
11. Sanchez-Hernandez C, Boyd DS, Foody GM (2007) Ecol Inf 2:83–88
12. Liu YH, Lin SH, Hsueh YL, Lee MJ (2009) Expert Syst Appl 36:1978–1998
13. Cho HW (2009) Expert Syst Appl 36:434–441
14. Tax DMJ, Duin RPW (2001) J Mach Learn Res 2:155–173
15. Tax DMJ, Duin RPW (2004) Mach Learn 54:45–66
16. Guo SM, Chen LC, Tsai JHS (2009) Pattern Recognit 42:77–83
17. De Ketelaere B, Maertens K, De Baerdemaeker J (2004) Math Comput Simul 65:59–67
18. Adall T, Ardalan SH (1999) Comput Elect Eng 25:1–16
19. Madsen AH (2000) Signal Process 80:1489–1500
20. Chase JG, Begoc V, Barroso LR (2005) Comput Struct 83:639–647
21. Wang X, Feng GZ (2009) Signal Process 89:181–186
22. Djigan VI (2006) Signal Process 86:776–791
23. Bu HG, Wang J, Huang XB (2009) Eng Appl Artif Intell 22:224–235
24. Tao Q, Wu GW, Wang J (2005) Pattern Recognit 38:1071–1077
25. Xie JS, Qiu ZD (2007) Pattern Recognit 40:557–562
26. Devos O, Ruckebusch C, Durand A, Duponchel L, Huvenne JP (2009) Chemom Intell Lab Syst 96:27–33
27. Liu X, Lu WC, Jin SL, Li YW, Chen NY (2006) Chemom Intell Lab Syst 82:8–14
28. Chen QS, Zhao JW, Fang CH, Wang DM (2007) Spectrochim Acta Pt A Mol Biomol Spectrosc 66:568–574
29. Huang WL, Jiao LC (2008) Prog Nat Sci 18:455–461
30. Foody GM, Mathur A, Sanchez-Hernandez C, Boyd DS (2006) Remote Sens Environ 104:1–14
The Road (Highway)

The road is a kind of linear construction used for travel. It is made up of the roadbed, the road surface, bridges, culverts and tunnels. In addition, it also includes route crossings, protective works, traffic engineering and route facilities.

The roadbed is the foundation of the road surface, the road shoulders, the side slopes and the side ditches. It is a structure of stone materials, designed according to the route's plan position. The roadbed, as the base for travel, must have enough strength and stability to resist erosion by water and other natural hazards.

The road surface is the surface layer of the road. It is a single-layer or multi-layer structure built with mixtures. The road surface is required to be smooth and to have enough strength, good stability and skid resistance. The quality of the road surface directly affects safety, comfort and traffic flow.
Industrial Power Plants and Steam SystemSteam power plants comprise the major generating and process steam sources throughout the world today. Internal-combustion engine and hydro plants generate less electricity and steam than power plants. For this reason we will give our initial attention in this book to steam power plants and their design application.In the steam power field two major types of plants sever the energy needs of customer-industrial plants for factories and other production facilities-and central-station utility plants for residential, commercial, industrial demands. Of these two types of plants, the industrial power plant probably has more design variations than the utility plant. The reason for this is that the demands of industrial tend to be more varied than the demands of the typical utility customer.To assist the power-plant designer in understanding better variations in plant design, industrial power plants are considered first in this book. And to provide the widest design variables, a power plant serving several process operation and all utility is considered.In the usual industrial power plant, a steam generation and distribution system must be capable of responding to a wide range of operating conditions, and often must be more reliable than the plants electrical system. The system design is often the last to be settled but the first needed for equipment procurement and plant startup. Because of these complications the power plant design evolves slowly, changing over the life of a project.Process steam loadsSteam is a source of power and heating, and may be involved in process reaction. Its applications include serving as a stripping, fluidizing, agitating , atomizing, ejector-motive and direct-heating steam. Its quantities, Pressure Levels and degrees of superheat are set by such process needs.As reaction steam, it becomes a part of the process kinetics, as in H2, ammonia and coal-gasification plants. Although such plants may generate all the steam needed. steam from another source must be provided for startup and backup.The second major process consumption of steam is for indirect heating, such as in distillation-tower reboilers , amine-system reboilers, process heaters, piping tracing and building heating. Because the fluids in these applications generally do not need to be above 350F,steam is a convenient heat source.Again, the quantities of steam required for the services are set by the process design of the facility. There are many options available to the process designer in supplying some of these low-level heat requirements, including heat-exchange system , and circulating heat-transfer-fluid systems, as well as system and electricity. The selection of an option is made early in the design stage and is based predominantly on economic trade-off studies.Generating steam from process heat affords a means of increasing the overall thermal efficiency of a plant. After providing for the recovery of all the heat possible via exchanges, the process designer may be able to reduce cooling requirements by making provisions for the generation of low-pressure(50-150 psig)steam. Although generation at this level may be feasible from a process-design standpoint, the impact of this on the overall steam balance must be considered, because low-pressure steam is excessive in most steam balances, and the generation of additional quantities may worsen the design. 
Decisions of this type call close coordination between the process and utility engineers.Steam is often generated in the convection section of fired process heaters in order to improve a plant’s thermal efficiency. High-pressure steam can be generated in the furnace convection section of process heater, which have radiant heat duty only.Adding a selective –catalytic-reduction unit for the purpose of lowing NOx emissions may require the generation of waste-heat steam to maintain correct operating temperature to the catalytic-reduction unit.Heat from the incineration of waste gases represents still another source of process steam. Waste-heat flues from the CO boilers of fluid-catalytic crackers and from fluid-coking units, for example, are hot enough to provide the highest pressure level in a steam system.Selecting pressure and temperature levelsThe selecting of pressure and temperature levels for a process steam system is based on:(1)moisture content in condensing-steam turbines,(2)metallurgy of the system,(3)turbine water rates,(4)process requirements ,(5)water treatment costs, and(6)type of distribution system.Moisture content in condensing-steam turbines---The selection of pressure and temperature levels normally starts with the premise that somewhere in the system there will be a condensing turbine. Consequently, the pressure and temperature of the steam must be selected so that its moisture content in the last row of turbine blades will be less than 10-13%. In high speed, a moisture content of 10%or less is desirable. This restriction is imposed in order to minimize erosion of blades by water particles. This, in turn, means that there will be a minimum superheat for a given pressure level, turbine efficiency and condenser pressure for which the system can be designed.System mentallurgy- A second pressure-temperature concern in selecting the appropriate steam levels is the limitation imposed by metallurgy. Carbon steel flanges, for example, are limited to a maximum temperature of 750F because of the threat of graphite (carbides) precipitating at grain boundaries. Hence, at 600 psig and less, carbon-steel piping is acceptable in steam distribution systems. Above 600 psig, alloy piping is required. In a 900- t0 1,500-psig steam system, the piping must be either a r/2 carbon-1/2 molybdenum or a l/2 chromium% molybdenum alloyTurbine water rates - Steam requirements for a turbine are expressed as water rate, i.e., lb of steam/bph, or lb of steam/kWh. Actual water rate is a function of two factors: theoretical water rate and turbine efficiency.The first is directly related to the energy difference between the inlet and outlet of a turbine, based on the isentropic expansion of the steam. It is, therefore, a function of the turbine inlet and outlet pressures and temperatures.The second is a function of size of the turbine and the steam pressure at the inlet, and of turbine operation (i.e., whether the turbine condenses steam, or exhausts some of it to an intermediate pressure level). From an energy stand point, the higher the pressure and temperature, the higher the overall cycle efficiency. _Process requirements - When steam levels are being established, consideration must be given to process requirements other than for turbine drivers. For example, steam for process heating will have to be at a high-enough pressure to prevent process fluids from leaking into the steam. 
Steam for pipe tracing must be at a certain minimum pressure so that low-pressure condensate can be recovered.Water treatment costs - The higher the steam pressure, the costlier the boiler feedwater treatment. Above 600 psig, the feedwater almost always must be demineralized; below 600 psig, soft,ening may be adequate. It may have to be of high quality if the steam is used in the process, such as in reactions over a catalyst bed (e.g., in hydrogen production).Type of distribution system - There are two types of systems: local, as exemplified by powerhouse distribution; and complex, by wluch steam is distributed to many units in a process plant. For a small local system, it is not impractical from a cost standpoint for steam pressures to be in the 600-1,500-psig range. For a large system, maintaining pressures within the 150-600-psig range is desirable because of the cost of meeting the alloy requirements for higher-pressure steam distribution system.Because of all these foregoing factors, the steam system in a chemical process complex or oil refinery frequently ends up as a three-level arrangement. The highest level, 600 psig, serves primarily as a source of power. The intermediate level, 150 psig, is ideally suitable for small emergency turbines, tracing off the plot, and process heating. The low level, normally 50 psig, can be used for heating services, tracing within the plot, and process requirements. A higher fourth level normally not justified, except in special cases as when alarge amount ofelectric power must be generated.Whether or not an extraction turbine will be included in the process will have a bearing on the intermediate-pressure level selected, because the extraction pressure should be less than 50% of the high-pressure level, to take into account the pressure drop through the throttle valve and the nozzles of the high-pressure section of' the turbine.Drivers for pumps and compressorsThe choice between a steam and an electric driver for a particular pump or compressor depends on a number of things, including the operational philosophy. In the event of a power failure, it must be possible to shut down a plant orderly and safely if normal operation cannot be continued. For an orderly and safe shutdown, certain services must be available during a power failure: (1) instrument air, (2) cooling water, (3) relief and blow down pump out systems, (4) boiler feedwater pumps, (5) boiler fans, (6) emergency power generators, and (7) fire water pumps.These services are normally supplied by steam or diesel drivers because a plant's steam or diesel emergency system is considered more reliable than an electrical tie-line.The procedure for shutting down process units must be analyzed for each type of processplant and specific design. In general, the following represent the minimum services for which spare pumps driven by steam must be provided: column reflux, bottoms and purge-oil circulation, and heater charging. Most important is to maintain cooling; next, to be able to safely pump the plant's inventory into tanks.Driver selection cannot be generalized; a plan and procedure must be developed for each process unit.The control required for a process is at times another consideration in the selection of a driver. For example, a compressor may be controlled via flow or suction pressure. The ability to vary driver speed, easily obtained with a steam turbine, may be basis for selecting a steam driver instead of a constant-speed induction electric motor. 
This is especially important when the molecular weight of the gas being compressed may vary, as in catalytic-cracking and catalytic-reforming processes.

In certain types of plants, gas flow must be maintained to prevent uncontrollable high-temperature excursions during shutdown. For example, hydrocrackers are purged of heavy hydrocarbons with recycle gas to prevent the exothermic reactions from producing high bed temperatures. Steam-driven compressors can do this during a power failure. Each process operation must be analyzed from such a safety viewpoint when selecting drivers for critical equipment.

The size of a relief and blowdown system can be reduced by installing steam drivers. In most cases, the size of such a system is based on a total power failure. If heat removal is powered by steam drivers, the relief system can be smaller. For example, a steam driver will maintain flow in the pump-around circuit for removing heat from a column during a power failure, reducing the relief load imposed on the flare system. Equipment support services (such as lubrication and seal-oil systems for compressors) that could be damaged during a loss of power should also be powered by steam drivers.

Driver size can also be a factor. An induction electric motor requires large starting currents - typically six times the normal load. The drop in voltage caused by the startup of such a motor imposes a heavy transient demand on the electrical distribution system. For this reason, drivers larger than 10,000 hp are normally steam turbines, although synchronous motors as large as 25,000 hp are used.

The reliability of life-support facilities - e.g., building heat, potable water, pipe tracing, emergency lighting - during power failures is of particular concern in cold climates. In such a case, at least one boiler should be equipped with steam-driven auxiliaries to provide these services.

Lastly, steam drivers are also selected for the purpose of balancing steam systems and avoiding large amounts of letdown between steam levels. Such decisions regarding drivers are made after the steam balances have been refined and the distribution system has been fully defined. There must be sufficient flexibility to allow balancing the steam system under all operating conditions.

Selecting steam drivers
After the number of steam drivers and their services have been established, the utility or process engineer will estimate the steam consumption for making the steam balance. The standard method of doing this is to use the isentropic expansion of steam corrected for turbine efficiency. Actual steam consumption by a turbine is determined via:

SR = (TSR)(bhp)/E

Here, SR = actual steam rate, lb/h; TSR = theoretical steam rate, lb/(bhp)(h); bhp = turbine brake horsepower; and E = turbine efficiency.

When exhaust steam can be used for process heating, the highest thermodynamic efficiency can be achieved by means of backpressure turbines. Large drivers, which are of high efficiency and require low theoretical steam rates, are normally supplied by the high-pressure header, thus minimizing steam consumption. Small turbines that operate only in emergencies can be allowed to exhaust to atmosphere. Although their water rates are poor, the water lost in short-duration operation may not represent a significant cost. Such turbines obviously play a small role in steam-balance planning.

Constructing steam balances
After the process and steam-turbine demands have been established, the next step is to construct a steam balance for the chemical complex or oil refinery.
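A minimal sketch of the bookkeeping such a balance involves follows; the headers match the three-level arrangement discussed earlier, but every producer, consumer and flow figure is hypothetical.

```python
# Minimal single-case steam balance sketch. All flows are hypothetical (lb/h);
# positive values are production into a header, negative values are consumption from it.
headers = {
    "600 psig": {"boilers": +330_000, "waste-heat steam": +80_000,
                 "turbine drivers": -350_000, "600/150 letdown": -60_000},
    "150 psig": {"backpressure exhaust": +250_000, "600/150 letdown": +60_000,
                 "process heating": -220_000, "tracing": -55_000, "150/50 letdown": -35_000},
    "50 psig":  {"150/50 letdown": +35_000, "low-level heating": -30_000, "deaerator": -5_000},
}

for level, flows in headers.items():
    surplus = sum(flows.values())
    status = "balanced" if surplus == 0 else f"imbalance of {surplus:+,} lb/h to resolve"
    print(f"{level}: {status}")
```

Each operating mode listed below would get its own case of this kind, with boiler firing and letdown flows adjusted by trial and error until every header closes.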
A sample balance is shown in Fig. 1-4. It shows steam production and consumption, the header systems, letdown stations, and the boiler plant. It illustrates a normal (winter) case.

It should be emphasized that there is not one balance but a series, representing a variety of operating modes. The object of the balances is to determine the design basis for establishing boiler size, letdown-station and deaerator capacities, boiler feedwater requirements, and steam flows in various parts of the system. The steam balances should cover the following operating modes: normal, all units operating; winter and summer conditions; shutdown of major units; startup of major units; loss of the largest condensate source; power failure with the flare in service; loss of large process steam generators; and variations in consumption by large steam users. From 50 to 100 steam balances could be required to adequately cover all the major impacts on the steam system of a large complex.

At this point, the general basis of the steam-system design should have been developed by the completion of the following work:
1. All significant loads have been examined, with particular attention focused on those for which there is relatively little design freedom - i.e., reboilers, sparing steam for process units, and large turbines required because of electric-power limitations and for shutdown safety.
2. Loads have been listed for which the designer has some liberty in selecting drivers. These selections are based on analyses of cost competitiveness.
3. Steam pressure and temperature levels have been established.
4. The site plan has been reviewed to ascertain where it is not feasible to deliver steam or recover condensate, because piping costs would be excessive.
5. Data on the process units are collected according to the pressure level and use of steam - i.e., for the process, condensing drivers and backpressure drivers.
6. After Step 5, the system is balanced by trial-and-error calculations or computerized techniques to determine boiler, letdown, deaerator and boiler feedwater requirements.
7. Because the possibility of an electric power failure normally imposes one of the major steam requirements, normal operation and the eventuality of such a failure must both be investigated, as a minimum.

Checking the design basis
After the foregoing steps have been completed, the following should be checked:

Boiler capacity - Installed boiler capacity should be the maximum calculated (with an allowance of 10-20% for uncertainties in the balance), corrected for the number of boilers operating (and on standby). The balance plays a major role in establishing normal-case boiler specifications, both number and size. Maximum firing typically is based on the emergency case. Normal firing typically establishes the number of boilers required, because each boiler will have to be shut down once a year for the code-required drum inspection. Full-firing levels of the remaining boilers will be set by the normal steam demand. The number of units required (e.g., three 50% units, four 33% units, etc.) in establishing installed boiler capacity is determined from cost studies. It is generally considered double-jeopardy design to assume that a boiler will be out of service during a power failure.
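A small sketch of this sizing check follows; the normal-demand figure and the candidate unit sizes are assumptions, while the 2,200,000-lb/h emergency demand anticipates the example used later in the text.

```python
# Minimal sketch of the installed-capacity checks described above.
def plan_is_adequate(n_units: int, unit_capacity: float, normal: float,
                     emergency: float, allowance: float = 0.15) -> bool:
    """True if all boilers cover the emergency balance (plus a 10-20% allowance)
    and the remaining boilers cover normal demand with one unit down for inspection."""
    covers_emergency = n_units * unit_capacity >= emergency * (1.0 + allowance)
    covers_normal_one_down = (n_units - 1) * unit_capacity >= normal
    return covers_emergency and covers_normal_one_down

normal_demand, emergency_demand = 1_600_000, 2_200_000   # lb/h (assumed / from later example)
for n, size in [(3, 900_000), (4, 650_000)]:             # candidate boiler plans
    ok = plan_is_adequate(n, size, normal_demand, emergency_demand)
    print(f"{n} x {size:,} lb/h boilers adequate: {ok}")
```

Where more than one plan passes, as here, the choice between them falls to the cost studies just mentioned.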
Minimum boiler turndown - Most fuel-fired boilers can be operated down to approximately 20% of the maximum continuous rate. The minimum load should not be expected to fall below this level.

Differences between normal and maximum loads - If the maximum load results from an emergency (such as a power failure), consideration should be given to shedding process steam loads under this condition in order to minimize installed boiler capacity. However, the consequences of shedding should be investigated by the process designer and the operating engineers to ensure the safe operation of the entire process.

Low-level steam consumption - The key to any steam balance is the disposition of low-level steam. Surplus low-level steam can be reduced only by including more condensing steam turbines in the system, or by devising more process applications for it, such as absorption refrigeration for cooling process streams and Rankine-cycle systems for generating power. In general, balancing the supply and consumption of low-level steam is a critical factor in the design of the steam system.

Quantity of steam at pressure-reducing stations - Because useful work is not recovered from the steam passing through a pressure-reducing station, such flow should be kept to a minimum. In the Fig. 1-5 150/50-psig station, a flow of only 35,000 lb/h was established as normal for this steam-balance case (normal, winter). The loss of steam users on the 50-psig system should be considered, particularly of the large users, because a shutdown of one may demand that the 150/50-psig station close off beyond its controllable limit. If this happened, the 50-psig header would be out of control, and an immediate pressure buildup in the header would begin, setting off the safety relief valves. The station's full-open capacity should also be checked to ensure that it can make up any 50-psig steam that may be lost through the shutdown of a single large 50-psig source (a turbine sparing a large electric motor, for example). It would be undesirable for the station to be sized so that it opens more than 80%. In some cases, rangeability requirements may dictate two valves (one small and one large).

Intermediate pressure level - If large steam users or suppliers may come on stream or go off stream, the normal (day-to-day) operation should be checked. No such change in normal operation should result in a significant upset (e.g., relief valves set off, or the system pressure control lost). If a large load is lost, the steam supply should be reduced by the letdown station. If the load suddenly increases, the 600/150-psig station must be capable of supplying the additional steam. If steam generated via the process disappears, the station must be capable of making up the load. If 150-psig steam is generated unexpectedly, the 600/150-psig station must be able to handle the cutback.

The important point here is that where the steam flow could rise to 700,000 lb/h, this flow should be reduced by a cutback at the 600/150-psig station, not by an increase in the flow to the lower pressure level, because this steam would have nowhere to go. The normal (600/150-psig) letdown station must be capable of handling some of the negative load swings, even though, overall, this letdown needs to be kept to a minimum. On the other hand, shortages of steam at the 150-psig level can be made up relatively easily via the 600/150-psig station.
Such shortages are routinely small in quantity or duration, or both (startup, purging, electric-drive maintenance, process-unit shutdown, etc.).

High-pressure level - Checking the high-pressure level is generally more straightforward because rate control takes place directly at the boilers. Firing can be increased or lowered to accommodate a shortage or surplus.

Typical steam-balance cases
The Fig. 1-4 steam balance represents steady-state conditions: winter operation, all process units operating, and no significant unusual demands for steam. An analysis similar to the foregoing might also be required for the normal summertime case, in which a single upset must not jeopardize control but the load may be less (no tank heating, pipe tracing, etc.).

The balance representing an emergency (e.g., loss of electric power) is significant. In this case, the pertinent test is the system's ability simply to weather the upset, not to maintain normal, stable operation. The maximum relief pressure that would develop in any of the headers represents the basis for sizing relief valves. The loss of boiler feedwater or condensate return, or both, could result in a major upset, or even a shutdown.

Header pressure control during upsets
At the steady-state conditions associated with the multiplicity of balances, boiler capacity can be adjusted to meet user demands. However, boiler load cannot be changed quickly to accommodate a sharp upset. Response rate is typically limited to 20% of capacity per minute. Therefore, other elements must be relied on to control header pressures during transient conditions. The roles of several such elements in controlling pressures in the three main headers during transient conditions are listed in Table 1-3. A control system having these elements will result in a steam system capable of dealing with the transient conditions experienced in moving from one balance point to another.

Tracking steam balances
Because of schedule constraints, steam balances and boiler size are normally established early in the design stage. These determinations are based on assumptions regarding turbine efficiencies, process steam generated in waste-heat furnaces, and other quantities of steam that depend on purchased equipment. Therefore, a sufficient number of steam balances should be tracked through the design period to ensure that the equipment purchased will satisfy the original design concept of the steam system. This tracking represents an excellent application for a utility database system and a linear programming model of the system. During the course of the mechanical design of a large "grass roots" complex, 40 steam balances were continuously updated for changes in steam loads via such an application.

Cost tradeoffs
To design an efficient but least-expensive system, the designer ideally develops a total minimum-cost curve - one that incorporates all the pertinent costs related to capital expenditures, installation, fuel, utilities, operations and maintenance - and performs a cost study of the final system. However, because the designer is under the constraint of keeping to a project schedule, major, highly expensive equipment must be ordered early in the project, when many key parts of the design puzzle are not available (e.g., a complete load summary, turbine water rates, equipment efficiencies and utility costs). A practical alternative is to rely on comparative-cost estimates, as are conventionally used in assisting with engineering decision points.
This approach is particularly useful in making early equipment selections, when fine-tuning is not likely to alter decisions - such as the number of boilers required, whether boilers should be shop-fabricated or field-erected, and the practicality of generating steam from waste heat or via cogeneration.

The significant elements of a steam-system comparative-cost study are the costs for: equipment and installation; ancillaries (i.e., miscellaneous items required to support the equipment, such as additional stacks, upgraded combustion control, more extensive blowdown facilities, etc.); operation (annual); maintenance (annual); and utilities. The first two costs may be obtained from in-house data or from vendors. Operating and maintenance costs can be factored from the capital cost of the equipment, based on an assessment of the reliability of the purchased equipment. Utility costs are generally the most difficult to establish at an early stage because sources frequently depend on the site of the plant. Some examples of such costs are: purchased fuel gas - $5.35/million Btu; raw water - $0.60/1,000 gal; electricity - $0.07/kWh; and demineralized boiler feedwater - $1.50/1,000 gal. The value of steam at the various pressure levels can be developed [5].

Let it be further assumed that the emergency balance requires 2,200,000 lb/h of steam (all boilers available). Listed in Table 1-4 are some combinations of boiler installations that meet the design conditions previously stipulated. Table 1-4 indicates that any of several combinations of power-boiler number and size could meet both normal and emergency demand. Therefore, a comparative-cost analysis would be made to assist in making an early decision regarding the number and size of the power boilers. (Table 1-4 is based on field-erected, industrial-type boilers. Conventional sizing of this type of boiler might range from 100,000 lb/h through 2,000,000 lb/h each.)

An alternative would be the packaged-boiler option (although it does not seem practical at this load level). Because it is shop-fabricated, this type of boiler affords a significant saving in field installation cost. Such boilers are available up to a nominal capacity of 100,000 lb/h, with some versions up to 250,000 lb/h.

Selecting the turbine water rate (i.e., efficiency) represents another major cost concern. Beyond the recognized payout period (e.g., 3 years), the cost of drive steam can be significant in comparison with the equipment capital cost. The typical 30% efficiency of the medium-pressure backpressure turbine can be boosted significantly.

Driver selections are frequently made with the help of cost-tradeoff studies, unless overriding considerations preclude a drive medium. Electric pump drives are typically recommended on the basis of such studies. Steam tracing has long been the standard way of winterizing piping, not only because of its history of successful performance but also because it is an efficient way to use low-pressure steam.

Design considerations
As the steam system evolves, the designer identifies steam loads and pressure levels, locates steam loads, checks safety aspects, and prepares cost-tradeoff studies, in order to provide low-cost energy safely, always remaining aware of the physical entity that will arise from the design. How are design concepts translated into a design document?
And what basic guidelines will ensure that the physical plant will represent what was intended conceptually? Basic to achieving these ends is the piping and instrument diagram (familiar as the P&ID). Although it is drawn up primarily for the piping designer's benefit, it also plays a major role in communicating the process-control strategy to the instrumentation designer, as well as in conveying specialty information to electrical, civil, structural, mechanical and architectural engineers. It is the most important document for representing the specification of the steam system.
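As a closing illustration of the comparative-cost screening described above, the sketch below weighs a steam-turbine pump drive against an electric motor. The $0.07/kWh electricity price comes from the utility-cost examples; the steam value, operating hours, duty and efficiencies are assumptions added for the example, and any credit for useful backpressure-exhaust steam is deliberately ignored to keep the sketch short.

```python
# Minimal driver cost-comparison sketch. Only the $0.07/kWh figure comes from the text;
# everything else below is an illustrative assumption.
HOURS_PER_YEAR = 8_000     # assumed on-stream hours
STEAM_VALUE = 6.00         # assumed $/1,000 lb for 600-psig steam
ELECTRICITY = 0.07         # $/kWh, from the utility-cost examples above

def turbine_annual_cost(bhp: float, water_rate_lb_per_bhp_h: float) -> float:
    steam_lb_per_year = bhp * water_rate_lb_per_bhp_h * HOURS_PER_YEAR
    return steam_lb_per_year / 1_000.0 * STEAM_VALUE

def motor_annual_cost(bhp: float, motor_efficiency: float = 0.95) -> float:
    kwh_per_year = bhp * 0.7457 / motor_efficiency * HOURS_PER_YEAR
    return kwh_per_year * ELECTRICITY

duty = 1_000.0  # bhp pump service
print(f"turbine drive: ${turbine_annual_cost(duty, 29.0):,.0f}/yr")
print(f"motor drive:   ${motor_annual_cost(duty):,.0f}/yr")
```

With figures of this kind the motor usually wins on operating cost alone, which is consistent with the observation above that electric pump drives are typically recommended by such studies; crediting exhaust steam used for process heating can shift the result toward a backpressure turbine.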
Undergraduate Graduation Design (Thesis) Foreign-Language Translation
Translated title: Construction Bidding and the Winner's Curse: A Game Theory Approach
School: School of Economics and Management
Major: Engineering Cost Management
Student name: **    Student number: ************    Supervisor: ***
Date of completion: April 5, 2017
Translated from: Muaz O. Ahmed; Islam H. El-adaway, M.ASCE; Kalyn T. Coatney; and Mohamed S. Eid. Construction Bidding and the Winner's Curse: Game Theory Approach [J]. Construction Engineering and Management, 2016, 142(2).

Construction Bidding and the Winner's Curse: A Game Theory Approach
Muaz O. Ahmed; Islam H. El-adaway, M.ASCE; Kalyn T. Coatney; and Mohamed S. Eid
Department of Civil and Environmental Engineering, Mississippi State University, USA

Abstract: In the construction industry, competitive bidding has long been the method by which contractors are selected.
Because the true cost of construction is not known until the project is complete, adverse selection is a significant problem. Adverse selection occurs when the winner of the contract has underestimated the true cost of the project, so that the winning contractor is very likely to earn negative, or at least below-normal, profits. The winner's curse arises when the winning bidder submits an underestimated bid and is therefore "cursed" by being selected to undertake the project. In a multistage bidding environment, where subcontractors are hired by the general contractor, the winner's curse may be compounded. In general, contractors suffer the winner's curse for a variety of reasons, including inaccurate estimates of project cost; new contractors entering the construction market; cutting losses during a downturn in the construction industry; intense competition in the construction market; poor opportunity costs that influence contractors' behavior; and the desire to win the project and then recover the losses through change orders, claims, and other mechanisms. This paper uses a game theory approach to analyze and reduce the potential impact of the winner's curse in construction bidding. To this end, the authors determine the extent of the winner's curse in two common construction bidding environments, namely single-stage and multistage bidding. The objective is to compare these two construction bidding environments and to determine how learning from past bidding decisions and experience can mitigate the winner's curse.
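A hedged illustration of the mechanism described in the abstract is sketched below: all bidders face a common but unknown true cost, each bids its noisy estimate plus a fixed markup, and the lowest bid wins. The parameter values are arbitrary, and this toy model is far simpler than the game-theoretic treatment in the paper.

```python
# Minimal Monte Carlo sketch of the winner's curse in low-bid contracting.
# All parameters are illustrative; this is not the paper's model.
import random

def simulate(n_bidders: int, true_cost: float = 1_000_000, est_error: float = 0.10,
             markup: float = 0.05, trials: int = 10_000) -> None:
    total_profit, losses = 0.0, 0
    for _ in range(trials):
        estimates = [random.gauss(true_cost, est_error * true_cost) for _ in range(n_bidders)]
        winning_bid = min(e * (1 + markup) for e in estimates)  # lowest bid wins the contract
        profit = winning_bid - true_cost
        total_profit += profit
        losses += profit < 0
    print(f"{n_bidders:2d} bidders: mean winner profit = {total_profit / trials:+10,.0f}, "
          f"loss frequency = {losses / trials:.0%}")

if __name__ == "__main__":
    random.seed(1)
    for n in (3, 6, 12):  # more competition strengthens the curse
        simulate(n)
```

Even though every bidder adds a positive markup, the low bidder tends to be the one who most underestimated the cost, so the winner's expected profit falls, and eventually turns negative, as the number of bidders grows.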
Integrated Circuit
An integrated circuit or monolithic integrated circuit (also referred to as IC, chip, or microchip) is an electronic circuit manufactured by the patterned diffusion of trace elements into the surface of a thin substrate of semiconductor material. Additional materials are deposited and patterned to form interconnections between semiconductor devices. Integrated circuits are used in virtually all electronic equipment today and have revolutionized the world of electronics. Computers, mobile phones, and other digital appliances are now inextricable parts of the structure of modern societies, made possible by the low cost of production of integrated circuits.

Introduction
ICs were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.

There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, much less material is used to construct a packaged IC than to construct a discrete circuit. Performance is high because the components switch quickly and consume little power (compared to their discrete counterparts) as a result of the small size and close proximity of the components. As of 2006, typical chip areas range from a few square millimeters to around 350 mm2, with up to 1 million transistors per mm2.

Terminology
Integrated circuit originally referred to a miniaturized electronic circuit consisting of semiconductor devices, as well as passive components bonded to a substrate or circuit board.[1] This configuration is now commonly referred to as a hybrid integrated circuit. Integrated circuit has since come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit.[2]

Invention
Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate arranged in a 2-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.

The idea of the integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence, Geoffrey W.A. Dummer (1909-2002). Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952.[4] He gave many symposia publicly to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956.

A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a two- or three-dimensional compact grid.
This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program. However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.

Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated circuit on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated." Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit, and his work was named an IEEE Milestone in 2009.

Robert Noyce also came up with his own idea of an integrated circuit, half a year later than Kilby. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.

Generations
In the early days of integrated circuits, only a few transistors could be placed on a chip, as the scale used was large because of the contemporary technology, and manufacturing yields were low by today's standards. As the degree of integration was small, the design was done easily. Over time, millions, and today billions, of transistors could be placed on one chip, and making a good design became a task to be planned thoroughly. This gave rise to new design methods.

SSI, MSI and LSI
The first integrated circuits contained only a few transistors. Called "small-scale integration" (SSI), digital circuits containing transistors numbering in the tens provided only a few logic gates, for example, while early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The term Large Scale Integration was first used by IBM scientist Rolf Landauer when describing the theoretical concept; from there came the terms for SSI, MSI, VLSI, and ULSI.

SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology, while the Minuteman missile forced it into mass production. The Minuteman missile program and various other Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government space and defense spending still accounted for 37% of the $312 million total production. The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow firms to penetrate the industrial and eventually the consumer markets.
The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968.[13] Integrated circuits began to appear in consumer products by the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). They were attractive economically because, while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "large-scale integration" (LSI) in the mid-1970s, with tens of thousands of transistors per chip. Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.

VLSI
The final step in the development process, starting in the 1980s and continuing through the present, was "very-large-scale integration" (VLSI). The development started with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2009. Multiple developments were required to achieve this increased density. Manufacturers moved to smaller design rules and cleaner fabrication facilities, so that they could make chips with more transistors and maintain adequate yield. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Better texts, such as the landmark textbook by Mead and Conway, helped schools educate more designers, among other factors.

In 1986 the first one-megabit RAM chips were introduced, which contained more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989 and the billion-transistor mark in 2005.[14] The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.[15]

ULSI, WSI, SOC and 3D-IC
To reflect further growth of the complexity, the term ULSI, which stands for "ultra-large-scale integration", was proposed for chips of complexity of more than 1 million transistors.

Wafer-scale integration (WSI) is a system of building very-large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.

A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements.
However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging).

A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.

Advances in integrated circuits
Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers and cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.

ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality - see Moore's law which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves - the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power-consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).

In current research projects, integrated circuits are also being developed for sensor applications in medical implants and other bioelectronic devices. Particular sealing strategies have to be adopted in such biogenic environments to avoid corrosion or biodegradation of the exposed semiconductor materials.[16] As one of the few materials well established in CMOS technology, titanium nitride (TiN) has turned out to be exceptionally stable and well suited for electrode applications in medical implants.[17][18]

Classification
Integrated circuits can be classified into analog, digital and mixed-signal (both analog and digital on the same chip). Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration.
These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.

Analog ICs, such as sensors, power-management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by having expertly designed analog circuits available instead of designing a difficult analog circuit from scratch.

ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.

Manufacturing
Fabrication
[Figure: Rendering of a small standard cell with three metal layers (dielectric has been removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten. The reddish structures are poly-silicon gates, and the solid at the bottom is the crystalline silicon bulk.]
[Figure: Schematic structure of a CMOS chip, as built in the early 2000s, showing LDD-MISFETs on an SOI substrate with five metallization layers and a solder bump for flip-chip bonding. It also shows the sections for FEOL (front end of line), BEOL (back end of line) and the first parts of the back-end process.]

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon monocrystals are the main substrate used for ICs, although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals without defects in the crystalline structure of the semiconducting material.

Semiconductor ICs are fabricated in a layer process which includes these key process steps:
∙Imaging
∙Deposition
∙Etching
The main process steps are supplemented by doping and cleaning.
∙Integrated circuits are composed of many overlapping layers, each defined by photolithography, and normally shown in different colors. Some layers mark where various dopants are diffused into the substrate (called diffusion layers), some define where additional ions are implanted (implant layers), some define the conductors (poly-silicon or metal layers), and some define the connections between the conducting layers (via or contact layers). All components are constructed from a specific combination of these layers.
∙In a self-aligned CMOS process, a transistor is formed wherever the gate layer (poly-silicon or metal) crosses a diffusion layer.
∙Capacitive structures, in form very much like the parallel conducting plates of a traditional electrical capacitor, are formed according to the area of the "plates", with insulating material between the plates.
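A hedged sketch of the usual parallel-plate estimate behind such structures follows; the dimensions and dielectric constant are illustrative assumptions, not values for any particular process.

```python
# Minimal parallel-plate estimate for an on-chip capacitive structure.
# Dimensions and dielectric constant are illustrative assumptions.
EPSILON_0 = 8.854e-12  # F/m, permittivity of free space

def plate_capacitance(area_um2: float, dielectric_thickness_nm: float, k: float) -> float:
    """C = k * eps0 * A / d, returned in farads."""
    area_m2 = area_um2 * 1e-12
    d_m = dielectric_thickness_nm * 1e-9
    return k * EPSILON_0 * area_m2 / d_m

# A 100 um x 100 um plate over 10 nm of silicon dioxide (k ~ 3.9).
c = plate_capacitance(area_um2=100 * 100, dielectric_thickness_nm=10.0, k=3.9)
print(f"{c * 1e12:.1f} pF")  # roughly 35 pF
```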
Capacitors of a wide range of sizes are common on ICs.
∙Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most logic circuits do not need any resistors. The ratio of the length of the resistive structure to its width, combined with its sheet resistivity, determines the resistance.
∙More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.

Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar devices.

A random-access memory is the most regular type of integrated circuit; the highest-density devices are thus memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate - with widths which have been shrinking for decades - the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminum (or gold) bond wires which are welded and/or thermosonically bonded to pads, usually found around the edge of the die. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, and/or higher-cost devices.

As of 2005, a fabrication facility (commonly known as a semiconductor fab) costs over $1 billion to construct,[19] because much of the operation is automated. Today, the most advanced processes employ the following techniques:
∙The wafers are up to 300 mm in diameter (wider than a common dinner plate).
∙Use of a 32-nanometer or smaller chip manufacturing process. Intel, IBM, NEC, and AMD are using ~32 nanometers for their CPU chips. IBM and AMD introduced immersion lithography for their 45 nm processes.[20]
∙Copper interconnects, where copper wiring replaces aluminium for interconnects.
∙Low-K dielectric insulators.
∙Silicon on insulator (SOI).
∙Strained silicon in a process used by IBM known as strained silicon directly on insulator (SSDOI).
∙Multigate devices such as tri-gate transistors, manufactured by Intel from 2011 in their 22 nm process.

Packaging
In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages on high-end microprocessors to land grid array (LGA) packages.

Ball grid array (BGA) packages have existed since the 1970s. Flip-chip ball grid array packages, which allow for a much higher pin count than other package types, were developed in the 1990s.
In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery.

Traces out of the die, through the package, and into the printed circuit board have very different electrical properties compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.

When multiple dies are put in one package, it is called an SiP, for System In Package. When multiple dies are combined on a small substrate, often ceramic, it is called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.

Chip labeling and manufacture date
Most integrated circuits large enough to include identifying information include four common sections: the manufacturer's name or logo, the part number, a part production batch number and/or serial number, and a four-digit code that identifies when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the chip characteristics. The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.

Legal protection of semiconductor chip layouts
Like most of the other forms of intellectual property, IC layout designs are creations of the human mind. They are usually the result of an enormous investment, both in terms of the time of highly qualified experts and financially. There is a continuing need for the creation of new layout-designs which reduce the dimensions of existing integrated circuits and simultaneously increase their functions. The smaller an integrated circuit, the less the material needed for its manufacture, and the smaller the space needed to accommodate it. Integrated circuits are utilized in a large range of products, including articles of everyday use, such as watches, television sets, washing machines, automobiles, etc., as well as sophisticated data processing equipment.

The possibility of copying by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is the main reason for the introduction of legislation for the protection of layout-designs. A diplomatic conference was held at Washington, D.C., in 1989, which adopted a Treaty on Intellectual Property in Respect of Integrated Circuits (IPIC Treaty). The Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty (signed at Washington on May 26, 1989), is currently not in force, but was partially integrated into the TRIPS agreement. National laws protecting IC layout designs have been adopted in a number of countries.

Other developments
In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders and registers.
Current devices called field-programmable gate arrays can now implement tens of thousands of LSI circuits in parallel and operate up to 1.5 GHz (Achronix holding the speed record).

The techniques perfected by the integrated circuits industry over the last three decades have been used to create very small mechanical devices driven by electricity using a technology known as microelectromechanical systems. These devices are used in a variety of commercial and military applications. Example commercial applications include DLP projectors, inkjet printers, and accelerometers used to deploy automobile airbags.

In the past, radios could not be fabricated in the same low-cost processes as microprocessors. But since 1998, a large number of radio chips have been developed using CMOS processes. Examples include Intel's DECT cordless phone and Atheros's 802.11 card.

Future developments seem to follow the multi-core multi-microprocessor paradigm, already used by the Intel and AMD dual-core processors. Intel recently unveiled a prototype, "not for commercial sale" chip that bears 80 microprocessors. Each core is capable of handling its own task independently of the others. This is in response to the heat-versus-speed limit that is about to be reached using existing transistor technology. This design provides a new challenge to chip programming. Parallel programming languages such as the open-source X10 programming language are designed to assist with this task.
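Looking back at the chip-labeling convention described earlier, the four-digit date code is simple to decode mechanically; the short sketch below assumes a 1900s century, since the code itself does not carry one.

```python
# Minimal sketch: decode a two-digit-year / two-digit-week date code such as "8341".
# The century is an assumption; real parts rely on context to resolve it.
def decode_date_code(code: str, century: int = 1900) -> tuple[int, int]:
    year = century + int(code[:2])
    week = int(code[2:])
    return year, week

year, week = decode_date_code("8341")
print(f"manufactured in week {week} of {year}")  # week 41 of 1983, i.e. about October 1983
```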
Chinese-English Translation Requirements for the Graduation Design (Thesis)

English translation of the Graduation Thesis:
1. Accuracy: The English translation of the Graduation Thesis should accurately reflect the content and meaning of the original Chinese text. It should convey the same ideas and arguments as presented in the original text.
2. Clarity: The translation should be clear and easy to understand. The language used should be appropriate and the sentences should be well-structured.
3. Grammar and Syntax: The translation should follow the rules of English grammar and syntax. There should be no grammatical errors or awkward sentence constructions.
4. Vocabulary: The translation should make use of appropriate vocabulary that is relevant to the topic of the Graduation Thesis. Technical terms and concepts should be accurately translated.
5. Style: The translation should maintain the academic style and tone of the original Chinese text. It should use formal language and avoid colloquial or informal expressions.
6. References: If the Graduation Thesis includes citations or references, the English translation should accurately reflect these citations and references. The formatting of citations and references should follow the appropriate style guide.
7. Proofreading: The English translation should be thoroughly proofread to ensure there are no spelling or punctuation errors. It should also be reviewed for any inconsistencies or inaccuracies.

Minimum word count: The English translation of the Graduation Thesis should be at least 1,200 words. This requirement ensures that the translation adequately captures the main points and arguments of the original text.

It is important to note that there may be specific guidelines or requirements provided by your academic institution or supervisor for the translation of your Graduation Thesis. Please consult these guidelines and follow them accordingly.
Bridge Waterway Openings
In a majority of cases the height and length of a bridge depend solely upon the amount of clear waterway opening that must be provided to accommodate the floodwaters of the stream. Actually, the problem goes beyond that of merely accommodating the floodwaters and requires prediction of the various magnitudes of floods for given time intervals. It would be impossible to state that some given magnitude is the maximum that will ever occur, and it is therefore impossible to design for the maximum, since it cannot be ascertained. It seems more logical to design for a predicted flood of some selected interval - a flood magnitude that could reasonably be expected to occur once within a given number of years. For example, a bridge may be designed for a 50-year flood interval; that is, for a flood which is expected (according to the laws of probability) to occur on the average of one time in 50 years. Once this design flood frequency, or interval of expected occurrence, has been decided, the analysis to determine a magnitude is made. Whenever possible, this analysis is based upon gauged stream records. In areas and for streams where flood frequency and magnitude records are not available, an analysis can still be made. With data from gauged streams in the vicinity, regional flood frequencies can be worked out; with a correlation between the computed discharge for the ungauged stream and the regional flood frequency, a flood frequency curve can be computed for the stream in question.

Highway Culverts
Any closed conduit used to conduct surface runoff from one side of a roadway to the other is referred to as a culvert. Culverts vary in size from large multiple installations used in lieu of a bridge to small circular or elliptical pipe, and their design varies in significance. Accepted practice treats conduits under the roadway as culverts. Although the unit cost of culverts is much less than that of bridges, they are far more numerous, normally averaging about eight to the mile, and in total they represent a greater cost in a highway. Statistics show that about 15 cents of the highway construction dollar goes to culverts, as compared with 10 cents for bridges. Culvert design is therefore equally as important as that of bridges or other phases of highway design and should be treated accordingly.

Municipal Storm Drainage
In urban and suburban areas, runoff waters are handled through a system of drainage structures referred to as storm sewers and their appurtenances. The drainage problem is increased in these areas primarily for two reasons: the impervious nature of the area creates a very high runoff, and there is little room for natural water courses. It is often necessary to collect the entire storm water into a system of pipes and transmit it over considerable distances before it can be loosed again as surface runoff. This collection and transmission further increase the problem, since all of the water must be collected with virtually no ponding, thus eliminating any natural storage; and through increased velocity the peak runoffs are reached more quickly. Also, the shorter times of peaks cause the system to be more sensitive to short-duration, high-intensity rainfall. Storm sewers, like culverts and bridges, are designed for storms of various intensity-return-period relationships, depending upon the economy and amount of ponding that can be tolerated.
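The return-period idea that runs through this discussion (the 50-year design flood above, and the intensity-return-period relationships for storm sewers) can be made concrete with a short probability sketch; a 50-year event and independence between years are assumed purely for illustration.

```python
# Minimal sketch: probability that a T-year event is equaled or exceeded at least
# once during an n-year period, assuming each year is independent.
def exceedance_risk(return_period_years: float, n_years: int) -> float:
    return 1.0 - (1.0 - 1.0 / return_period_years) ** n_years

# A structure designed for the 50-year flood still faces a large cumulative risk.
for life in (10, 50, 100):
    print(f"50-year flood over {life} years: {exceedance_risk(50, life):.0%} chance of occurrence")
```

This is why the design interval is an economic choice: a longer return period lowers the risk accepted over the structure's life, but at the cost of a larger waterway opening or drainage system.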
Airport Drainage
The problem of providing proper drainage facilities for airports is similar in many ways to that of highways and streets. However, because of the large and relatively flat surface involved, the varying soil conditions, the absence of natural water courses and possible side ditches, and the greater concentration of discharge at the terminus of the construction area, some phases of the problem are more complex. For the average airport the overall area to be drained is relatively large and an extensive drainage system is required. The magnitude of such a system makes it even more imperative that sound engineering principles based on all of the best available data be used to ensure the most economical design. Overdesign of facilities results in excessive money investment with no return, and underdesign can result in conditions hazardous to the air traffic using the airport.

In order to ensure surfaces that are smooth, firm, stable, and reasonably free from flooding, it is necessary to provide a system which will do several things. It must collect and remove the surface water from the airport surface; intercept and remove surface water flowing toward the airport from adjacent areas; collect and remove any excessive subsurface water beneath the surface of the airport facilities and in many cases lower the ground-water table; and provide protection against erosion of the sloping areas.

Ditches and Cut-slope Drainage
A highway cross section normally includes one and often two ditches paralleling the roadway. Generally referred to as side ditches, these serve to intercept the drainage from slopes and to conduct it to where it can be carried under the roadway or away from the highway section, depending upon the natural drainage. To a limited extent they also serve to conduct subsurface drainage from beneath the roadway to points where it can be carried away from the highway section. A second type of ditch, generally referred to as a crown ditch, is often used for the erosion protection of cut slopes. This ditch along the top of the cut slope serves to intercept surface runoff from the slopes above and conduct it to natural water courses on milder slopes, thus preventing the erosion that would be caused by permitting the runoff to spill down the cut faces.

12 Construction Techniques
The decision of how a bridge should be built depends mainly on local conditions. These include the cost of materials, available equipment, allowable construction time and environmental restrictions. Since all of these vary with location and time, the best construction technique for a given structure may also vary.

Incremental Launching or Push-out Method
In this form of construction the deck is pushed across the span with hydraulic rams or winches. Decks of prestressed post-tensioned precast segments and of steel girders have been erected in this way. Usually spans are limited to 50~60 m to avoid excessive deflection and cantilever stresses, although greater distances have been bridged by installing temporary support towers. Typically the method is most appropriate for long, multi-span bridges in the range 300~600 m, but much shorter and longer bridges have been constructed. Unfortunately, this very economical mode of construction can only be applied when both the horizontal and vertical alignments of the deck are perfectly straight, or alternatively of constant radius.
Where pushing involves a small downward grade (4%~5%), a braking system should be installed to prevent the deck from slipping away uncontrolled, and heavy bracing is then needed at the restraining piers.

Bridge launching demands very careful surveying and setting out, with continuous and precise checks made of deck deflections. A light aluminum or steel launching nose forms the head of the deck to provide guidance over the piers. Special teflon or chrome-nickel steel plate bearings are used to reduce sliding friction to about 5% of the weight; thus slender piers would normally be supplemented with braced columns to avoid cracking and other damage. These columns would generally also support the temporary friction bearings and help steer the nose.

In the case of precast construction, ideally segments should be cast on beds near the abutments and transferred by rail to the post-tensioning bed, the actual transport distance obviously being kept to the minimum. Usually a segment is cast against the face of the previously concreted unit to ensure a good fit when finally glued in place with an epoxy resin. If this procedure is not adopted, gaps of approximately 500 mm should be left between segments, with the reinforcement running through and stressed together to form a complete unit; but when access or space on the embankment is at a premium, it may be necessary to launch the deck intermittently to allow sections to be added progressively. The corresponding prestressing arrangements, both for the temporary and the permanent conditions, would be more complicated, and careful calculations are needed at all positions.

The principal advantage of the bridge-launching technique is the saving in falsework, especially for high decks. Segments can also be fabricated or precast in a protected environment using highly productive equipment. For concrete decks, typically two segments are laid each week (usually 10~30 m in length and perhaps 300 to 400 tonnes in weight) and, after post-tensioning, incrementally launched at about 20 m per day, depending upon the winching/jacking equipment.
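The roughly 5% sliding friction quoted above implies jacking forces that are modest compared with the deck weight; the sketch below works one hypothetical case, with the deck weight and grades being assumed figures rather than values from the text.

```python
# Minimal sketch of the horizontal force needed to launch a deck over low-friction bearings.
# Deck weight and grades are illustrative assumptions.
def launching_force_kn(deck_weight_kn: float, friction: float = 0.05, grade: float = 0.0) -> float:
    """Friction resistance plus the weight component along the longitudinal grade
    (positive grade = launching uphill, negative = downhill)."""
    return deck_weight_kn * (friction + grade)

deck_weight = 60_000.0  # kN, a partly launched deck of roughly 6,000 t
print(f"level launch:  {launching_force_kn(deck_weight):,.0f} kN of jacking force")
print(f"4% downgrade:  {launching_force_kn(deck_weight, grade=-0.04):,.0f} kN "
      f"(close to zero, which is why a braking system is needed)")
```

On a downgrade approaching the friction value, the net pushing force vanishes and the deck would tend to run away, which is the situation the braking system described above is meant to control.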
Balanced Cantilever Construction
Developments in box sections and prestressed concrete led to short segments being assembled or cast in place on falsework to form a beam of full roadway width. Subsequently the method was refined, virtually eliminating the falsework, by using a previously constructed section of the beam to provide the fixing for a subsequently cantilevered section. The principle is demonstrated step-by-step in the example shown in Fig. 1.

In the simple case illustrated, the bridge consists of three spans in the ratio 1:1:2. First the abutments and piers are constructed independently of the bridge superstructure. The segment immediately above each pier is then either cast in situ or placed as a precast unit. The deck is subsequently formed by adding sections symmetrically on either side. Ideally, sections on either side should be placed simultaneously, but this is usually impracticable, and some imbalance will result from the extra segment weight, wind forces, construction plant and materials. When the cantilever has reached both the abutment and the centre of the span, work can begin from the other pier, and the remainder of the deck is completed in a similar manner. Finally the two individual cantilevers are linked at the centre by a key segment to form a single span. The key is normally cast in situ.

The procedure initially requires the first sections above the column, and perhaps one or two on each side, to be erected conventionally, either in in-situ concrete or precast, and temporarily supported while steel tendons are threaded and post-tensioned. Subsequent pairs of sections are added and held in place by post-tensioning, followed by grouting of the ducts. During this phase only the cantilever tendons in the upper flange and webs are tensioned. Continuity tendons are stressed after the key section has been cast in place. The final gap left between the two half-spans should be wide enough to enable the jacking equipment to be inserted. When the individual cantilevers are completed and the key section inserted, the continuity tendons are anchored symmetrically about the centre of the span and serve to resist superimposed loads, live loads, redistribution of dead loads and cantilever prestressing forces.

The earlier bridges were designed on the free-cantilever principle with an expansion joint incorporated at the centre. Unfortunately, settlements, deformations, concrete creep and prestress relaxation tended to produce deflection in each half span, disfiguring the general appearance of the bridge and causing discomfort to drivers. These effects, coupled with the difficulties in designing a suitable joint, led designers to choose a continuous connection, resulting in a more uniform distribution of the loads and reduced deflection. The natural movements were provided for at the bridge abutments using sliding bearings or, in the case of long multi-span bridges, joints at about 500 m centres.

Special Requirements in Advanced Construction Techniques
There are three important areas that the engineering and construction team has to consider:
(1) Stress analysis during construction: Because the loadings and support conditions of the bridge are different from those of the finished bridge, stresses in each construction stage must be calculated to ensure the safety of the structure. For this purpose, realistic construction loads must be used and site personnel must be informed of all the loading limitations. Wind and temperature are usually significant for the construction stages.
(2) Camber: In order to obtain a bridge with the right elevation, the required camber of the bridge at each construction stage must be calculated. Due consideration must be given to creep and shrinkage of the concrete. This kind of calculation, although cumbersome, has been simplified by the use of computers.
(3) Quality control: This is important for any method of construction, but it is more so for the complicated construction techniques. Curing of concrete, post-tensioning, joint preparation, etc. are critical to a successful structure. The site personnel must be made aware of the minimum concrete strengths required for post-tensioning, form removal, falsework removal, launching and other steps of the operation.

Generally speaking, these advanced construction techniques require more engineering work than conventional falsework-type construction, but the saving can be significant.
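A hedged sketch of the camber bookkeeping mentioned under item (2) follows; it uses the common first approximation that long-term deflection equals the elastic value multiplied by (1 + creep coefficient), and every number in it is an illustrative assumption rather than a value from the text.

```python
# Minimal camber sketch: build each stage high by its predicted long-term deflection.
# Stage deflections and the creep coefficient are illustrative assumptions.
def long_term_deflection(elastic_mm: float, creep_coefficient: float) -> float:
    """First approximation: long-term deflection = elastic value x (1 + creep coefficient)."""
    return elastic_mm * (1.0 + creep_coefficient)

stage_elastic_deflections_mm = {   # hypothetical cantilever tip deflections per stage
    "segments 1-4": 12.0,
    "segments 5-8": 35.0,
    "closure and continuity stressing": 55.0,
}
for stage, elastic in stage_elastic_deflections_mm.items():
    camber = long_term_deflection(elastic, creep_coefficient=2.0)
    print(f"{stage}: provide about {camber:.0f} mm of camber")
```

A production camber calculation would track creep, shrinkage and prestress losses stage by stage, which is exactly the cumbersome bookkeeping the text notes has been taken over by computers.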
Legal Environment for Warranty Contracting

Introduction

In the United States, state highway agencies are under increasing pressure to provide lasting and functional transportation infrastructure rapidly and at an optimum life-cycle cost. To meet the challenge, state highway agencies are expected to pursue innovative practices when programming and executing projects. One of these innovative practices is the implementation of long-term, performance-based warranties to shift maintenance liabilities to the highway industry. Use of warranties by state highway agencies began in the early 1990s after the Federal Highway Administration's (FHWA) decision to allow warranty provisions to be included in construction contracts for items over which the contractor had complete control (Bayraktar et al. 2004). Special Experiment Project Number 14 (SEP-14) was created to study the effects of this and other new techniques. Over the past decade, some states have incorporated this innovative technique into their existing programs. Projects have ranged from New Mexico's 20-year warranty for the reconstruction of US550 to smaller-scale projects, such as bridge painting and preventative maintenance jobs.

These projects have met with varying degrees of success, causing some states to broaden the use of warranties, whereas others have abandoned them completely. Several states have sacrificed time and money to fine-tune the use of warranties. On a national level, however, there is still a need for research and for the exchange of ideas and best practices. One area that needs further consideration is the legal environment surrounding the use of warranties. Preliminary use in some states has required changes to state laws and agency regulations, as well as the litigation of new issues. This paper discusses the laws and regulations needed to successfully incorporate warranties into current contracting practices and avoid litigation. The state of Alabama is used as an example of a state considering the use of long-term, performance-based warranties, and proposals for laws and regulations are outlined. The paper presents a flowchart to help an agency determine whether a favorable legal environment exists for the use of warranties.

Warranty Contracting in Highway Construction

A warranty in highway construction, like the warranty for a manufactured product, is a guarantee that holds the contractor accountable for the repair and replacement of deficiencies under his or her control for a given period of time. Warranty provisions were prohibited in federal-aid infrastructure projects until the passage of the Intermodal Surface Transportation Efficiency Act in 1991, because warranty provisions could indirectly result in federal-aid participation in maintenance costs, which at that time were a federal-aid nonparticipating item (FHWA 2004). Under the warranty interim final rule published on April 19, 1996, the FHWA allowed warranty provisions to be applied only to items considered to be within the control of contractors. Ordinary wear and tear, damage caused by others, and routine maintenance remained the responsibility of the state highway agencies (Anderson and Russel 2001). Eleven states participated in the warranty experiment under Special Experiment Project Number 14, referred to as SEP-14, which was created by the FHWA to study the effects of innovative contracting techniques.
Warranty contracting was one of the four innovative techniques that FHWA investigated under SEP-14 and the follow-on SEP-15 program. According to National Cooperative Highway Research Program Synthesis 195 (Hancher 1994), a warranty is defined as a guarantee of the integrity of a product and of the maker's responsibility for the repair or replacement of deficiencies. A warranty is used to specify the desired performance characteristics of a particular product over a specified period of time and to define who is responsible for the product (Blischke 1995). Warranties are typically assigned to the prime contractor, but may be passed down to the paving contractors as pass-through warranties.

The warranty approach in highway construction contrasts sharply with traditional highway contracting practices. Under the standard contracting option, the state highway agencies provide a detailed design and decide on the construction processes and materials to be used. Contractors perform the construction and bear no responsibility for future repairs once the project is accepted. Stringent quality control and inspection are necessary to make sure that contractors comply with the specifications and the design. The warranty approach, usually used with performance-based specifications, changes almost every step in the standard contracting system. The changes go beyond the manner in which projects are bid, awarded, and constructed. Most important, contractors are bound by the warranty and are required to come back to repair and maintain the highway whenever certain threshold values are exceeded. In return for the shift in responsibility, contractors are given the freedom to select construction materials, methods, and even mix designs.

Legal Assessment Framework for Warranty Contracting

As public-sector organizations, state highway agencies must follow state laws and proper project procurement procedures. State legislation affecting state highway agencies includes statutes on public works, highways and roads, state government, and special statutes. These statutes define the general responsibilities and liabilities of the state highway agency and must be investigated before a state highway agency moves to any innovative contracting method. Additionally, the state highway agency may develop appropriate regulatory standards and procedures tailored to meet special needs. State highway agencies should also investigate and assess warranty contracting and construction practices.

In order to develop a legal and contractual framework against which to evaluate the state of Alabama and other states not active in warranty contracting, the writers reviewed the statutes of numerous states that are active in warranty contracting. Ohio, Michigan, Minnesota, Florida, Texas, Illinois, Montana, and others have all been more or less active in warranty contracting. Their statutes were reviewed, as well as the specifications they use for measuring actual road performance against warranted performance. Numerous national studies were also reviewed. The writers determined that, regardless of whether warranties are imposed by legislative mandate or initiated by a state DOT or other body, there are three elements consistently found in successful programs, and these elements often require modification of the existing statutes. These three elements are design-build contracting, bidding laws that allow for flexibility and innovation, and realistic bonding requirements. Given those elements as a starting point, the actual contract specifications must address when the warranty period commences, the inspection frequency, clear defect definitions, allocation of responsibility for repair, emergency maintenance, circumstances that void the warranty, and dispute resolution.

The foregoing statutes and regulations are termed the legal assessment framework for performance warranties. The three broad steps in the framework (initiation of warranty contracting, statute assessment, and regulatory assessment) are discussed in detail in the following sections.
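As a concrete illustration of the contract-specification items just listed, the sketch below models a hypothetical performance warranty for an asphalt pavement as a small data structure with a threshold check. The distress measures, threshold values and field names are invented for illustration only; actual thresholds and defect definitions would come from the agency's warranty specification.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical pavement performance warranty; all thresholds are assumed
# placeholders, not values from any state specification.
@dataclass
class PerformanceWarranty:
    commencement: date              # when the warranty period starts
    years: int                      # warranty duration
    inspection_interval_months: int
    max_rut_depth_mm: float         # threshold distress values
    max_iri_m_per_km: float         # ride quality (International Roughness Index)
    max_cracking_percent: float

    def expired(self, today: date) -> bool:
        end = self.commencement.replace(year=self.commencement.year + self.years)
        return today >= end

    def contractor_repair_required(self, today, rut_mm, iri, cracking_pct) -> bool:
        """Contractor must return to repair if any threshold is exceeded during
        the warranty period; ordinary wear, third-party damage and routine
        maintenance remain the agency's responsibility."""
        if self.expired(today):
            return False
        return (rut_mm > self.max_rut_depth_mm
                or iri > self.max_iri_m_per_km
                or cracking_pct > self.max_cracking_percent)

w = PerformanceWarranty(date(2025, 6, 1), 5, 12, 10.0, 1.5, 10.0)
print(w.contractor_repair_required(date(2027, 6, 1), rut_mm=12.0, iri=1.2, cracking_pct=4.0))
```

Writing the specification items down in this explicit form also makes clear where disputes can arise, for example over who measures the distress values and what voids the warranty.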
Initiation of Warranties

Several states initiated the use of warranties as a result of a legislative mandate. For example, in 1999 the Illinois legislature passed a bill that required 20 of the projects outlined in the Illinois Department of Transportation's Five-Year Plan to include 5-year performance warranties (IDOT 2004). Ten of those projects were to be designed to have 35-year life cycles (Illinois Compiled Statutes Ch. 605, §5/4-410). Also in 1999, Ohio began using warranties due to a legislative mandate that required a minimum of one-fifth of road construction projects to be bid with a warranty. According to Ohio Revised Code §5525.25, the requirements were later changed at the suggestion of the highway agency to turn the minimums into maximums, so that the agency could spend more time evaluating what types of projects are best suited for warranties (ODOT 1999). The warranties were to range from 2 to 7 years, depending on the type of construction. Finally, in a less demanding mandate, Michigan Compiled Laws §247.661, in a state highway funds appropriation bill, included the instruction that "the Department [of Transportation] shall, where possible, secure warranties of not less than five-year, full replacement guarantee for contracted construction work." These types of mandates generally require the agency first to come up with an outline of how it plans to incorporate the directives into existing procedures and specifications, as well as to prepare reports regarding the success of these programs and their cost effectiveness.

Alternatively, some agencies begin the use of warranties on their own initiative. In Texas, the State Comptroller's Office issued a report on the Department of Transportation's (DOT) operations and strongly recommended the use of more innovative methods, including warranties, to better meet the transportation needs of the state (Strayhorn 2001). As a result, the Texas Transportation Institute commenced its own investigation of warranties and developed an implementation plan for the Texas DOT (Anderson et al. 2006). One of the reasons cited for the study was the potential for a future legislative mandate and the need to research the area before the agency was forced to make use of warranties. Montana acted without any government influence by initiating a bill (Bill Draft No. LC0443) that called for the formation of a committee to study the feasibility of design-build and warranty contracting. This committee was to include members of the House and Senate, Department of Transportation officials, representatives from contractors' associations, and a representative of the general public, and it would submit a report to the Office of Budget and Program Planning.
This bill was not enacted, but the Department continued its efforts by preparing a report containing specific suggestions as to how Montana could implement warranties on future highway construction projects (Stephens et al. 2002).

Like Texas and Montana, most states have made their own investigations into the use of performance-based warranties. Generally, state highway agencies have worked with research teams, contractors and industry associations to extensively evaluate the feasibility of warranted projects. Although a political push may sometimes be needed to encourage the use of innovative methods, states that begin researching new ideas on their own may have more time to carefully select the best uses for these innovations. As exemplified by Ohio, which found it infeasible to meet the existing legislative mandates, states may have to amend the legislation later, indicating that the legislature may not be best suited to make the first move.

Statutory Assessment

As pointed out earlier, statutes regarding public works, public transportation, state government, and other related areas should be evaluated in terms of the legal environment for warranty contracting. The three major areas of related legislation are project delivery, public bidding procedures, and bonding requirements.

Legislation Regarding Design-Build Project Delivery

Historically, contractors are told what materials to use and how to use them in a construction project. State personnel oversee the construction and perform continuous quality assurance testing to ensure that the contractor is following the specifications. Legislation may restrict a state to this process, which does not allow for the increased contractor control that the use of a warranty may dictate. Several transportation agencies have explicit authorization for design-build contracting methods. For instance, Ohio Revised Code §5517.011 allows for a value-based selection process in which technical proposals can be weighted and the bid awarded to the contractor with the lowest adjusted price. These projects may be limited to a specific type of construction, such as tollway or bridge projects, or by the dollar amount of design-build contracts that may be awarded annually. Oregon Revised Statute §383.005 allows tollway contracts to be awarded considering cost, design, quality, structural integrity, and experience. Wisconsin Statute §84.11(5n) allows certain bridge projects to be bid under design-build after a prequalification process and the assessment of a variety of factors by the Department of Transportation and the governor. In Ohio, however, Revised Code §5517.011 limits design-build contracts to $250 million biennially.

Other statutes are more general, simply stating that public agencies are permitted to use design-build contracting methods (e.g., Idaho Code §67-2309). In states where design-build contracts are specifically outlawed by statute (e.g., Tenn. Code §4-5-102), the agency has few options. In Texas, where design-build is not allowed, the agency has implemented a rigid, multistep prequalification process in an effort to factor in the advantages one contractor may have over another while still complying with the traditional design-bid-build laws (Strayhorn 2001). Design-build and warranties seem to go hand in hand, allowing less agency interaction from the beginning of the project and more confidence in the contractor's ability to fulfill the warranty requirements. However, the proper statutes need to be in place for an agency to utilize this innovative contracting method.
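The value-based selection allowed under Ohio Revised Code §5517.011, in which technical proposals are weighted and the award goes to the lowest adjusted price, can be illustrated with a short sketch. The adjustment formula and the bid figures below are hypothetical; actual formulas vary by agency and are defined in the request for proposals.

```python
# Hypothetical "lowest adjusted price" comparison for a value-based selection.
# The adjustment (price divided by a normalized technical score) is an assumed
# illustration, not the formula prescribed by any statute.

bids = [
    # (bidder, bid price in $, technical score out of 100)
    ("Contractor A", 9_800_000, 92),
    ("Contractor B", 9_200_000, 74),
    ("Contractor C", 10_400_000, 97),
]

def adjusted_price(price, tech_score, max_score=100):
    """Scale the price by the technical score so stronger proposals compete
    on better terms; the lowest adjusted price wins."""
    return price * (max_score / tech_score)

ranked = sorted(bids, key=lambda b: adjusted_price(b[1], b[2]))
for name, price, score in ranked:
    print(f"{name}: bid ${price:,}, score {score}, adjusted ${adjusted_price(price, score):,.0f}")
print(f"Award to {ranked[0][0]} (lowest adjusted price)")
```

In this example the lowest raw bid does not win, which is exactly the flexibility that strict lowest-bidder statutes can preclude.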
Legislation of Public Bidding Procedures

The use of warranties and other innovative contracting methods may not fit cleanly within existing bidding procedures for public contracts. If the request for proposals details the project in terms of performance-based specifications, bidding laws must account for the different methods and materials proposed by bidders. Traditionally, bidding laws require an agency to solicit bids through a competitive, sealed bidding process and to award the contract to the "lowest responsible bidder." Exceptions to the lowest-bidder rule are sometimes built into statutes, but the more common exceptions only allow an agency to reject all bids if they are all unreasonable or when it is in the interest of the awarding authority to reject all bids (e.g., Alabama Code §39-2-6(c)). However, the "lowest responsible bidder" language presents a way for a state to avoid contracting with simply the lowest pecuniary bidder, which may better serve the goals of the project.

Application of Assessment Framework to Alabama

The proposed assessment framework was used to investigate the laws and regulations necessary in Alabama to successfully incorporate warranties into current contracting practices while, at the same time, avoiding litigation. Currently, the state of Alabama has no legislative directive requiring the use of warranties. Therefore, the Alabama DOT, working with the surety industry, contractors and academics, will need to develop a plan if it intends to implement warranties. In doing so, the agency should look at statutes that may impede the use of warranties. Please refer to the Appendix for a list of Alabama statutes.
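The flowchart idea mentioned in the Introduction (determining whether a favorable legal environment exists before committing to warranty contracting) can be expressed as a short decision sketch. The check names and pass/fail logic below are a simplified reading of the three statutory elements discussed above, not the authors' published flowchart, and the state profile shown is hypothetical rather than actual Alabama data.

```python
# Simplified sketch of the legal-environment assessment described above.
# The three checks mirror the elements identified in the text: design-build
# authority, bidding flexibility, and realistic bonding requirements.

def assess_legal_environment(state):
    issues = []
    if not state.get("design_build_authorized", False):
        issues.append("no statutory authority for design-build delivery")
    if not state.get("bidding_allows_best_value", False):
        issues.append("bidding law limited to the lowest pecuniary bidder")
    if not state.get("bonding_requirements_realistic", False):
        issues.append("bonding requirements impractical for multi-year warranties")
    return ("favorable" if not issues else "statutory changes needed", issues)

# Hypothetical profile for a state considering warranties.
candidate_state = {
    "design_build_authorized": False,
    "bidding_allows_best_value": True,   # e.g., via "lowest responsible bidder" language
    "bonding_requirements_realistic": False,
}

verdict, gaps = assess_legal_environment(candidate_state)
print(verdict)
for gap in gaps:
    print(" -", gap)
```

The value of such a checklist is that it points the agency to the specific statutes that must be amended before warranty provisions can be used without inviting litigation.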