5. Foreign Original Text
73rd Annual Meeting of ICOLD
Tehran, Iran, May 1-6, 2005
Paper No. 012-W4

REVIEW OF SEISMIC DESIGN CRITERIA OF LARGE CONCRETE AND EMBANKMENT DAMS

Martin Wieland
Chairman, ICOLD Committee on Seismic Aspects of Dam Design, c/o Electrowatt-Ekono Ltd. (Jaakko Pöyry Group), Hardturmstrasse 161, CH-8037 Zurich, Switzerland

ABSTRACT

Of all the actions that dams have to resist from the natural and man-made environment, earthquakes probably pose the greatest challenge to dam engineers, as earthquake ground shaking affects all structures (dam body, grout curtains, diaphragm walls, underground structures and different appurtenant structures) and all components (hydromechanical, electromechanical, etc.) at the same time (ICOLD, 2001; ICOLD, 2002). Thus all these elements have to be able to resist some degree of earthquake action. This also applies to temporary structures like cofferdams. To specify the degree of earthquake action these components must be able to resist, an assessment of the consequences of possible failure scenarios is needed.

Little experience exists with the seismic performance of modern concrete-face rockfill dams, which are being built in increasing numbers today. A qualitative assessment of these dams is given.

Keywords: concrete dams, embankment dams, concrete-face rockfill dams, CFRD, grout curtain, earthquake design criteria for dams

INTRODUCTION

Since the 1971 San Fernando earthquake in California, major progress has been achieved in the understanding of earthquake action on concrete dams, mainly due to the development of computer programs for the dynamic analysis of dams. However, it is still not possible to reliably predict the behaviour of dams during very strong ground shaking, owing to the difficulty of modelling joint opening and crack formation in the dam body, the nonlinear behaviour of the foundation, insufficient information on the spatial variation of ground motion at arch dams, and other factors. The same applies to embankment dams, where the results of inelastic dynamic analyses performed by different computer programs tend to differ even more than in the case of concrete dams. Considerable progress has also been made in the definition of the seismic input, which is one of the main uncertainties in the seismic design and seismic safety evaluation of dams.

It has been recognized that during strong earthquakes such as the maximum credible earthquake (MCE), the maximum design earthquake (MDE) or the safety evaluation earthquake (SEE), ground motions can occur that exceed those used for the design of large dams in the past. Even a moderate shallow-focus earthquake with a magnitude of, say, 5.5 to 6 can cause peak ground accelerations (PGA) of 0.5 g. However, the duration of strong ground shaking of such events is quite short, and the predominant frequencies of the acceleration time history are rather high. Therefore, smaller concrete dams may be more vulnerable to such actions than high dams, whose predominant eigenfrequencies are lower than those of such ground acceleration records.

We have to recognize that most existing dams were designed against earthquake actions using the pseudostatic approach with an acceleration of 0.1 g. In regions of high seismicity like Iran, the PGA of the SEE may exceed 0.5 g at many dam sites.
Therefore, some damage may have to be expected in concrete dams designed by a pseudostatic method with a seismic coefficient of 0.1. Because of the large difference between the design acceleration and the PGA, and because of the uncertainties in estimating the ground motion of very strong earthquakes at a dam site, mechanisms are needed to ensure that a dam will not fail even if the design acceleration is exceeded substantially.

In the case of large dams, ICOLD recommends using the MCE as the basis for dam safety checks and dam design. Theoretically, no ground motion should occur that exceeds the MCE. However, in view of the difficulties in estimating the ground motion at a dam site, it is still possible that larger ground motions may occur. Some 50 years ago, many structural engineers considered a value of ca. 0.2 g to be the upper bound of the PGA; today, with more earthquake records available, the upper bound has exceeded 1 g, and some important structures have already been checked against such high ground accelerations.

1. SEISMIC DESIGN CRITERIA FOR LARGE DAM PROJECTS

According to ICOLD Bulletin 72 (1989), large dams have to be able to withstand the effects of the MCE. This is the strongest earthquake that could occur in the region of a dam and is considered to have a return period of several thousand years (typically 10,000 years in regions of low to moderate seismicity).

By definition, the MCE is the largest event that can be expected to affect the dam. This event can be very powerful and can happen close to the dam. The designer must take into account the motions resulting from any earthquake at any distance from the dam site, and possible movement of the foundation if a potentially active fault crosses the dam site. Having an active fault in the foundation is sometimes unavoidable, especially in highly seismically active regions, and should be considered one of the most severe design challenges, requiring special attention.

It has to be kept in mind that each dam is a prototype structure and that the experience gained from the seismic behaviour of other dams has limited value; therefore, observations have to be combined with sophisticated analyses, which should reflect reality as closely as possible.

We also have to realize that earthquake engineering is a relatively young discipline with plenty of unsolved problems. Every time there is another strong earthquake, some new, unexpected phenomena are likely to emerge, with implications for regulations and codes. This is particularly true for dams, as very few modern dams have actually been exposed to very strong ground motions.

As mentioned earlier, the time of pseudostatic design with a seismic coefficient of 0.1 has long passed. This concept was much liked by designers because the small seismic coefficients did not require any special analyses and the seismic requirement could easily be satisfied. As a result, the seismic load case was usually not the governing one. This situation has changed, and the earthquake load case has become the governing one for most high-risk (dam) projects, especially in regions of moderate to high seismicity.

As a general guideline, the following minimum seismic requirements should be observed:

(i) Dam and safety-relevant elements: design earthquakes (i.e. the
operating basis earthquake (OBE) and the SEE) are determined based on a seismic hazard study and are usually represented by appropriate response spectra (and the PGA or the effective peak acceleration).

(ii) All appurtenant structures and non-safety-relevant elements of the dam: use of local seismic building codes (including seismic zonation, importance factor, and local soil conditions) if no other regulations are available. However, the seismic design criteria should not be less than those given in building codes. As shown in Fig. 1, the earthquake damage to appurtenant structures can be very severe.

(iii) Temporary structures and construction phase: the PGA for temporary structures and the construction phase should be of the order of 50% of the PGA of the design earthquake of the seismic building code. For important cofferdams a separate seismic hazard assessment may be needed. According to Eurocode 8 (2004), the design PGA for temporary structures and the construction phase, PGA_c, can be taken as

PGA_c = PGA (t_rc / t_ro)^k

where PGA is according to the building code, t_ro = 475 years (probability of exceedance of 10% in 50 years), and for k a value between 0.3 and 0.4 can be used depending on the seismicity of the region, with

t_rc ≈ t_c / p

where t_c is the duration of the construction phase and p is the acceptable probability of exceedance of the design seismic event during this phase; typically a value of p = 0.05 is selected. Therefore, assuming k = 0.35, a construction phase of an appurtenant structure of 3 years results in PGA_c = 0.48 PGA, and for a cofferdam that may stay in place for 10 years we obtain PGA_c = 0.74 PGA. This numerical example shows that the seismic action for temporary structures and construction phases can be quite substantial. In many cases the effects of seismic action during dam construction have been underestimated or ignored.
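The arithmetic behind these reduction factors is easy to verify. The following minimal sketch (plain C; the function name and layout are illustrative, the parameter values are those quoted above) evaluates the Eurocode 8 relation for the two cases in the text:

```c
/* Reduction factor PGA_c / PGA for temporary structures, after the
   Eurocode 8 relation quoted above: PGA_c = PGA * (t_rc / t_ro)^k,
   with t_rc = t_c / p. Function and variable names are illustrative. */
#include <stdio.h>
#include <math.h>

double pga_reduction(double tc_years, double p, double k, double tro_years)
{
    double trc = tc_years / p;   /* return period assigned to the phase */
    return pow(trc / tro_years, k);
}

int main(void)
{
    /* Values from the text: k = 0.35, p = 0.05, t_ro = 475 years. */
    printf("3-year construction phase: PGA_c = %.2f PGA\n",
           pga_reduction(3.0, 0.05, 0.35, 475.0));    /* -> 0.48 */
    printf("10-year cofferdam:         PGA_c = %.2f PGA\n",
           pga_reduction(10.0, 0.05, 0.35, 475.0));   /* -> 0.74 */
    return 0;
}
```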
In general, building codes exclude dams, power plants, hydraulic structures, underground structures and other appurtenant structures and equipment. It is expected that separate codes or regulations cover these special infrastructure projects; however, only a few countries have such regulations. Therefore, either the ICOLD Bulletins (ICOLD Bulletin 123, 2002) or the (local) seismic building codes (Eurocode 8, 2004) are used as a reference. The seismic building codes are very useful reference documents for checking the design criteria of a dam. As the return period of the SEE of a large dam is usually much longer than the return period of the design earthquake for buildings, which in many parts of the world is taken as 475 years, the PGA of the SEE should be larger than that of the 475-year design earthquake for buildings multiplied by the importance factor for high-risk projects. If this basic check is not satisfied, then a building located at the dam site would have to be designed for stronger ground motions than the dam itself. In such controversial situations, further investigations and justifications are needed.

In the past, a pseudostatic acceleration of 0.1 g was used for many dams. This acceleration can be taken as an effective ground acceleration, which is ca. 2/3 of the PGA. Therefore, the SEE for a dam with large damage potential should not have a PGA of less than, say, 0.15 g, as this would be less than what was used in the past. However, this is mainly a concern for dam projects in regions of low to moderate seismicity (Wieland, 2003).

For the seismic design of equipment, the acceleration response calculated at the location of the equipment (so-called floor response spectra) shall be used. It should be noted that the peak acceleration at the equipment location is normally larger than the PGA. For example, the radial acceleration at the crest of an arch dam is about 5 times larger than the PGA for the SEE (nonlinear dynamic response of the dam) and up to 8 times larger in the case of the OBE (amplification values of up to 13 were measured at Satsunai concrete gravity dam during the Tokachi-Oki earthquake of 26 September 2003 in Japan).

2. SEISMIC SAFETY ASPECTS AND SEISMIC PERFORMANCE CRITERIA

Basically, the seismic safety of a dam depends on the following factors (Wieland, 2003):

(1) Structural safety: site selection; optimum dam type and shape; construction materials and quality of construction; stiffness to control static and dynamic deformations; strength to resist seismic forces without damage; capability to absorb high seismic forces by inelastic deformations (opening of joints and cracks in concrete dams; movements of joints in the foundation rock; plastic deformation characteristics of embankment materials); stability (sliding and overturning stability), etc.

(2) Safety monitoring and proper maintenance: strong-motion instrumentation of dam and foundation; visual observations and inspection after an earthquake; data analysis and interpretation; post-earthquake safety assessment, etc. (Dams should be maintained properly, including periodic inspections.)

(3) Operational safety: rule curves and operational guidelines for the post-earthquake phase; experienced and qualified dam maintenance staff, etc.

(4) Emergency planning: water alarm; flood mapping and evacuation plans; safe access to dam and reservoir after a strong earthquake; lowering of the reservoir; engineering back-up, etc.

These basic safety elements are almost independent of the type of hazard. In general, dams that can resist the strong ground shaking of the MCE will also perform well under other types of loads.

In the subsequent sections, the emphasis is put on the structural safety aspects, which can be improved by structural measures. Safety monitoring, operational safety and emergency planning are non-structural measures, as they do not directly reduce the seismic vulnerability of the dam.

For the seismic design of dams, abutments and safety-relevant components (spillway gates, bottom outlets, etc.), the following types of design earthquakes are used (ICOLD, 1989):

• Operating Basis Earthquake (OBE): The OBE is used to limit the earthquake damage to a dam project and is therefore mainly a concern of the dam owner. Accordingly, there are no fixed criteria for the OBE, although ICOLD has proposed an average return period of ca. 145 years (50% probability of exceedance in 100 years). Sometimes return periods of 200 or 500 years are used. The dam shall remain operable after the OBE, and only minor, easily repairable damage is accepted.

• Maximum Credible Earthquake (MCE), Maximum Design Earthquake (MDE) or Safety Evaluation Earthquake (SEE): Strictly speaking, the MCE is a deterministic event; it is the largest reasonably conceivable earthquake that appears possible along a recognized fault or within a geographically defined tectonic province, under the presently known or presumed tectonic framework.
In practice, however, due to the problems involved in estimating the corresponding ground motion, the MCE is usually defined statistically, with a typical return period of 10,000 years for countries of low to moderate seismicity; thus the terms MDE or SEE are used as substitutes for the MCE. The stability of the dam must be ensured under the worst possible ground motions at the dam site, and no uncontrolled release of water from the reservoir shall take place, although significant structural damage is accepted. In the case of significant earthquake damage, the reservoir may have to be lowered.

Historically, the performance criteria for dams and other structures have evolved from the observation of damage and/or experimental investigations. The performance criteria for dams during the OBE and MCE/SEE are of a very general nature and have to be considered on a case-by-case basis.

3. EARTHQUAKE DESIGN ASPECTS OF CONCRETE DAMS

Several design details are regarded as contributing to a favourable seismic performance of arch dams (ICOLD, 2001; Wieland, 2002), i.e.:

• Design of a dam shape with symmetrical and anti-symmetrical mode shapes that are excited by along-valley and cross-canyon components of ground shaking.

• Maintenance of continuous compressive loading along the foundation, by shaping of the foundation, by thickening of the arches towards the abutments (fillets), or by a plinth structure to support the dam and transfer load to the foundation.

• Limiting the crest length-to-height ratio, to assure that the dam carries a substantial portion of the applied seismic forces by arch action and to reduce the risk that nonuniform ground motions excite higher modes and lead to undesired stress concentrations.

• Providing contraction joints with adequate interlocking.

• Improving the dynamic resistance and consolidation of the foundation rock by appropriate excavation, grouting, etc.

• Provision of well-prepared lift surfaces to maximize bond and tensile strength.

• Increasing the crest width to reduce high dynamic tensile stresses in the crest region.

• Minimizing unnecessary mass in the upper portion of the dam that does not contribute effectively to the stiffness of the crest.

• Maintenance of low concrete placing temperatures to minimize initial, heat-induced tensile stresses and shrinkage cracking.

• Development and maintenance of a good drainage system.

The structural features that improve the seismic performance of gravity and buttress dams are essentially the same as those for arch dams. Earthquake observations have shown that a break in slope on the downstream face of gravity and buttress dams should be avoided, to eliminate local stress concentrations and cracking under moderate earthquakes. The webs of buttresses should be sufficiently massive to prevent damage from cross-canyon earthquake excitations.

4. DYNAMIC ANALYSIS ASPECTS OF CONCRETE DAMS

The main factor governing the dynamic response of a dam subjected to small-amplitude ground motions can be summarized under the term damping. Structural damping ratios obtained from forced and ambient vibration tests are surprisingly low: damping ratios of the lowest modes of vibration are of the order of 1 to 2% of critical, and these field measurements already include the effect of radiation damping in the foundation and the reservoir. Linear-elastic analyses of dam-foundation-reservoir systems, by contrast, would suggest damping ratios of about 10% for the lowest modes of vibration and even higher values for the higher modes.
Under earthquake excitation, the dynamic stresses in an arch dam might then be a factor of 2 to 3 smaller than those obtained from an analysis with 5% damping in which the reservoir is assumed incompressible and the dynamic interaction with the foundation is represented by the foundation flexibility only (massless foundation). The dam engineer may therefore be willing to invest more time in a sophisticated dynamic interaction analysis in order to reduce the computed seismic response of an arch dam. Unfortunately, there is a lack of observational evidence that would justify the use of large damping ratios in seismic analyses of concrete dams.

Under strong ground shaking caused by the MCE or SEE, tensile stresses are likely to occur that exceed the dynamic tensile strength of mass concrete. In addition, there are contraction joints and lift joints, whose tensile strength properties are inferior to those of the parent mass concrete. Therefore, in the highly stressed upper portion of an arch dam, the contraction joints will start to open first and cracks will develop along the lift joints (this is consistent with the observed seismic behaviour of Sefid Rud dam). Along the upstream and downstream contacts between the dam and the foundation rock, local stress concentrations will occur, leading to the formation of cracks in the concrete and the foundation rock. Little information exists on this type of crack, but such cracks are also likely to develop at the upstream heel of the dam under the hydrostatic water load. Thus, the dynamic deformations of the dam will occur mainly at these contraction and lift joints and along a few cracks in the mass concrete. The remaining parts of the dam will behave more or less like rigid bodies and will exhibit relatively small dynamic tensile stresses. Joint opening and crack formation will also lead to higher compressive stresses in the dam. However, these dynamic compressive stresses are unlikely to cause any damage, so the engineer can focus on tensile stresses only. For the analysis of joint opening, numerical models other than those used for the linear-elastic dynamic analysis of a dam are needed.

Two concepts are used for the nonlinear analysis of a cracked dam: the smeared crack approach and the discrete crack approach. The main advantages of discrete crack models are their simplicity (only information on the strength properties of the joints is needed) and their ability to model the observed behaviour of dams during strong ground shaking. Further developments are still needed.

In nonlinear dam models, radiation damping may play a less prominent role than in linear-elastic analysis, where this issue is still controversial. Once cracks along joints are fully developed, the viscous damping in the cracked region of a dam may be replaced by hysteretic damping in the joints. As these damping mechanisms are still not well understood, or are too complex to be considered in practical analyses, it is recommended to perform a sensitivity analysis in which the effect of damping on the dynamic response of the dam is investigated.

The dynamic tensile strength of mass concrete, f_t, is another key parameter in the seismic safety assessment of concrete dams.
It depends on the following factors:

• the uniaxial compressive strength of mass concrete, f_c'; different correlations between f_t and f_c' have been proposed in the literature;

• the age effect on concrete strength (it may be assumed that the OBE or SEE occurs when the dam has reached about one-third of its design life);

• the strain-rate effect (under earthquake loading, the tensile strength increases); and

• the size effect (the tensile strength depends on the fracture toughness of mass concrete and the thickness of the dam).

If the size effect is considered, the dynamic tensile strength of mass concrete in relatively thick arch-gravity dams drops below 3 to 4 MPa.

An additional factor, hardly ever considered in arch dam analyses, is the spatial variation of the earthquake ground motion.

The criteria for assessing the safety of the dam foundation during strong ground shaking need further improvement. Today, the dam analyst delivers the seismic abutment forces to the geotechnical engineer, who performs rock stability analyses.

5. ASSESSMENT OF SEISMIC DESIGN OF EMBANKMENT DAMS

Basically, the seismic safety and performance of embankment dams are assessed by investigating the following aspects (Wieland, 2003):

• permanent deformations experienced during and after an earthquake (e.g. loss of freeboard);

• stability of slopes during and after the earthquake, and dynamic slope movements;

• build-up of excess pore water pressures in embankment and foundation materials (soil liquefaction);

• damage to filter, drainage and transition layers (i.e. whether they will function properly after the earthquake);

• damage to waterproofing elements in dam and foundation (core, upstream concrete or asphalt membranes, geotextiles, grout curtain, diaphragm walls in the foundation, etc.);

• vulnerability of the dam to internal erosion after the formation of cracks and limited sliding movements of embankment slopes, or the formation of loose material zones due to high shear (shear bands), etc.;

• vulnerability of hydromechanical equipment to ground displacements and vibrations, etc.;

• damage to intake and outlet works (the release of water from the reservoir may be endangered).

The dynamic response of a dam during strong ground shaking is governed by the deformational characteristics of the different soil materials. Therefore, most of the above factors are directly related to deformations of the dam.

Liquefaction phenomena are a major problem for tailings dams and for small earth dams that are constructed of, or founded on, relatively loose cohesionless materials, are used for irrigation and water supply schemes, and have not been designed against earthquakes. Liquefaction susceptibility can be assessed with relatively simple in situ tests; for example, empirical relations exist between SPT blow counts and liquefaction susceptibility for different earthquake ground motions, which are characterized by the number of stress cycles and the ground acceleration.

For large storage dams, the earthquake-induced permanent deformations must be calculated. Damage categories are expressed, for example, in terms of the ratio of crest settlement to dam height. Calculations of the permanent settlement of large rockfill dams based on dynamic analyses are still very approximate, as most dynamic soil tests are carried out with a maximum aggregate size of less than 5 cm. This is a particular problem for rockfill dams and other dams with large rock aggregates, and for dams where the shell materials containing coarse rock aggregates were not compacted at the time of construction. Poorly compacted rockfill may settle significantly during strong ground shaking but may well withstand strong earthquakes. To obtain information on the dynamic material properties, dynamic direct shear or triaxial tests with large samples are needed. These tests are too costly for most rockfill dams; but as published information on the dynamic behaviour of rockfill is also scarce, settlement prediction involves sensitivity analyses and engineering judgment.
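The paper does not prescribe a calculation method for these permanent deformations. One widely used first estimate is the Newmark rigid sliding-block procedure, in which relative velocity and slip are integrated whenever the ground acceleration exceeds a yield acceleration. The minimal sketch below (plain C) is illustrative only: the yield acceleration, the synthetic input record and all parameter values are assumptions, not values from the text.

```c
/* Newmark rigid sliding-block estimate of earthquake-induced permanent
   displacement (sketch). Assumptions: constant yield acceleration ky,
   one horizontal acceleration record a[] in m/s^2 sampled every dt s,
   one-directional (downslope) sliding only. */
#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

double newmark_displacement(const double *a, int n, double dt, double ky)
{
    double v = 0.0, d = 0.0;              /* relative velocity (m/s), slip (m) */
    int i;
    for (i = 0; i < n; i++) {
        double excess = a[i] - ky;        /* acceleration in excess of yield   */
        if (v > 0.0 || excess > 0.0) {    /* block is (or starts) sliding      */
            v += excess * dt;             /* integrate relative velocity       */
            if (v < 0.0) v = 0.0;         /* sliding stops; no back-slip       */
            d += v * dt;                  /* accumulate permanent slip         */
        }
    }
    return d;
}

int main(void)
{
    /* Illustrative input: 10 s of a 2 Hz sinusoid with 0.5 g peak and an
       assumed yield acceleration of 0.1 g; both are placeholders. */
    enum { N = 1000 };
    static double a[N];
    const double dt = 0.01, g = 9.81;
    int i;
    for (i = 0; i < N; i++)
        a[i] = 0.5 * g * sin(2.0 * PI * 2.0 * i * dt);
    printf("permanent displacement = %.3f m\n",
           newmark_displacement(a, N, dt, 0.1 * g));
    return 0;
}
```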
Transverse cracking as a result of deformations is an important aspect. Cracks could cause the failure of embankment dams that do not have filter, drain and transition zones; that have filter, drain and transition zones which do not extend above the reservoir water surface; or that were not designed using modern filter criteria.

6. SEISMIC ASPECTS OF CONCRETE-FACED ROCKFILL DAMS

The seismic safety of concrete-faced rockfill dams (CFRDs) is often assumed to be superior to that of conventional rockfill dams with an impervious core. However, the crucial element in CFRDs is the behaviour and performance of the concrete slab during and after an earthquake.

The settlements of a rockfill dam caused by the MCE or SEE are rather difficult to predict and depend on the type of rockfill and its compaction during dam construction. Depending on the valley section, the dam deformations will also be non-uniform along the upstream face, causing differential support movements of the concrete face, local buckling in compression zones, etc.

In many cases, embankment dams are analysed with the equivalent linear method using a two-dimensional model of the highest dam section. In such a seismic analysis, only reversible elastic deformations and stresses are calculated, which are small and do not cause high dynamic stresses in the concrete face. These simple models have to be complemented by models that also include the cross-canyon component of the earthquake ground motion as well as the inelastic deformations of the dam body. For such a dynamic analysis, a three-dimensional dam model has to be used, and the interface between the concrete face and the soil transition zones must be modelled properly.

Because the deformational behaviour of the concrete slab, which acts as a rigid diaphragm for vibrations in the cross-canyon direction, is very different from that of the rockfill and transition zone material, the cross-canyon response of the rockfill may be restrained by the relatively rigid concrete slab. This may result in high in-plane stresses in the concrete slab. The seismic forces that can be transferred from the rockfill to the concrete slab are limited by the friction forces between the transition zone of the rockfill and the concrete slab. Because the whole water load is supported by the concrete slab, these friction forces are quite high, and the in-plane stresses in the concrete slab may therefore be large enough to cause local buckling, to shear off the slab along the joints, or to damage the plinth.

Although this is still a hypothetical scenario, it is necessary to look carefully into the behaviour of the concrete face under the cross-canyon component of the earthquake ground shaking. It is therefore not so obvious that CFRDs are better able to cope with strong earthquakes than conventional embankment dams.
The main advantage of CFRDs is their resistance to erosion if water seeps through a cracked face.

As experience with the seismic behaviour of CFRDs is still very limited, more effort has to be devoted to studying the seismic behaviour of these dams (Wieland, 2003; Wieland and Brenner, 2004).

7. SEISMIC ASPECTS OF DIAPHRAGM WALLS AND GROUT CURTAINS

7.1 Diaphragm Walls

Diaphragm walls are used as waterproofing elements in embankment dams on soil or on very pervious rock, and used to be made of ordinary reinforced concrete; today, preference is given to plastic concrete. The wall should have a stiffness of similar magnitude to that of the surrounding soil or rock, in order to prevent it from attracting load when the soil deforms after dam construction. The dynamic stiffness of both the wall and the surrounding soil or rock, however, is higher than the corresponding static value.

Although earthquakes may still cause significant dynamic stresses in a plastic concrete cut-off wall, sufficient ductility of the plastic concrete will minimize the formation of cracks. The highest stresses in the wall are expected to be caused by seismic excitations in the cross-canyon direction.
The Mandel House, now completed, was published again in Architectural Forum in August 1935. The article, spanning ten pages, was a significant commitment to an unknown architect and his first independent commission. Howard Myers' article on the Mandel House launched Stone's career. As the article stated, "Conceived by three young men—one of whom was the client, the Richard Mandel house is a vital expression of the aspirations of a young up-and-coming group. The designer, Edward D. Stone, has boldly and unhesitatingly translated a theory and scheme of living into the physical form of a house in which to live." (Myers would later be widely credited with reviving Frank Lloyd Wright's career when he published an issue of Forum devoted to him.)

Stone's work on the Mandel House led to another residential commission in Mount Kisco, for Ulrich and Elizabeth Kowalski. This house was more in keeping with the tenets of the International Style: the dominant and expressionistic curvilinear element of the Mandel House was replaced by a more subdued curvilinear volume containing a spiral stairway faced with glass block (fig. 50). The relationship of the rooms and common area suggests an emphasis on functionality and the spare use of interior space (figs. 51 and 52). The influence of Mies van der Rohe's Tugendhat House, which Stone may have seen when he visited Brno, Czechoslovakia, on his Rotch scholarship, was evident in the volumetric massing, fenestration, and detailing of the home, particularly that of the rear façade (fig. 49). Apparently the town was upset by the work, and Stone remarked that local zoning regulations were instituted as a result of the house to prevent architecture of the sort from recurring.

In April 1936, Henry and Clare Boothe Luce purchased a 7,200-acre property called Mepkin, in Moncks Corner, South Carolina, some 40 miles north of Charleston. Returning from their honeymoon in Havana, they had visited the site in February on their way home to New York. Clare, touring the property in a driving rainstorm, was unimpressed until she saw the dense stands of live oaks by the Cooper River, which echoed the memories of her Tennessee childhood and led her to decide, "This is it." The Luces had been seeking a plantation property in South Carolina since even before their marriage. The fact that her longtime love, Bernard M. Baruch, also had a 17,000-acre estate near Georgetown, South Carolina, which she had visited, may also have played a role in her decision. The Luces purchased the property for $150,000 from Mrs. Nicholas G. Rutgers, Jr., of New York, who had received the estate as a gift from her father, James Wood Johnson, one of the cofounders of the pharmaceutical company Johnson & Johnson. Mepkin derives from an Indian word meaning "serene and lovely"; photographs from the era reinforce that description.

Before the Civil War, the land around the Cooper River had been under intensive cultivation, principally for rice farming. After the war, with the abolition of slavery, rice cultivation declined, and much of the area returned to alternating expanses of forests and wetlands, rich with waterfowl, fish, and deer. Because of the wildlife, in the late nineteenth century the area had been promoted by realtors as a refuge for the elite, particularly as a site for large hunting estates. One brokerage firm, Elliman & Mullally, issued promotional maps of the region for interested buyers that listed some of the region's property owners: Bernard M.
Baruch, Nelson Doubleday, Robert Goelet, Eugene du Pont, Harry Guggenheim, and George Widener, Jr.

The attention given to Stone and the Mandel House in Architectural Forum during the fall of 1935, as well as Howard Myers's advocacy in general, led Henry and Clare Boothe Luce to award him the commission for Mepkin. Luce had purchased Forum from Myers in 1932 and had retained Myers as the editor; Myers also assumed the informal role of Luce's architectural adviser. To complete the project, Stone was licensed in South Carolina in August 1936. He had sought a recommendation from Nelson A. Rockefeller to the licensing board, which Rockefeller provided in July 1936.

Despite the Luces' prominent public profile, the work is remarkably restrained. Four small cottages arranged parallel to the edge of the Cooper River surround a walled garden, a bowling green, and a reflecting pool (fig. 55). The site is approached through a dense grove of live oak trees (fig. 53). The main house, called Claremont, is directly on axis with the entry gate, and a reflecting pool reinforces the axial relationship (fig. 54). Each of the houses is extensively glazed on the river façade (fig. 56), less so on the courtyard façade. The whitewashed brick, rondels, and quoining flanking the door openings are ornamental and referential, contrary to International Style tenets. Similarly, the serpentine brick wall overtly recalls Jefferson's garden walls at the University of Virginia (fig. 57). Unlike Stone's earlier work on the Mandel and Kowalski houses, Mepkin has a softer tone, less a modernist polemic and more an exercise in integrating vernacular and historicist elements into a modern context. This is also the first work of Stone's that blurs the demarcation between the natural and the man-made environment. Stone's love of the landscape, given voice in his enthusiasm for the atrium of the Alhambra in Granada, the Estufa Fria in Lisbon, and the Pan American Building in Washington, D.C., was made manifest here. The garden courtyard, with its bowling green and reflecting pool, was the most important space, interior or exterior, in the estate. Stone described the complex as "compatible with the Charleston tradition" of walled residential compounds oriented toward internal garden courtyards (fig. 58).

As much as the setting provided a transporting fantasy of wildlife and landscape, the project provided a sobering example of the difficulties of using an inexperienced architect unfamiliar with the setting in which he is working. The flat roofs, which were not properly detailed for the amount of rainfall in South Carolina, leaked badly, ultimately damaging the structural framing and requiring its replacement. The air-conditioning system belched black smoke, discoloring the traditional décor designed by the Luces' friend and interior decorator, Gladys Freeman, and the generator that supplied power to the entire estate blew up. Repairing and replacing equipment in a remote and undeveloped area also proved daunting. In fairness, the mechanical and electrical problems could be traced to the incompetence of Stone's engineering firm, but ultimately the architect bore responsibility for the engineer's selection. Henry Luce was furious with Stone over the miscues, as Stone recounted:

I can tell you I was no darling with Mr. Luce. He gave me hell… It was a disaster—an adventure in the wilderness that I just wasn't well enough prepared for.
I did the best I could, but without proper knowledge.

Mepkin was published in Architectural Forum in June 1937, and Howard Myers was presented with the interesting dilemma of writing about a house that his employer and his best friend had produced. Forum's commentary was glowing:

The result is a composition intimate in scale, which places admirable emphasis on the magnificent surroundings. Many and caustic critics have claimed—with some justice—that in the domestic field the modernist fails to invest the house with a quality of graciousness quite as important as its functioning. Here is the refutation. That a group could have been built, so thoroughly modern in design, and yet so profoundly influenced by the traditions of southern living demonstrates the ability of the architect and the basic soundness and adaptability of modern architecture.

Stone had now associated himself with nationally important families, the Rockefellers and the Luces, on significant architectural projects. He had formed strong relationships with major American architects: Wallace K. Harrison, Leonard Schultze, Henry R. Shepley, and Ralph T. Wallace. He had become close friends with the most influential architectural journalist of the era, Howard Myers. He had established himself as one of the leading American practitioners of modern architecture by designing one of the most significant and publicized modern houses in the nation. Outwardly, everything seemed to be going well for Stone, but he still struggled financially, unable to generate work consistently; he was growing bored in his marriage; and his consumption of alcohol was beginning to loom as a problem for him, both personally and professionally.
Foreign Literature: Original Text and Translation

Original

The Water Level Control Circuit Design

China's total water resources rank sixth in the world, but the per capita share is only a quarter of the world average, and the geographical distribution is very uneven: in the vast region north of the Yangtze River, most large and medium-sized northern cities are in a state of water shortage. Water shortage has become an important factor restricting China's economic development, and the rational use of water resources is now an important issue facing the country. Achieving it requires not only strengthening water conservancy projects and raising public awareness of water conservation but, more importantly, applying new information technology so that hydrological information of all kinds can be grasped accurately and in real time, water scheduling and management decisions can be made correctly, and preventive measures can minimize the waste of water. Water level measurement has long been an important issue for hydrology and water resources departments. For the timely detection of the signs of an accident and for taking precautions early, an economical, practical and reliable wireless water-level monitoring system will play a major role. The water level is one of the important parameters of dam safety and of drainage and irrigation scheduling, water storage and flood discharge, and it provides a good foundation for the automated monitoring, transmission and processing of reservoir water levels.

The water level needs to be monitored in many areas of industrial and agricultural production. Where a site cannot be approached or staffed, remote monitoring allows operators to sit in a control room and watch the instruments, which is convenient and saves manpower. To ensure the safe production of a hydroelectric power station and to improve generation efficiency, the production process requires monitoring of the reservoir water level, the pressure drop across the trash rack, and the tailwater level. However, different power plants have different actual situations and technical requirements, and the measurement methods, measurement locations and monitoring equipment for the water-level parameters differ accordingly. This often results in a wide variety of monitoring equipment with poor interchangeability, which is not conducive to equipment maintenance and increases the complexity of design, production and installation. Therefore, on the basis of a comprehensive study of the actual situation and characteristics of hydropower water-level monitoring, and using modern electronic technology, especially single-chip microcontroller technology and non-volatile memory technology, it is of practical significance to design and develop a versatile, highly reliable, easily maintained, multi-mode automatic water-level monitoring system applicable to a variety of monitoring environments. According to the needs of reservoir water-level measurement, this project designs a remote microcontroller-based water-level monitoring system that automatically detects the water level, timestamps and processes the data, and uploads it remotely over GPRS.
The design of this monitoring system will significantly save manpower and resources; its low-power, 24-hour continuous monitoring and real-time upload of the reservoir water level better meet the needs of modern water-level measurement and provide a basis for dam safety, impoundment and spillway operation.

Embedded microcontrollers are widely used in industrial measurement and control systems, intelligent instruments and household appliances. In real-time detection and automatic control microcomputer application systems, the microcontroller is often used as the core component. The basic requirement of the water tower level control system is that, unattended, the motor starts automatically when the water level in the tower falls to the lower limit, supplying water to the tower, and switches off automatically to stop the supply when the level reaches the upper limit. In abnormal conditions an alarm sounds so that faults in the water supply system can be cleared at any time, ensuring that the tower can always supply water normally to the outside. Water towers are water-storage devices often seen in daily life and in industry; external supply is maintained by controlling their water level, so water-level control is a universal need. However rapidly the society and economy develop, water plays an important role in people's normal life and production. Once the supply is cut off, the consequence ranges from great inconvenience to people's daily lives to serious accidents and losses, so the water supply system is required to supply water in a timely, accurate, safe and adequate manner. If manual operation were still used, it would be labor-intensive and inefficient, and safety would be hard to guarantee, so transformation to an automated control system is necessary. In order to achieve a sufficient quantity of water and smooth water pressure, a low-cost, highly practical automatic controller for the tower water level is designed. The design uses a separate circuit for the high and low warning levels and for automatic control, saving energy and improving the quality of the water supply system.

A single-chip microcomputer (SCM, or microcontroller) is an integrated circuit chip that uses VLSI technology to integrate onto one piece of silicon a central processing unit (CPU) with data-processing capability, random access memory (RAM), read-only memory (ROM), various I/O ports, an interrupt system, and timers/counters (possibly also a display driver circuit, a pulse-width modulation circuit, an analog multiplexer, an A/D converter and other circuits), constituting a small computer system. Its basic features are as follows. The chip is small but complete; this is one of the main features of the SCM, which contains internal program memory, data memory and various interface circuits. Large processors have higher clock speeds, wider arithmetic units and stronger processing ability, but need external interface circuits; microcontrollers, generally clocked below 100 MHz, are suitable for small products that work independently, with pin counts ranging from a few to a few hundred. Application is simple and flexible, and SCM products can be developed freely in assembly language or in C.
The working process of the microcontroller: the microcontroller completes the tasks entrusted to it automatically; that is, it executes a program instruction by instruction. An instruction is the written form of one of the basic operations that the designer has assigned to the chip, and the complete set of instructions a microcontroller can execute is its instruction set; different types of microcontroller have different instruction sets. For the microcontroller to complete a specific task automatically, the problem to be solved must be compiled into a series of instructions (which must be instructions the selected microcontroller can recognize and execute). This collection of instructions becomes the program, and the program must be stored in advance in a component with storage capability: the memory. Memory is composed of many storage units (the smallest units of storage), much as a large building is composed of many rooms, and the instructions are stored in these units. Just as each room in a large building is assigned a unique room number, each memory cell is assigned a unique address number, known as the address of the storage unit; once the address is known, the storage unit can be found and the instruction stored there can be fetched and then executed. Programs are usually executed in order, with the instructions stored sequentially; for the microcontroller to fetch and execute these instructions one after another, there must be a component that keeps track of the address of the current instruction: the program counter (PC, contained in the CPU). When program execution starts, the PC is loaded with the address of the first instruction of the program; as each instruction is fetched for execution, the content of the PC automatically increases by an amount determined by the length of the current instruction (which may be 2 or 3 bytes), so that it points to the starting address of the next instruction and the instructions execute in sequence.

The basic design requirements of the microcontroller-based tower water-level control system are as follows. Inside the tower, we designed a simple water-level detection sensor to detect three levels: the low water level, the normal water level and the high water level. At low water, the sensor gives the microcontroller a high level, the pump is driven to add water, and the red lamp lights; while the level is in the normal range, the pump continues to add water and the green lamp lights; at high water, the pump stops adding water and the yellow lamp lights. The design process uses sensor technology, microcomputer technology, light-alarm technology, and the control of heavy-current circuits by light-current electronics.
Technical parameters and design tasks: (1) use the microcontroller to control the water level in the tower; (2) the water-level detection sensor probes in the tower feed signals to the microcontroller, which controls the pump, the water system and the display system; (3) design the light-alarm display circuit and the relay control of the pump and power circuits; (4) analyse the working principle of the system structure and draw a system block diagram.

Using the microcontroller as the control chip, the main working process is as follows: when the water in the tower is at the low level, the water-level detection sensor gives the microcontroller a high level; the microcontroller drives the pump to add water and the display system lights the red lamp. While the pump adds water and the level is within the normal range, the green lamp is lit. When the water level reaches the high-water mark, the microcontroller no longer drives the pump to add water, and the yellow lamp is lit.

Working principle of the light-alarm and relay-control circuits: when the water is at the low level, the low-water detection line does not conduct the +5 V supply; after treatment in the regulator circuit, a high level appears at its output and enters port P1.0 of the microcontroller, and another high level from the circuit enters port P1.1. After analysis, the microcontroller outputs a low level on port P1.2 to light the red lamp, and outputs a signal on P1.5 that makes the optocoupler conduct, closing the relay so that the pump adds water. When the water level is in the normal range, the pump continues to add water and pin P1.3 is driven low, lighting the green lamp. When the water level is in the high zone, both sensor detection lines conduct and the +5 V supply reaches the microcontroller; after analysis, pin P1.4 is driven low to light the yellow lamp, while P1.5 outputs a low level so that the optocoupler cannot conduct, the relay cannot close, and the pump cannot add water. When a fault occurs, the three lamps flash to indicate a system failure.
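To make the control logic above concrete, here is a minimal sketch in Keil-style C51 for an 8051-class microcontroller. It assumes the pin assignments described in the text (P1.0/P1.1 sensor inputs; P1.2, P1.3, P1.4 active-low red, green and yellow lamps; P1.5 driving the pump relay through the optocoupler); the signal polarities, the delay loop and all names are illustrative assumptions, not the original design.

```c
/* Minimal sketch of the tower water-level controller (Keil C51, 8051-class
   MCU). Pin roles follow the text; polarities are assumed for illustration. */
#include <reg51.h>

sbit LOW_SENSOR  = P1^0;   /* assumed: 1 = water below the low-level probe  */
sbit HIGH_SENSOR = P1^1;   /* assumed: 1 = water above the high-level probe */
sbit LED_RED     = P1^2;   /* active low: low water, pump running           */
sbit LED_GREEN   = P1^3;   /* active low: normal band, pump still filling   */
sbit LED_YELLOW  = P1^4;   /* active low: high water, pump off              */
sbit PUMP_DRIVE  = P1^5;   /* low level turns the optocoupler/relay on      */

static void delay(unsigned int n)      /* crude software delay for flashing */
{
    unsigned int i;
    for (i = 0; i < n; i++) { /* busy wait */ }
}

void main(void)
{
    while (1) {
        if (LOW_SENSOR && !HIGH_SENSOR) {          /* below the low mark   */
            PUMP_DRIVE = 0;                        /* relay closed: fill   */
            LED_RED = 0; LED_GREEN = 1; LED_YELLOW = 1;
        } else if (!LOW_SENSOR && !HIGH_SENSOR) {  /* normal band          */
            PUMP_DRIVE = 0;                        /* keep filling         */
            LED_RED = 1; LED_GREEN = 0; LED_YELLOW = 1;
        } else if (!LOW_SENSOR && HIGH_SENSOR) {   /* above the high mark  */
            PUMP_DRIVE = 1;                        /* relay open: stop     */
            LED_RED = 1; LED_GREEN = 1; LED_YELLOW = 0;
        } else {                                   /* contradictory inputs */
            PUMP_DRIVE = 1;                        /* fail safe: pump off  */
            LED_RED = !LED_RED;                    /* flash all three lamps*/
            LED_GREEN = !LED_GREEN;
            LED_YELLOW = !LED_YELLOW;
            delay(30000);
        }
    }
}
```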
The Starbucks Brandscape and Consumers' Experience of Globalization

Authors: Craig Thompson, Zeynep Arsel

Abstract: Previous research strongly suggests that the meeting of global brands and local cultures produces cultural heterogeneity. Little research, however, has investigated the ways in which global brands structure these heterogeneous cultural expressions and consumers' experiences of globalization. To redress this gap, we develop the construct of the hegemonic brandscape. Using the corporate culture that Starbucks imposes on local coffee shops, through its market operations, its servicescapes, and a body of oppositional views (i.e., anti-Starbucks discourse), we illustrate the influence of the hegemonic brandscape, which supports two different forms of local coffee-shop experience as consumers variously forge aesthetic, political, and anti-corporate expressions of their own.

Keywords: brand loyalty; cultural theory analysis; retail and store image; in-depth interviews; ethnography

"We have changed the way people live their mornings, the way they reward themselves, and the places where they meet."
– Starbucks CEO Orin Smith

Starbucks' marketing success has many facets. Starbucks transformed coffee from a social symbol of middle-class American coffee lovers into a mainstream consumer good, essentially creating the American coffee-shop market. In 1990 there were roughly 200 independent coffee houses in the United States; today there are more than 14,000, and Starbucks outlets account for about 30% of the total. The Starbucks coffee-kiosk model has proven easy to replicate globally and has now swept Canada, China, Japan, Taiwan, the United Kingdom and continental Europe, with bold plans to enter the birthplace of coffee itself. Starbucks' dominant market position, combined with its hyper-aggressive expansion strategy, has produced a striking rate of cannibalization among its own stores and has also made the brand a lightning rod for protest and criticism. For social critics who blame globalized corporate capitalism, Starbucks has become a cultural icon of everything that is insatiably greedy, predatory in intent and culturally homogenizing. Anti-Starbucks slogans, culture-jamming parodies of the Starbucks logo, and impassioned indictments of the company's business practices fill many corners of the Internet and have become hot topics in many online communities.

Academic researchers have also joined this cultural conversation about the consequences of globalization. For proponents of the homogenization thesis, global brands act as Trojan horses through which transnational corporations colonize local cultures. In recent years, however, anthropological research has built a strong empirical case against the homogenization thesis: consumers often interpret and choose global brands for their own purposes, creatively forging new cultural associations that soften the tension between transnational brands and local consumers, while transnational brands in turn adapt parts of their business culture to local cultures and ways of life.
Reforming Agricultural Trade: Not Just for the Wealthy Countries

In the early 1990s, Mozambique removed a ban on raw cashew exports, which had originally been imposed to guarantee a source of raw nuts to its local processing industry and to prevent a drop in exports of processed nuts. As a result, a million cashew farmers received higher prices for their nuts in the domestic market. But at least half the higher prices received for exports of these nuts went to traders, not to farmers, so there was no increase in production in response to the higher prices. At the same time, Mozambique's nut-processing industry lost its guaranteed supply of raw nuts and was forced to shut down processing plants and lay off 7,000 workers (FAO 2003).

In Zambia, before liberalization, maize producers benefited from subsidies to the mining sector, which lowered the price of fertilizer. A State buyer further subsidized small farmers. When these subsidies were removed and the para-State was privatized, larger farmers close to international markets saw few changes, but small farmers in remote areas were left without a formal market for their maize.

In Vietnam, trade liberalization was accompanied by tax reductions, land reforms, and marketing reforms that allowed farmers to benefit from increased sales to the market. As Vietnam made these investments, it began to phase out domestic subsidies and reduce border protection against imports. An aggressive program of targeted rural investments accompanied these reforms. During this liberalization, Vietnam's overall economy grew at 7% annually, agricultural output grew by 6%, and the proportion of undernourished people fell from 27% to 19% of the population. Vietnam moved from being a net importer of food to a net exporter (FAO 2003).

Similarly, in Zimbabwe, before liberalization of the cotton sector, the government was the single buyer of cotton from farmers, offering low prices in order to subsidize textile firms. Facing lower prices, commercial farmers diversified into other crops (tobacco, horticulture), but smaller farmers who could not diversify suffered. Internal liberalization eliminated price controls and privatized the marketing board. The result was higher cotton prices and competition among the three principal buyers. Poorer farmers benefited through increased market opportunities, as well as better extension and services. As a result, agricultural employment rose by 40%, with production of traditional and non-traditional crops increasing.

Policy reforms can decrease employment in the short run, but in general, changes in employment caused by trade liberalization are small relative to the overall size of the economy and the natural dynamics of the labor market. For some countries that rely heavily on one sector and do not have flexible economies, however, the transition can be difficult. Even though there are long-term and economy-wide benefits to trade liberalization, there may be short-term disruptions and economic shocks that are hard for the poor to endure.

Once a government decides to undertake a reform, the focus should be on easing the impact of reforms on the losers, whether through education, retraining, or income assistance. Government policy should also focus on helping those who will be able to compete in the new environment to take advantage of new opportunities. Even though trade on balance has a positive impact on growth, and therefore on poverty alleviation, developing countries should pursue trade liberalization with a pro-poor strategy.
In other words, they should focus on liberalizing those sectors that will absorb non-skilled labor from rural areas as agriculture becomes more competitive. The focus should be on trade liberalization that will enhance economic sectors with the potential to employ people in deprived areas. Trade liberalization must be complemented by policies to improve education, rural roads, communications, etc., so that liberalization can be positive for people living in rural areas, not just in urban centers or favored areas. These underlying issues need to be addressed if trade (or any growth) is to reach the poorest; or the reforms and liberalization need to be directed toward smallholders, and landless and unskilled labor.

BUT THE POOR IN DEVELOPING COUNTRIES DON'T BENEFIT EQUALLY

All policies create winners and losers. Continuing the status quo simply maintains the current cast of winners and losers. Too often in developing countries, the winners from current policies are not the poor living in rural areas. Policy reforms (whether in trade or in other areas) simply create a different set of winners and losers.

Notwithstanding the overall positive analyses of the impact of trade liberalization on developing countries as a group, there are significant variations by country, by commodity, and across sectors within developing countries. Most analysts combine all but the largest developing countries into regional groupings, so it is difficult to determine the precise impacts on individual countries. Even those studies that show long-term or eventual gains for rural households or for the poor do not focus on the costs imposed during the transition from one regime to another. It is even more difficult to evaluate the impact on different types of producers within different countries, such as smallholders and subsistence farmers. Also, economic models cannot evaluate how trade policies will affect poverty among different households, or among women and children within households.

Allen Winters (2002) has proposed a useful set of questions that policy-makers should ask when they consider trade reforms:

1. Will the effects of changed border prices be passed through the economy? If not, the effects – positive or negative – on poverty will be muted.
2. Is reform likely to destroy or create markets? Will it allow poor consumers to obtain new goods?
3. Are reforms likely to affect different household members – women, children – differently?
4. Will spillovers be concentrated on areas/activities that are relevant to the poor?
5. What factors – land, labor, and capital – are used in what sectors? How responsive is the supply of those factors to changes in prices?
6. Will reform reduce or increase government revenue? By how much?
7. Will reforms allow people to combine their domestic and international activities, or will they require people to switch from one to another?
8. Does the reform depend on or affect the ability of poor people to assume risks?
9. Will reforms cause major shocks for certain regions within the country?
10. Will transitional unemployment be concentrated among the poor?

Although trade liberalization is often blamed for increasing poverty in developing countries, the links between trade liberalization and poverty are more complex. Clearly, more open trade regimes lead to higher rates of economic growth, and without economic growth any effort to alleviate poverty, hunger, and malnutrition will be unproductive.
But without accompanying national policies in education, health, land reform, micro-credit, infrastructure, and governance, economic growth (whether derived from trade or other sources) is much less likely to alleviate poverty, hunger, and malnutrition in the poorest developing countries.

CONCLUSIONS

The imperative to dismantle unjust structures and to halt injurious actions is enshrined in the Millennium Development Goals and in the goals of the Doha Development Round. This imperative has been directed primarily at the OECD countries, which maintain high levels of agricultural subsidies and protection against many commodities that are vital to the economic well-being of developing countries. The OECD countries must reduce their trade barriers and reduce and reform their domestic subsidies; but, as this chapter makes clear, the OECD reforms must be accompanied by trade policy reforms in the developing countries as well.

Open trade is one of the strongest forces for economic development and growth. Developing countries and civil society groups who oppose these trade reforms in order to 'protect' subsistence farmers are doing these farmers a disservice. Developing countries and civil society are correct in the narrow view that markets cannot solve every problem and that there is a role for government and for public policies. As the Doha negotiators get down to business, their energies would be better used in ensuring that developing countries begin to prepare for a more open trade regime by enacting policies that promote overall economic growth and agricultural development. Their energies would be better spent convincing the population (taxpayers and consumers) in developed countries of the need for agricultural trade reform, and convincing the multilateral aid agencies to help developing countries invest in public goods and public policies that ensure trade policy reforms are pro-poor.

It is clear from an examination of the evidence that trade reform, by itself, does not exacerbate poverty in developing countries. Rather, the failure of trade reforms to alleviate poverty lies in underlying economic structures, adverse domestic policies, and the lack of strong flanking measures. To ensure that trade reform is pro-poor, the key is not to seek additional exemptions from trade disciplines for developing countries, which will only be met with counter-demands for other exemptions by developed countries, but to ensure that the WTO agreement is strong and effective in disciplining subsidies and reducing barriers to trade by all countries.

Open trade is a key determinant of economic growth, and economic growth is the only path to poverty alleviation. This is as true in agriculture as in other sectors of the economy. In most cases, trade reforms in agriculture will benefit the poor in developing countries. In cases where the impact of trade reforms is ambiguous or negative, the answer is not to postpone trade reform. Rather, trade reforms must be accompanied by flanking policies that make needed investments or provide needed compensation, so that trade-led growth can benefit the poor.
Reading Material

(1) Plumbing

In general, plumbing refers to the system of pipes, fixtures, and other apparatus used inside a building for supplying water and removing liquid and waterborne wastes. In practice, the term includes storm water or roof drainage and exterior system components connecting to a source such as a public water system or a point of disposal such as a public sewer system or a domestic septic tank or cesspool.

The purpose of plumbing systems is, basically, to bring into, and distribute within, a building a supply of safe water to be used for drinking purposes and to collect and dispose of polluted and contaminated wastewater from the various receptacles on the premises without hazard to the health of occupants. Codes, regulations, and trade practices are designed to keep the water system separated from drainage systems; to prevent the introduction of harmful material such as chemicals, micro-organisms, and dirt; and to keep the water system safe under all operating conditions. These protective codes also are designed to prevent flooding of drainage lines, provide venting of dangerous gases, and eliminate opportunities for backflow of dangerous waste water into the water system. It is essential that disease-producing organisms and harmful chemicals be confined to the drainage system.

Since the time of Moses man has been cautioned to dispose of his wastes safely, and cleanliness has been related to the availability of water and associated with social custom. Early man often lived near a water source that served as his water supply and drainage system in one. It was also his bath. Latrine-like receptacles with crude drains have been found in excavations in the Orkney Islands of Neolithic stone huts at least 10,000 years old. Both a water system and piping used as drainage fashioned of terra-cotta pipe were part of the royal palace of Minos in Crete, about 2000 BC. The palace also had a latrine with water-flushing reservoir and drainage. Nothing comparable to it was developed in Europe until the 18th century. Even the equipment of the modern bathroom, though much improved with hot and cold water under pressure and less crude provisions for drainage, is in concept little different from the Minoan version. It was not until the end of the 19th century that advances in plumbing practice were given serious attention as an integral part of housing.

A building plumbing system includes two components: the piping that brings potable water into the building and distributes it to all fixtures and water outlets, and the piping that collects the water after use and drains it to a point of safe disposal.

Water systems. When a building is served by a public water system, the plumbing begins at the service connection with the public supply. It includes all meters, pumps, valves, piping, storage tanks, and connections required to make water available at outlets serving the fixtures or equipment within the building.

Many premises in rural areas are not served by public water supply. These may include private dwellings, apartment houses, hotels, commercial centres, hospitals, institutions, factories, roadside stands, and restaurants.

Public water supplies have surface water or groundwater as their source. Large water systems are almost entirely supplied with surface water. In smaller communities and in certain areas groundwater is obtained from wells or springs.
Independent semipublic, industrial, and private-premise water systems frequently take water from wells on the premises but may, under certain conditions, draw water from a spring, lake, or stream.

Public water systems supply treated water meeting public water-supply drinking-water standards. Private-premise systems are expected to provide water of equal quality, and to do so the private system requires a water-treatment plant including chlorination as a minimum and possibly sedimentation (settling out of solid particles), chemical treatment, primarily for softening, and filtration.

Water is supplied to fixtures and outlets under pressure provided by pumps or elevated storage tanks or both. In some installations a pump controlled by a pressure-activated switch on a pressurized storage tank takes water from a well and pumps until the upper limit of pressure for the system has been reached.
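The pressure-switch arrangement just described is, in control terms, a simple hysteresis (cut-in/cut-out) scheme. The following minimal Python sketch illustrates the logic; the set-point pressures are assumptions chosen for the example, not figures from the text.

# Sketch of the pressure-switch pump control described above: the pump
# starts ("cuts in") when tank pressure falls to a lower set point and
# stops ("cuts out") at an upper set point. Set points are illustrative.

CUT_IN_PSI = 30.0   # pump starts at or below this pressure (assumed)
CUT_OUT_PSI = 50.0  # pump stops at or above this pressure (assumed)

def pump_command(pressure_psi: float, pump_on: bool) -> bool:
    """Return the new pump state for the current tank pressure."""
    if pressure_psi <= CUT_IN_PSI:
        return True           # pressure too low: start pumping
    if pressure_psi >= CUT_OUT_PSI:
        return False          # upper pressure limit reached: stop pumping
    return pump_on            # between set points: keep current state

# Example: pressure drifting down as fixtures draw water, then recovering
state = False
for p in [48.0, 40.0, 31.0, 29.5, 35.0, 50.5]:
    state = pump_command(p, state)
    print(f"{p:5.1f} psi -> pump {'ON' if state else 'OFF'}")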
A Riccati Equation Approach to the Stabilization of Uncertain Linear Systems

IAN R. PETERSEN and CHRISTOPHER V. HOLLOT

Abstract

This paper presents a method for designing a feedback control law to stabilize a class of uncertain linear systems. The systems under consideration contain uncertain parameters whose values are known only to lie within a given compact bounding set. Furthermore, these uncertain parameters may be time-varying. The method used to establish asymptotic stability of the closed-loop system obtained when the feedback control is applied involves the use of a quadratic Lyapunov function. The main contribution of this paper is the development of a computationally feasible algorithm for the construction of a suitable quadratic Lyapunov function. Once the Lyapunov function has been obtained, it is used to construct the stabilizing feedback control law. The fundamental idea behind the algorithm involves constructing an upper bound for the Lyapunov derivative corresponding to the closed-loop system. This upper bound is a quadratic form. By using this upper bounding procedure, a suitable Lyapunov function can be found by solving a certain matrix Riccati equation. Another major contribution of this paper is the identification of classes of systems for which the success of the algorithm is both necessary and sufficient for the existence of a suitable quadratic Lyapunov function.

Key words: Feedback control; Uncertain linear systems; Lyapunov methods; Riccati equation

1. INTRODUCTION

This paper deals with the problem of designing a controller when no accurate model is available for the process to be controlled. Specifically, the problem of stabilizing an uncertain linear system using state feedback control is considered. In this case the uncertain linear system consists of a linear system containing parameters whose values are unknown but bounded. That is, the values of these uncertain parameters are known to be contained within given compact bounding sets. Furthermore, these uncertain parameters are assumed to vary with time. The problem of stabilizing uncertain linear systems of this type has attracted a considerable amount of interest in recent years.
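For a concrete handle on the Riccati-equation route to a quadratic Lyapunov function, the following minimal Python sketch treats the nominal case with no uncertainty, for which (as noted later in the paper) the augmented equation reduces to the ordinary linear quadratic regulator equation. The matrices here are illustrative only, not taken from the paper.

# Sketch: for the nominal system xdot = A x + B u, the ordinary LQR
# Riccati equation  A'P + P A - P B R^{-1} B' P + Q = 0  yields P > 0;
# V(x) = x'Px is then a quadratic Lyapunov function for the closed loop
# under u = -R^{-1} B' P x.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [2.0, -1.0]])    # open-loop unstable example
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                               # designer-chosen weights
R = np.eye(1)

P = solve_continuous_are(A, B, Q, R)        # positive definite solution
K = np.linalg.solve(R, B.T @ P)             # stabilizing gain, u = -K x

Acl = A - B @ K
print("closed-loop eigenvalues:", np.linalg.eigvals(Acl))   # all Re < 0
print("eigenvalues of P (all > 0):", np.linalg.eigvals(P))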
In Leitmann (1979, 1981) and Gutman and Palmor (1982), the uncertainty in the system is assumed to satisfy the so-called "matching conditions". These matching conditions constitute sufficient conditions for a given uncertain system to be stabilizable. In Corless and Leitmann (1981) and Barmish, Corless and Leitmann (1983), this approach is extended to uncertain non-linear systems. However, even for uncertain linear systems the matching conditions are known to be unduly restrictive. Indeed, it has been shown in Barmish and Leitmann (1982) and Hollot and Barmish (1980) that there exist many uncertain linear systems which fail to satisfy the matching conditions and yet are nevertheless stabilizable. Consequently, recent research efforts have been directed towards developing control schemes which will stabilize a larger class of systems than those which satisfy the matching conditions; e.g. Barmish and Leitmann (1982), Hollot and Barmish (1980), Thorp and Barmish (1981), Barmish (1982, 1985) and Hollot (1984). The main aim of this paper is to enlarge the class of uncertain linear systems for which one can construct a stabilizing feedback control law. It should be noted, however, that in contrast to Corless and Leitmann (1981), Barmish, Corless and Leitmann (1983) and Petersen and Barmish (1986), attention will be restricted to uncertain linear systems here.

The Lyapunov approach, which proved successful in robust control of linear systems in the 1980s, entered the non-linear control field in the 1990s and is a principal design method for robust stabilization of non-linear systems. When this method is used to design a robustly stable system, one first assumes that the uncertainty present in the real system is unknown but belongs to a certain described set; that is, the uncertain factors can be expressed as bounded unknown parameters, bounded unknown perturbation functions, and unmodelled dynamics of the controlled plant. One then constructs a proper Lyapunov function which guarantees that the whole system remains stable for any element of the uncertainty set. Precisely because of this generality, the approach tends to be conservative, whether it is used for stability analysis or for stabilizing synthesis. Researchers have therefore attempted to extend the mature theory of linear systems to non-linear systems.
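The matching conditions referred to above require, roughly speaking, that the uncertainty enter the dynamics through the input channel, i.e. that each uncertain term A_i can be written as B M_i for some matrix M_i. The following Python sketch is an interpretation of that standard condition, not a construction taken from this paper; it checks the condition numerically by least squares, with illustrative matrices.

# Sketch: testing the "matching condition" for an uncertain term A1 in
# xdot = (A + r(t) A1) x + B u. Matching requires A1 = B @ M for some M,
# i.e. the uncertainty lies in the range of the input matrix B.
import numpy as np

B = np.array([[0.0], [1.0]])
A1_matched = np.array([[0.0, 0.0], [1.0, 2.0]])    # lies in range(B)
A1_unmatched = np.array([[1.0, 0.0], [0.0, 1.0]])  # does not

def satisfies_matching(A1, B, tol=1e-9):
    M, *_ = np.linalg.lstsq(B, A1, rcond=None)     # best M with B M ~ A1
    return np.linalg.norm(B @ M - A1) < tol

print(satisfies_matching(A1_matched, B))    # True
print(satisfies_matching(A1_unmatched, B))  # False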
In recent years the concept of relative degree has been introduced for non-linear systems; its significance lies in the fact that it describes the essence of the non-linear structure of the system. For an affine non-linear system, the concept of relative degree can be used to decompose the system into a linear part and a non-linear part: the non-linear part is the unobservable zero-dynamics subsystem, while the linear part is both controllable and observable. It has been proved, in the one-dimensional case, that if the zero-dynamics subsystem is globally asymptotically stable, then the whole system can be globally asymptotically stabilized. Combining feedback with input-output linearization in this way yields good control results; see, for example, reference [1] (a toy numerical illustration of this idea is given at the end of this passage).

In all the references cited above dealing with uncertain linear systems, the stability of the closed-loop uncertain system is established using a quadratic Lyapunov function. This motivates the concept of quadratic stabilizability, which is formalized in Section 2; see also Barmish (1985). Furthermore, in Barmish (1985) and Petersen (1983), it is shown that the problem of stabilizing an uncertain linear system can be reduced to the problem of constructing a suitable quadratic Lyapunov function for the system. Consequently, a major portion of this paper is devoted to this problem. Various aspects of the problem of constructing suitable quadratic Lyapunov functions have been investigated in Hollot and Barmish (1980), Thorp and Barmish (1981), Hollot (1984), Chang and Peng (1972), Noldus (1982) and Petersen (1983). One approach to finding a suitable quadratic Lyapunov function involves solving an "augmented" matrix Riccati equation which has been specially constructed to account for the uncertainty in the system; e.g. Chang and Peng (1972) and Noldus (1982).
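Returning to the relative-degree and feedback-linearization idea discussed at the start of this passage, the following minimal Python sketch shows the mechanism on a toy single-input system of relative degree two. The functions f and g and the gains are invented for the example.

# Sketch of input-output feedback linearization for a toy system
#   x1dot = x2,  x2dot = f(x) + g(x) u,  y = x1  (relative degree 2).
# Choosing u = (v - f(x)) / g(x) (valid while g(x) != 0) renders the
# input-output map a double integrator, which the outer-loop control
# v = -k1 x1 - k2 x2 then stabilizes.
import numpy as np
from scipy.integrate import solve_ivp

f = lambda x: x[0] ** 2 - np.sin(x[1])   # example nonlinearity
g = lambda x: 1.0 + 0.5 * np.cos(x[0])   # bounded away from zero here
k1, k2 = 4.0, 4.0                        # places both poles at s = -2

def closed_loop(t, x):
    v = -k1 * x[0] - k2 * x[1]           # linear outer-loop control
    u = (v - f(x)) / g(x)                # cancel the nonlinearity
    return [x[1], f(x) + g(x) * u]       # equals [x2, v] after cancellation

sol = solve_ivp(closed_loop, (0.0, 6.0), [1.0, -0.5])
print("final state (should approach 0):", sol.y[:, -1])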
The results presented in this paper go beyond Noldus (1982) in that uncertainty is allowed in both the "A" matrix and the "B" matrix. Furthermore, a number of classes of uncertain systems are identified for which the success of this method becomes necessary and sufficient for the existence of a suitable quadratic Lyapunov function. The fundamental idea behind the approach involves constructing a quadratic form which serves as an upper bound for the Lyapunov derivative corresponding to the closed-loop uncertain system. This procedure motivates the introduction of the term quadratic bound method to describe the procedure used in this paper. The benefit of quadratic bounding stems from the fact that a candidate quadratic Lyapunov function can easily be obtained by solving a matrix Riccati equation. For the special case of systems without uncertainty, this "augmented" Riccati equation reduces to the "ordinary" Riccati equation which arises in the linear quadratic regulator problem, e.g. Anderson and Moore (1971). Hence, the procedure presented in the paper can be regarded as an extension of the linear quadratic regulator design procedure.

2. SYSTEM AND DEFINITIONS

A class of uncertain linear systems $(\Sigma)$ described by the state equations

$$\dot{x}(t) = \Big[ A_0 + \sum_{i=1}^{k} r_i(t) A_i \Big] x(t) + \Big[ B_0 + \sum_{i=1}^{l} s_i(t) B_i \Big] u(t)$$

is considered, where $x(t) \in R^n$ is the state, $u(t) \in R^m$ is the control, and $r(t) \in R^k$ and $s(t) \in R^l$ are vectors of uncertain parameters. The functions r(·) and s(·) are restricted to be Lebesgue measurable vector functions. Furthermore, the matrices $A_i$ and $B_i$ are assumed to be rank-one matrices of the form $A_i = d_i e_i'$ and $B_i = f_i g_i'$. In the above description, $r_i(t)$ and $s_i(t)$ denote the components of the vectors r(t) and s(t), respectively.

Remarks: Note that an arbitrary n × n matrix $A_i$ can always be decomposed as a sum of rank-one matrices; i.e. for the system $(\Sigma)$, one can write $A_i = \sum_{j=1}^{p} A_{ij}$ with each $A_{ij}$ of rank one. Consequently, if $r_i A_i$ is replaced by $\sum_{j=1}^{p} r_{ij} A_{ij}$ and the constraint $|r_{ij}(t)| \le \bar{r}$ is included for all i and j, then this "overbounding" of the uncertainties will result in a system which satisfies the rank-one assumption. Moreover, stabilizability of this "larger" system will imply stabilizability for $(\Sigma)$. At this point, observe that the rank-one decompositions of the $A_i$ and $B_i$ are not unique. For example, $d_i$ can be multiplied by any scalar if $e_i$ is divided by the same scalar. This fact represents one of the main weaknesses of the approach. That is, the quadratic bound method described in the sequel may fail for one decomposition of the $A_i$ and $B_i$ and yet succeed for another. At the moment, there is no systematic method for choosing the best rank-one decompositions and, therefore, this would constitute an important area for future research. A final observation concerns the bounds on the uncertain parameters: it has been assumed that each parameter satisfies the same bound. For example, one has $|r_i(t)| \le \bar{r}$ rather than separate bounds $|r_i(t)| \le \bar{r}_i$. This assumption can be made without loss of generality. Indeed, any variation in the uncertainty bounds can be eliminated by suitable scaling of the matrices $A_i$ and $B_i$.

The weighting matrices Q and R. Associated with the system $(\Sigma)$ are the positive definite symmetric weighting matrices $Q \in R^{n \times n}$ and $R \in R^{m \times m}$. These matrices are chosen by the designer. It will be seen in Section 4 that these matrices are analogous to the weighting matrices in the classical linear quadratic regulator problem.
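As a numerical companion to the rank-one system description above (and to the quadratic stabilizability definition that follows), the Python sketch below checks a quadratic stability condition by sampling the extreme values of a scalar uncertain parameter. It does not reproduce the paper's augmented Riccati construction; as a stand-in it solves the nominal LQR Riccati equation for P and then tests the closed-loop Lyapunov derivative matrix at the uncertainty vertices. All matrices are illustrative.

# Sketch: checking quadratic stability for a rank-one uncertain system
#   xdot = (A0 + r(t) d e') x + B0 u,   |r(t)| <= rbar,
# under u = -K x. Since the Lyapunov derivative matrix is affine in r,
# it suffices to test it at the extreme values of r.
import numpy as np
from scipy.linalg import solve_continuous_are

A0 = np.array([[0.0, 1.0], [-1.0, -0.5]])
B0 = np.array([[0.0], [1.0]])
d = np.array([[1.0], [0.0]])
e = np.array([[0.0], [1.0]])
rbar = 0.3                                  # assumed uncertainty bound

P = solve_continuous_are(A0, B0, np.eye(2), np.eye(1))   # nominal Riccati
K = B0.T @ P                                # u = -K x (with R = I)

worst = -np.inf
for r in (-rbar, rbar):                     # extreme parameter values
    Acl = A0 + r * (d @ e.T) - B0 @ K
    M = Acl.T @ P + P @ Acl                 # Lyapunov derivative matrix
    worst = max(worst, np.linalg.eigvalsh(M).max())

print("largest eigenvalue of Acl'P + P Acl over vertices:", worst)
print("quadratic stability holds for this sample" if worst < 0 else
      "bound violated: an augmented construction would be needed")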
The formal definition of quadratic stabilizability is now presented.

Definition 2.1. The system $(\Sigma)$ is said to be quadratically stabilizable if there exists a continuous feedback control $p(\cdot): R^n \to R^m$ with p(0) = 0, an n × n positive definite symmetric matrix P, and a constant $\alpha > 0$ such that the following condition is satisfied: given any admissible uncertainties r(·) and s(·), the Lyapunov derivative corresponding to the closed-loop system and the quadratic Lyapunov function $V(x) = x'Px$ satisfies the inequality

$$L(x,t) = x'\Big[\Big(A_0 + \sum_{i=1}^{k} r_i(t) A_i\Big)' P + P \Big(A_0 + \sum_{i=1}^{k} r_i(t) A_i\Big)\Big] x + 2 x' P \Big[B_0 + \sum_{i=1}^{l} s_i(t) B_i\Big] p(x) \le -\alpha \|x\|^2 \qquad (2.1)$$

for all non-zero $x \in R^n$ and all $t \in [0, \infty)$.

To clarify the definitions and theorems which follow, it is useful to rewrite the Lyapunov derivative inequality (2.1). Indeed, applying the state space transformation $x = S\tilde{x}$ with $S := P^{-1}$, a transformed version of the inequality is obtained. In order to present a necessary and sufficient condition for quadratic stabilizability of $(\Sigma)$, some preliminary definitions are required.

Definition 2.2. The set N and the function λ(·) are defined. In the following definition, a condition referred to as the modified matching condition is introduced. It will be seen in the next section that uncertainty matrices $A_i$ satisfying this condition will not enter into the construction of a quadratic Lyapunov function for the system $(\Sigma)$; see also Petersen (1985).

Definition 2.3. Given any $i \in \{1, 2, \ldots, k\}$, the matrix $A_i$ is said to satisfy the modified matching condition if

$$y' A_i x = 0 \quad \text{for all } y \in N \text{ and all } x \in R^n.$$

In linear control theory, decentralized control methods were developed to overcome the shortcomings of purely local control of subsystems; the decentralized control theory of large-scale linear systems has been applied in the power system control field to solve the problem of coordinating the many controllers in a multi-machine power system. Because the intuitive way of thinking about multi-machine power system control takes centralized control methods as its basis, research on decentralized control investigates how the amount of communication required by centralized control can be reduced. One reference first designed a full state-feedback control system and found, through analysis, that apart from the rotor phase angles of the other areas, the remaining states have little effect on the overall control result and can be neglected directly, which yields a "pseudo-decentralized" control scheme. Finally, the phase angles of the other areas are computed from local current information, so that the control strategy is implemented in a decentralized manner. A key problem in the design of decentralized controllers is how to handle the influence of the interconnection terms between the subsystems.
When decomposing the large-scale system, one reference adopted an overlapping technique: the model of each subsystem includes the states of part of the remaining subsystems, and the controller designed for it still needs to feed back the states of neighbouring subsystems, so fully decentralized control cannot be achieved. For this reason, another reference designs each local controller on the basis of the model of the whole system, while restricting the control structure of each local controller to be decentralized (i.e. only locally measured signals are fed back). This method has been further improved by extending decentralized state-feedback control to decentralized output-feedback control. In the linearized model of a power system, certain special locally measurable quantities (such as the generator output power, terminal current, voltage, etc.) are closely related to the system state; introducing these local measurements into the decentralized controller can replace feedback measurements of the whole system state. This approach has been called the interconnection-measurement decentralized coordinated control method, and it is a very valuable research direction.

CONCLUSIONS

The quadratic bound algorithm presented in this paper provides a computationally feasible procedure for the stabilization of an uncertain linear system. Although the approach gives only a sufficient condition for quadratic stabilizability, a number of cases have been given for which the method is both necessary and sufficient for quadratic stabilizability. Furthermore, most other methods for stabilizing an uncertain linear system involve, either implicitly or explicitly, the use of a quadratic Lyapunov function. Therefore, the "tightness" results will prove useful when comparing this approach with other methods for stabilizing an uncertain linear system. As mentioned in Section 2, one area for future research concerns finding the best rank-one decompositions of the matrices. Another would involve investigating Riccati equations of the form (3.6). In particular, it would be desirable to give some algebraic or geometrical condition for the existence of a positive definite solution to this Riccati equation.
Attachment 3: Original Foreign-Language Text

Clusters and Competitiveness: A New Federal Role for Stimulating Regional Economies

By Karen G. Mills, Elisabeth B. Reynolds, and Andrew Reamer

Clusters reinvigorate regional competitiveness. In recent decades, the nation's economic dominance has eroded across an array of industries and business functions. In the decades following World War II, the United States built world-leading industries that provided well-paying jobs and economic prosperity to the nation. This dominance flowed from the nation's extraordinary aptitude for innovation as well as a relative lack of international competition. Other nations could not match the economic prowess of the U.S. due to some combination of insufficient financial, human, and physical capital and economic and social systems that did not value creativity and entrepreneurship.

However, while the nation today retains its preeminence in many realms, the dramatic expansion of economic capabilities abroad has seen the U.S. cede leadership, market share, and jobs in an ever-growing, wide-ranging list of industries and business functions. Initially restricted to labor-intensive, lower-skill activities such as apparel and electronic parts manufacturing, the list of affected U.S. operations has expanded to labor-intensive, higher-skill ones such as furniture-making and technical support call centers; capital-intensive, higher-skill ones such as auto, steel, and information technology equipment manufacturing; and, more recently, research and development (R&D) activities in sectors as diverse as computers and consumer products. Looking ahead, the nation's capability for generating and sustaining stable, sufficiently well-paying jobs for a large number of U.S. workers is increasingly at risk. Across numerous industries, U.S.-based operations have not been fully effective in responding to competitive challenges from abroad. Many struggle to develop and adopt the technological innovations (in products and production processes) and institutional innovations (new ways of organizing firms and their relationships with customers, suppliers, and collaborators) that sustain economic activity and high-skill, high value-added jobs. As a result, too many workers are losing decent jobs without prospect of regaining them and too many regions are struggling economically.

In this environment, regional industry clusters provide a valuable mechanism for boosting national and regional competitiveness. Essentially, an industry cluster is a geographic concentration of interconnected businesses, suppliers, service providers, and associated institutions in a particular field. Defined by relationships rather than a particular product or function, clusters include organizations across multiple traditional industrial classifications (which makes drawing the categorical boundaries of a cluster a challenge).
Specifically, participants in an industry cluster include:

• organizations providing similar and related goods or services
• specialized suppliers of goods, services, and financial capital (backward linkages)
• distributors and local customers (forward linkages)
• companies with complementary products (lateral linkages)
• companies employing related skills or technologies or common inputs (lateral linkages)
• related research, education, and training institutions such as universities, community colleges, and workforce training programs
• cluster support organizations such as trade and professional associations, business councils, and standards-setting organizations

The power of clusters to advance regional economic growth was described (using the term "industrial districts") in the pioneering work of Alfred Marshall in 1890. With the sizeable upswing in regional economic restructuring in recent decades, understanding of and interest in the role of clusters in regional competitiveness again has come to the fore through the work of a number of scholars and economic development practitioners. In particular, the efforts of Michael Porter, in a dual role as scholar and development practitioner, have done much to develop and disseminate the concept.

Essentially, industry clusters develop through the attractions of geographic proximity: firms find that the geographic concentration of similar, related, complementary, and supporting organizations offers a wide array of benefits. Clusters promote knowledge sharing ("spillovers") and innovations in products and in technical and business processes by providing thick networks of formal and informal relationships across organizations. As a result, companies derive substantial benefits from participation in a cluster's "social structure of innovation." A number of studies indicate a positive correlation between clusters and patenting rates, one measure of the innovation process.

What is more, clusters enhance firm access to specialized labor, materials, and equipment and enable lower operating costs. Highly concentrated markets attract skilled workers by offering job mobility, and attract specialized suppliers and service providers (such as parts makers, workforce trainers, marketing firms, or intellectual property lawyers) by providing substantial business opportunities in close proximity. And concentrated markets tend to provide firms with various cost advantages; for example, search costs are reduced, market economies of scale can cut costs, and price competition among suppliers can be heightened.

Entrepreneurship is one important means through which clusters achieve their benefits. Dynamic clusters offer the market opportunities and the conditions (culture, social networks, inter-firm mobility, access to capital) that encourage new business development.

In sum, clusters stimulate innovation and improve productivity. In so doing, they are a critical element of national and regional competitiveness. After all, the nation's economy is essentially an amalgamation of regional ones, the health of which depends in turn on the competitiveness of its traded sector, that part of the economy which provides goods and services to markets that extend beyond the region. In metropolitan areas and most other economic regions of any size, the traded sector contains one or more industry clusters. In this respect, the presence and strength of industry clusters has a direct effect on economic performance, as demonstrated by a number of recent studies.
A strong correlation exists between gross domestic product per capita and cluster concentrations. Several studies show a positive correlation between cluster strength and wage levels in clusters. And a third set of studies indicates that regions with strong clusters have higher regional and traded-sector wages.

For purposes of economic development policy, meanwhile, it should be kept in mind that every cluster is unique. Clusters come in a variety of purposes, shapes, and sizes and emerge out of a variety of initial conditions. (See Appendix A for examples.) The implication is that one size, in terms of policy prescription, does not fit all. Moreover, clusters differ considerably in their trajectory of growth, development, and adjustment in the face of changing market conditions. The accumulation of evidence suggests, in this respect, that there are three critical factors of cluster success: collaboration (networks and partnerships), skills and abilities (human resources), and organizational capacities to generate and take advantage of innovations. Any public policy for clusters, then, needs to aim at spurring these success factors.

Policy also needs to recognize that cluster success breeds success: the larger a cluster, the greater the benefits it generates in terms of innovation and efficiencies, the more attractive it becomes to firms, entrepreneurs, and workers as a place to be, the more it grows, and so on. As a result, most sectors have a handful of dominant clusters in the U.S. As the dominant clusters continually pull in firms, entrepreneurs, and workers, it is difficult for lower-tier regions to break into the dominant group. For instance, the biotech industry is led by the Boston and San Francisco clusters, followed by San Diego, Seattle, Raleigh-Durham, Washington-Baltimore, and Los Angeles. Moreover, as suggested by the biotech example, the dominant clusters tend to be in larger metro areas. Larger metros (almost by definition) tend to have larger traded clusters, which offer a greater degree of specialization and diversity, which lead to patenting rates almost three times higher than smaller metros. The implication is that public policy needs to be realistic; not every region can be, as many once hoped, the next Silicon Valley.

At the same time, not even Silicon Valley can rest on its laurels. While the hierarchy of clusters in a particular industry may be relatively fixed for a period of time, the transformation of the American industrial landscape from the 1950s (when Detroit meant cars, Pittsburgh meant steel, and Hartford meant insurance) to the present makes quite clear that cluster dominance cannot be taken for granted. This is true now more than ever: as innovation progresses, many clusters have become increasingly vulnerable, for three related reasons.

First, since the mid-20th century, transportation and communications innovations have allowed manufacturers to untether production capacity from clusters and scatter isolated facilities around the nation and the world, to be closer to new markets and to take advantage of lower wage costs. Once relatively confined to the building of "greenfield" branch plants in less industrial, non-union areas of the U.S., the shift of nondurables manufacturing to non-U.S. locations is a more recent manifestation of this phenomenon.
Further, these innovations have enabled foreign firms to greatly increase their share of markets once dominated by American firms and their associated home-based clusters.

Second, more recent information technology innovations have allowed the geographic disaggregation of functions that traditionally had been co-located in a single cluster. Firms now have the freedom to place headquarters, R&D, manufacturing, marketing and sales, and distribution and logistics in disparate locations in light of the particular competitive requirements (e.g., skills, costs, access to markets) of each function. As a result, firms often locate operations in function-specific clusters. The geographic fragmentation of corporate functions has had negative impacts on many traditional, multi-functional clusters, such as existed in 1960. At the same time, it offers opportunities, particularly for mid-sized and smaller areas, to develop clusters around highly specific functions that may serve a variety of industry sectors. For instance, Memphis, TN and Louisville, KY have become national airfreight distribution hubs. Relying on Internet technologies, firms such as IBM and Procter & Gamble are creating virtual clusters, cross-geography "collaboratories." However, by whatever name and whatever the changes in information technology, the benefits of the geographic agglomeration of economic activity will continue for the foreseeable future.

Third, as radically new products and services disrupt existing markets, new clusters that produce them can do likewise. For instance, the transformation in the computer industry away from mainframes and then from minicomputers in the 1970s and 1980s led to a shift in industry dominance from the Northeast to Silicon Valley and Seattle.

In the new world of global competition, the U.S. and its regions are in a perpetual state of economic transition. Industries rise and fall, transform products and processes, and move around the map. As a result, regions across the U.S. are working hard to sustain a portfolio of competitive clusters and other traded activities that provide decent jobs. In this process, some regional economies are succeeding for the moment, while others are struggling. For U.S. regions, states, and particularly the federal government, the challenge is to identify and pursue mechanisms (cluster initiatives, in particular) to enhance the competitiveness of existing clusters while taking advantage of opportunities to develop new ones.

Cluster initiatives stimulate cluster competitiveness and growth.
Cluster initiatives are formally organized efforts to promote cluster competitiveness and growth through a variety of collaborative activities among cluster participants. Examples of such collaborative efforts include:

• facilitating market development through joint market assessment, marketing, and brand-building
• encouraging relationship-building (networking) within the cluster, within the region, and with clusters in other locations
• promoting collaborative innovation: research, product and process development, and commercialization
• aiding innovation diffusion, the adoption of innovative products, processes, and practices
• supporting cluster expansion through attracting firms to the area and supporting new business development
• sponsoring education and training activities
• representing cluster interests before external organizations such as regional development partnerships, national trade associations, and local, state, and federal governments

While cluster initiatives have existed for some time, research indicates that the number of such initiatives has grown substantially around the world in a short period of time. In 2003, the Global Cluster Initiative Survey (GCIS) identified over 500 cluster initiatives in Europe, North America, Australia, and New Zealand; 72 percent of these had been created during the previous four years. That number likely has expanded significantly in the last five years. Today, the U.S. alone has several hundred distinct cluster initiatives.

A look across the breadth of cluster initiatives indicates the following:

• Clusters are present across the full array of industry sectors, including both manufacturing and services; as examples, initiatives exist in information technology, biomedical, photonics, natural resources, communications, and the arts
• They are almost always in sectors of economic importance; in other words, they tend not to be frivolously or naively chosen
• They carry out a diverse set of activities, typically in four to six of the bulleted categories above
• While the geographic boundaries of many are natural economic regions such as metro areas, others follow political boundaries, such as states
• Typically, they are industry-led, with active involvement from government and nonprofit organizations
• In terms of legal structure, they can be sponsored by existing collaborative institutions such as chambers of commerce and trade associations or created as new sole-purpose nonprofits (e.g., the North Star Alliance)
• Most have a dedicated facilitator
• The number of participants in a cluster initiative can range from a handful to over 500
• Almost every cluster initiative is unique when the combination of regional setting, industry, size, range of objectives and activities, development, structure, and financing are considered

Successful cluster initiatives:

• are industry-led
• involve state and local government decisionmakers that can be supportive
• are inclusive: they seek any and all organizations that might find benefit from participation, including startups, firms not locally owned, and firms rival to existing members
• create consensus regarding vision and roadmap (mission, objectives, how to reach them)
• encourage broad participation by members and collaboration among all types of participants in implementing the roadmap
• are well-funded initially and self-sustaining over the long term
• link with relevant external efforts, including regional economic development partnerships and cluster initiatives in other locations

As properly organized cluster
initiatives can effectively promote cluster competitiveness, it is in the nation's interest to have well-designed, well-implemented cluster initiatives in all regions. Cluster initiatives often emerge as a natural, firm-led outgrowth of cluster development. For example, the Massachusetts Biotechnology Council formed out of a local biotech softball league. However, left to the initiative of cluster participants, a good number of possible cluster initiatives never see reality because of a series of barriers to the efficient working of markets (what economists call "market failures"). First are "public good" and "free rider" problems. In certain instances, individual firms, particularly small ones, will under-invest in cluster activities because any one firm's near-term cost in time, money, and effort will outweigh the immediate benefits it receives. So no firm sees the incentive to be an early champion or organizer. Further, because all firms in the cluster benefit from the work of early champions (a "public good"), many are content to sit back and wait for others to take the lead (be a "free rider"). Consequently, if cluster firms are left to their own devices and no early organizers emerge, a sub-optimal amount of cluster activity will occur and the cluster will lose the economic benefits that collaboration could bring.

Some firms have issues of mistrust, concerns about collaborating with the competition. In certain industries in certain regions, competition among firms is so intense that a culture of secrecy and suspicion has developed that stymies mutually beneficial cooperation. Even if the will to organize a cluster initiative is present, the way may be impeded by a variety of factors. Cluster initiatives may not get off the ground because would-be organizers lack knowledge about the full array of organizations in the cluster, lack relationships or standing with key organizations (i.e., lack the power to convene), lack financial resources to organize, or are uncertain about how organizing should best proceed. They see the "transaction costs" of overcoming these barriers (that is, seeking information, building relationships, raising money) as too high to move forward.

In the face of the various barriers to self-generating cluster initiatives, public-purpose organizations such as regional development partnerships and state governments are taking an increasingly active role in getting cluster initiatives going. So, for example, the Massachusetts Technology Collaborative, a quasi-public state agency, was instrumental in initiating the Massachusetts Medical Device Industry Council (in response to an economic development report to the governor prepared by Michael Porter). And Maine's North Star Alliance was created through the effort of that state's governor. However, a number of states and regional organizations, and national governments elsewhere, have come to understand that creating single cluster initiatives in an ad hoc, "one-off" manner is an insufficient response to the problem and the opportunity. Rather, as discussed in the next section, they have created formal ongoing programs to seed and support a series of cluster initiatives. Even so, the nation's network of state and regional cluster initiatives is thin and uneven in terms of geographic and industry coverage. Consequently, the nation's ability to stay competitive and provide well-paying jobs across U.S. regions is diminished; broader, thoughtful federal action is necessary.
Aquatic Toxicology 65 (2003) 337–360

Fish tolerance to organophosphate-induced oxidative stress is dependent on the glutathione metabolism and enhanced by N-acetylcysteine

Samuel Peña-Llopis a,*, M. Dolores Ferrando b, Juan B. Peña a

a Institute of Aquaculture Torre de la Sal (CSIC), E-12595 Ribera de Cabanes, Castellón, Spain
b Department of Animal Biology (Animal Physiology), Faculty of Biology, University of Valencia, Dr. Moliner-50, E-46100 Burjassot, Valencia, Spain

Received 24 October 2002; received in revised form 5 June 2003; accepted 7 June 2003

Abstract

Dichlorvos (2,2-dichlorovinyl dimethyl phosphate, DDVP) is an organophosphorus (OP) insecticide and acaricide extensively used to treat external parasitic infections of farmed fish. In previous studies we have demonstrated the importance of the glutathione (GSH) metabolism in the resistance of the European eel (Anguilla anguilla L.) to thiocarbamate herbicides. The present work studied the effects of the antioxidant and glutathione pro-drug N-acetyl-L-cysteine (NAC) on the survival of a natural population of A. anguilla exposed to a lethal concentration of dichlorvos, focusing on the glutathione metabolism and the enzyme activities of acetylcholinesterase (AChE) and caspase-3 as biomarkers of neurotoxicity and induction of apoptosis, respectively. Fish pre-treated with NAC (1 mmol kg−1, i.p.) and exposed to 1.5 mg l−1 (the 96-h LC85) of dichlorvos for 96 h in a static-renewal system achieved an increase of the GSH content, GSH/GSSG ratio, and hepatic glutathione reductase (GR), glutathione S-transferase (GST), glutamate:cysteine ligase (GCL), and γ-glutamyl transferase (γGT) activities, which ameliorated the glutathione loss and oxidation, and enzyme inactivation, caused by the OP pesticide. Although NAC-treated fish presented a higher survival and were two-fold less likely to die within the study period of 96 h, Cox proportional hazard models showed that the hepatic GSH/GSSG ratio was the best explanatory variable related to survival. Hence, tolerance to a lethal concentration of dichlorvos can be explained by the individual capacity to maintain and improve the hepatic glutathione redox status. Impairment of the GSH/GSSG ratio can lead to excessive oxidative stress and inhibition of caspase-3-like activity, inducing cell death by necrosis, and, ultimately, resulting in the death of the organism. We therefore propose a reconsideration of the individual effective dose or individual tolerance concept postulated by Gaddum 50 years ago for the log-normal dose–response relationship. In addition, as NAC increased the tolerance to dichlorvos, it could be a potential antidote for OP poisoning, complementary to current treatments.

© 2003 Elsevier B.V. All rights reserved.

Keywords: Dichlorvos; Organophosphorus pesticide; Tolerance; Necrosis; Glutathione redox status; Biomarkers

Abbreviations: AChE, acetylcholinesterase; EAAs, excitatory amino acids; GCL, glutamate:cysteine ligase; GPx, glutathione peroxidase; GR, glutathione reductase; GSH, reduced glutathione; GSSG, oxidised glutathione or glutathione disulphide; GST, glutathione S-transferase; γGT, γ-glutamyl transferase; NAC, N-acetyl-L-cysteine; NMDA, N-methyl-D-aspartate; OP, organophosphate; ROS, reactive oxygen species; TTD, time-to-death

* Corresponding author. Tel.: +34-964-319500; fax: +34-964-319509. E-mail address: samuel@iats.csic.es (S. Peña-Llopis).

doi:10.1016/S0166-445X(03)00148-6

1. Introduction

Dichlorvos (2,2-dichlorovinyl dimethyl phosphate; DDVP) is a relatively
non-persistent organophosphate (OP) compound that undergoes fast and complete hydrolysis in most environmental compartments and is rapidly degraded by mammalian metabolism (WHO, 1989). These characteristics made it attractive for worldwide use to control insects on crops, households, and stored products, and to treat external parasitic infections of farmed fish, livestock, and domestic animals. In fact, dichlorvos has extensively been used to treat sea lice infestations (by the copepod parasites Lepeophtheirus salmonis and Caligus elongatus) in the Atlantic salmon (Salmo salar) culture.

The primary effect of dichlorvos and other OPs on vertebrate and invertebrate organisms is the inhibition of the enzyme acetylcholinesterase (AChE), which is responsible for terminating the transmission of the nerve impulse. OPs block the hydrolysis of the neurotransmitter acetylcholine (ACh) at the central and peripheral neuronal synapses, leading to excessive accumulation of ACh and activation of ACh receptors. The overstimulation of cholinergic neurones initiates a process of hyperexcitation and convulsive activity that progresses rapidly to status epilepticus, leading to profound structural brain damage, respiratory distress, coma, and ultimately the death of the organism if the muscarinic ACh receptor antagonist atropine is not rapidly administered (Shih and McDonough, 1997). Until recently, the toxic effects of OPs were believed to be largely due to the hyperactivity of the cholinergic system as a result of the accumulation of ACh at the synaptic cleft. However, recent studies have highlighted the role of glutamate receptors in the propagation and maintenance of OP-induced seizures, as well as the role of glutamate in mediating neuronal death after OP poisoning (Solberg and Belkin, 1997). A few minutes after the beginning of OP-induced seizures, other neurotransmitter systems become progressively more disrupted, releasing initially catecholamines and afterwards excitatory amino acids (EAAs), such as glutamate and aspartate, which prolong the convulsive activity. After a certain duration of convulsions (about 40 min in rats exposed to soman) the atropine treatment becomes ineffective because the seizure activity can be sustained in the absence of the initial cholinergic drive (Shih and McDonough, 1997). The high extracellular concentrations of EAAs are neurotoxic, because they are able to activate the N-methyl-D-aspartate (NMDA) receptor, leading to intracellular influx of Ca2+, which triggers the activation of proteolytic enzymes, nitric oxide synthase, and the generation of free radicals (Beal, 1995). Reactive oxygen species (ROS) such as hydrogen peroxide (H2O2) and the free radicals superoxide (O2•−) and hydroxyl radical (HO•) can react with biological macromolecules (especially the hydroxyl radical) and produce enzyme inactivation, lipid peroxidation, and DNA damage, resulting in oxidative stress. The degree of this oxidative stress is determined by the balance between ROS production and antioxidant defences. Pesticides have recently been shown to be able to induce in vitro and in vivo generation of ROS (Bagchi et al., 1995). In previous studies we demonstrated that thiocarbamate herbicides induced oxidative stress in the European eel (Anguilla anguilla L.) (Peña et al., 2000; Peña-Llopis et al., 2001). Oxidative stress effects have also been observed in the carp (Cyprinus carpio) and catfish (Ictalurus nebulosus) intoxicated with dichlorvos (Hai et al., 1997).

OPs are also capable of inducing programmed cell death (apoptosis) by multifunctional pathways (Carlson et al., 2000). Apoptosis
is a complex process characterised by cell shrinkage, chromatin condensation, and internucleosomal DNA fragmentation that allows unwanted or useless cell removal by phagocytosis, preventing an inflammatory response to the intracellular components. Caspases are a family of cysteine proteases that are present in the cytosol as inactive pro-enzymes but become activated when apoptosis is initiated, playing an essential role at various stages of it (Cohen, 1997). Caspase-3 is one of the key executioners of apoptosis, being responsible either partially or totally for the proteolytic cleavage of many structural and regulatory proteins. However, at conditions of higher stress, the cellular impairment is so high that apoptosis is suppressed. This leads to cell death by necrosis, which causes further tissue damage and an intense inflammatory response.

Dichlorvos is metabolised in the rat liver mainly via two enzymatic pathways: one, producing desmethyl-dichlorvos, is glutathione (GSH) dependent, while the other, resulting in dimethyl phosphate and dichloroacetaldehyde, is glutathione independent (Dicowsky and Morello, 1971). Hence, GSH availability can become a limiting factor for dichlorvos elimination. Glutathione is a ubiquitous thiol-containing tripeptide that is involved in numerous processes that are essential for normal biological function, such as DNA and protein synthesis (Meister and Anderson, 1983). It is predominantly present in cells in its reduced form (GSH), which is the active state. Among the several important functions of GSH, it contributes to the removal of reactive electrophiles (such as many metabolites formed by the cytochrome P-450 system) through conjugation by means of glutathione S-transferases (GSTs). GSH also scavenges ROS directly or in a reaction catalysed by glutathione peroxidase (GPx) through the oxidation of two molecules of GSH to a molecule of glutathione disulphide (GSSG). The relationship between the reduced and oxidised states of glutathione, the GSH/GSSG ratio or glutathione redox status, is then considered as an index of the cellular redox status and a biomarker of oxidative damage, because glutathione maintains the thiol-disulphide status of proteins, acting as a redox buffer.

Glutathione levels are regulated by several enzymes (Meister and Anderson, 1983), but mainly depend on the balance between the GSH synthesis rate (by glutamate:cysteine ligase, GCL), the conjugation rate (by GSTs), the oxidation rate (non-enzymatically or by GPx), and GSSG reduction to GSH (by glutathione reductase, GR). GCL is an enzyme also known as γ-glutamylcysteine synthetase, which catalyses the rate-limiting step of GSH biosynthesis, in which the amino acid L-cysteine is linked to L-glutamate. GR reduces GSSG to GSH at the expense of oxidising NADPH to NADP+, which is recycled by the pentose phosphate pathway. In extrahepatic tissues, high GSH concentrations are also maintained by γ-glutamyl transferase (γGT, traditionally known as γ-glutamyl transpeptidase), which is the only protease that can cleave intact GSH and GSH-conjugates (Curthoys and Hughey, 1979). γGT is a membrane-bound enzyme with its active site orientated on the outer surface of the cell membrane, which enables resorption of extracellular GSH catabolites from plasma (Horiuchi et al., 1978). We found previously that eels showing a higher survival upon herbicide exposure had enhanced GR activity and increased GSH and GSH/GSSG ratio in the liver (Peña-Llopis et al., 2001). Hence, a drug that could increase the GSH content and act as a reductant could improve the survival of
OP-poisoned fish. We used in this study the well-known antioxidant and free radical scavenger N-acetyl-L-cysteine (NAC), which can easily be deacetylated to L-cysteine, the limiting amino acid for glutathione biosynthesis. NAC is used clinically to treat several diseases related to oxidative stress and/or glutathione deficiency such as paracetamol (acetaminophen) overdose, HIV infection, and lung and heart diseases (Prescott et al., 1977; Gillissen and Nowak, 1998; De Rosa et al., 2000; Sochman, 2002). It has also been proven to be useful in the treatment of acute paraquat and heavy metal poisoning (Hoffer et al., 1996; Ballatori et al., 1998; Gurer and Ercal, 2000).

So far, studies of tolerance to pollutants and/or oxidative stress have principally been focused on the role of genetic variations in natural populations (e.g. Sullivan and Lydy, 1999) or the antioxidant defences of different species (Hasspieler et al., 1994; Hansen et al., 2001) and strains (Mathews and Leiter, 1999), but not on the effect of antioxidant defences on the survival of a natural population exposed lethally to a pollutant. At this point, we try to fill this gap by studying the effect of the antioxidant NAC on dichlorvos survival of a genetically diverse population of European eels by analysing endpoints of the glutathione metabolism, in addition to the use of AChE and caspase-3 activities as biomarkers of neurotoxicity and induction of apoptosis, respectively.

2. Materials and methods

2.1. Animals

Sexually undifferentiated yellow eels of the species A. anguilla (5–15 g) were used to avoid the effects of sex variation and minimise hormonal interactions in toxicity assays. These European eels were captured on the coast of Portugal (averaging 0.33 g) and cultured for about 6 months in a fish farm (Valenciana de Acuicultura S.A., Spain) free of any disease. Acclimation and selection of fish for acute toxicity tests were carried out according to OECD guidelines (1992). Before starting the experiments, animals were kept for 2 weeks in aerated and filtered dechlorinated freshwater (total hardness: 192 ± 5 mg l−1 as CaCO3; pH 7.5 ± 0.1; dissolved oxygen: 7.2 ± 0.1 mg l−1) at 24.0 ± 0.5 °C, and with a 12-h photoperiod.

2.2. Chemicals

Hexipra Solucion®, an emulsifiable concentrate containing 40% of dichlorvos, 8% of emulgators, and 47% of non-toxic solvents, composed principally of 2-propanol, was obtained from Laboratorios Hipra S.A. (Girona, Spain). 2-Vinylpyridine was acquired from Aldrich. NADPH was purchased from Applichem (Darmstadt, Germany). NAC and all other reagents were obtained from Sigma Chemical Co. (St. Louis, MO, USA) unless mentioned otherwise.

2.3. N-Acetylcysteine supplementation assay

Fish received a single intraperitoneal (i.p.) injection of either 1 mmol kg−1 NAC or its vehicle (physiological saline). This amount of NAC was used in order to induce GSH synthesis beyond physiological levels.
Five animals were removed from the water at 3, 12, 24, 48, 72, and 96 h after the injection, and anaesthetised in ice instead of using a chemical anaesthetic to prevent interference with the glutathione metabolism (Brigelius et al., 1982). They were then weighed, their lengths measured, and they were euthanised by decapitation. The livers and muscles were excised, weighed, and stored frozen at −80 °C until biochemical determinations.

2.4. Time-to-death (TTD) static-renewal tests

Mortality within 96 h was the main end point of this study. In order to ensure a low percentage of survivors at 96 h in the TTD tests, preliminary acute toxicity tests were performed in accordance with OECD guidelines (1992) to estimate the lethal concentration that causes 85% mortality at 96 h (the 96-h LC85). Fish were exposed to different nominal concentrations of dichlorvos at 24.0 ± 0.5 °C in a static-renewal system, where water and pesticide were completely replaced every 24 h in 40-l glass aquaria. These concentration–effect experiments indicated that the median lethal concentration at 96 h (96-h LC50) for dichlorvos in the European eel was 0.852 mg l−1 (95% confidence interval (CI), 0.735–0.957), and the 96-h LC85 was 1.498 mg l−1 (95% CI, 1.378–1.774). This latter concentration (1.5 mg l−1) was then used in the TTD tests, where a mortality of 85% was expected. This nominal concentration of dichlorvos includes 1.7 mg l−1 of 2-propanol. Although this aliphatic alcohol can potentiate the toxicity of carbon tetrachloride (Traiger and Plaa, 1971), the 96-h LC50 of this solvent for freshwater fish ranges from 4200 to 11,130 mg l−1 (WHO, 1990). Therefore, the toxicity of Hexipra Solucion® observed was virtually due exclusively to dichlorvos.

One hundred randomly selected eels were separated into two groups. Fifty ice-anaesthetised fish were injected i.p. with 1 mmol kg−1 NAC, whereas the other 50 were injected with the same amount of saline only; the fish were assigned to four 40-l tanks, receiving 25 fish each. Fish were allowed to recover in clean water for 3 h because the injection time lasted 10 min from the first animal injected to the last. After that, fish were exposed to 1.5 mg l−1 of dichlorvos for 96 h under semi-static conditions as mentioned before, where water and pesticide were completely replaced once a day. Water temperature was recorded every 3 h and maintained at 24.0 ± 0.5 °C in all tanks during the experiment. Fish were continually inspected at 3-h intervals, but during the first 24 h they were checked every 90 min because a higher mortality was expected.
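The probit-type concentration–effect fit used above to estimate the 96-h LC50 and LC85 can be sketched in Python as follows. This is a minimal illustration, not the SPSS Probit Analysis procedure the authors used: mortality is modelled as a normal CDF of log10(concentration), and the concentration–mortality pairs below are invented for the example (the paper's own estimates are 0.852 and 1.498 mg l−1).

# Sketch of a probit-style dose-response fit: mortality fraction as a
# normal CDF of log10(concentration). Data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

conc = np.array([0.4, 0.6, 0.9, 1.2, 1.8])       # mg/l, illustrative
mort = np.array([0.10, 0.25, 0.55, 0.75, 0.92])  # observed fractions

model = lambda c, mu, sigma: norm.cdf((np.log10(c) - mu) / sigma)
(mu, sigma), _ = curve_fit(model, conc, mort, p0=[0.0, 0.3])

lc50 = 10 ** mu                                  # median lethal concentration
lc85 = 10 ** (mu + sigma * norm.ppf(0.85))       # 85% mortality concentration
print(f"fitted 96-h LC50 ~ {lc50:.3f} mg/l, LC85 ~ {lc85:.3f} mg/l")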
Dead animals were immediately removed, the TTD noted, and they were weighed, their length measured, and the livers and muscles excised, weighed, and stored frozen at −80 °C. At 96 h, survivors were anaesthetised with ice and processed as previously described. The same TTD experiment was replicated again in order to have 100 NAC-treated and 100 non-treated fish, and thus gain statistical power.

2.5. Glutathione determination

Tissue samples were homogenised with 5 volumes of ice-cold 5% 5-sulfosalicylic acid per gram of wet-weight tissue, and further processed by sonication (Vibra-Cell, Sonics & Materials Inc., Danbury, CT, USA). Homogenates were then centrifuged at 20,000 × g for 20 min at 4 °C. Total glutathione content (tGSx) and oxidised glutathione (GSSG) were determined in supernatant fractions with a sensitive and specific assay using a recycling reaction of GSH with 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) in the presence of excess GR according to Baker et al. (1990) in a microplate reader (Model 3550, Bio-Rad Laboratories, Richmond, CA, USA) as previously described (Peña-Llopis et al., 2001). Glutathione concentrations were expressed as nmol of GSH equivalents (GSx) per mg of protein, where $\mathrm{GSx} = [\mathrm{GSH}] + 2 \times [\mathrm{GSSG}]$. GSH was calculated by subtracting GSSG levels from the tGSx levels determined. The GSH/GSSG ratio was expressed as number of molecules (not moles):

$$\frac{\mathrm{GSH}}{\mathrm{GSSG}} = \frac{\mathrm{tGSx} - \mathrm{GSSG}}{\mathrm{GSSG}/2}$$

2.6. Kinetic enzyme assays

Liver and muscle tissues were homogenised with 5 and 4 volumes, respectively, of Henriksson stabilising medium (Henriksson et al., 1986), which contained 50% glycerol, 20 mM phosphate buffer pH 7.4, 0.5 mM EDTA, and 0.02% defatted bovine serum albumin. β-Mercaptoethanol was not included because it interferes with the GR assay. Homogenates were centrifuged at 20,000 × g for 20 min at 4 °C, and the resulting supernatants were diluted 5- or 10-fold with buffer and assayed rapidly for enzyme activities.

2.6.1. AChE (EC 3.1.1.7) activity

AChE activity was determined at 415 nm with acetylthiocholine as substrate in accordance with an adaptation of the Ellman method (Ellman et al., 1961) to microtiter plates by Doctor et al. (1987), but with 0.1 M phosphate buffer, pH 7.27, and 1 mM EDTA as recommended by Riddles et al. (1979). Eel cholinesterase activity detected in muscle was considered as true AChE, as previously characterised (Lundin, 1962; Ferenczy et al., 1997).

2.6.2. GR (EC 1.6.4.2) activity

The method of Cribb et al. (1989) was used to assay the GR activity through the increase of absorbance at 415 nm with a reference wavelength at 595 nm. Final concentrations of 0.075 mM DTNB, 0.1 mM NADPH, and 1 mM GSSG were used in accordance with Smith et al. (1988).

2.6.3. GST (EC 2.5.1.18) activity

GST activity was measured through the conjugation of GSH with 1-chloro-2,4-dinitrobenzene (CDNB) according to Habig et al. (1974). The assay mixture contained 100 mM potassium phosphate buffer, pH 6.5, 1 mM CDNB in ethanol, and 1 mM GSH. The formation of the adduct of CDNB, S-2,4-dinitrophenyl glutathione, was monitored by measuring the rate of increase in absorbance at 340 nm with a Multiskan Ascent microplate reader (Thermo Labsystems, Helsinki, Finland).

2.6.4. γGT (EC 2.3.2.2) activity

γGT activity was determined by the method of Silber et al. (1986). The rate of cleavage of the substrate analogue γ-glutamyl-p-nitroanilide to form p-nitroaniline (pNA) by transfer of a glutamyl moiety to glycylglycine was monitored at 405 nm for at least 10 min.

2.6.5. GCL (EC 6.3.2.2) activity

The GCL activity assay was adapted to microtiter plates from the indirect method of Seelig and Meister (1985), which utilises the coupled reaction of
2.6. Kinetic enzyme assays

Liver and muscle tissues were homogenised with 5 and 4 volumes, respectively, of Henriksson stabilising medium (Henriksson et al., 1986), which contained 50% glycerol, 20 mM phosphate buffer pH 7.4, 0.5 mM EDTA, and 0.02% defatted bovine serum albumin. β-Mercaptoethanol was not included because it interferes with the GR assay. Homogenates were centrifuged at 20,000 × g for 20 min at 4 °C, and the resulting supernatants were diluted 5- or 10-fold with buffer and assayed promptly for enzyme activities.

2.6.1. AChE (EC 3.1.1.7) activity

AChE activity was determined at 415 nm with acetylthiocholine as substrate, following the adaptation of the Ellman method (Ellman et al., 1961) to microtiter plates by Doctor et al. (1987), but with 0.1 M phosphate buffer, pH 7.27, and 1 mM EDTA, as recommended by Riddles et al. (1979). The eel cholinesterase activity detected in muscle was considered true AChE, as previously characterised (Lundin, 1962; Ferenczy et al., 1997).

2.6.2. GR (EC 1.6.4.2) activity

The method of Cribb et al. (1989) was used to assay GR activity through the increase in absorbance at 415 nm, with a reference wavelength of 595 nm. Final concentrations of 0.075 mM DTNB, 0.1 mM NADPH, and 1 mM GSSG were used, in accordance with Smith et al. (1988).

2.6.3. GST (EC 2.5.1.18) activity

GST activity was measured through the conjugation of GSH with 1-chloro-2,4-dinitrobenzene (CDNB) according to Habig et al. (1974). The assay mixture contained 100 mM potassium phosphate buffer, pH 6.5, 1 mM CDNB in ethanol, and 1 mM GSH. The formation of the CDNB adduct, S-2,4-dinitrophenyl glutathione, was monitored by measuring the rate of increase in absorbance at 340 nm with a Multiskan Ascent microplate reader (Thermo Labsystems, Helsinki, Finland).

2.6.4. γGT (EC 2.3.2.2) activity

γGT activity was determined by the method of Silber et al. (1986). The rate of cleavage of the substrate analogue γ-glutamyl-p-nitroanilide to form p-nitroaniline (pNA), by transfer of a glutamyl moiety to glycylglycine, was monitored at 405 nm for at least 10 min.

2.6.5. GCL (EC 6.3.2.2) activity

The GCL activity assay was adapted to microtiter plates from the indirect method of Seelig and Meister (1985), which utilises the coupled reactions of pyruvate kinase (PK) and lactate dehydrogenase (LDH) to determine the rate of ADP formation by GCL through the oxidation of NADH. Each well contained 0.1 M Tris–HCl buffer, pH 8, 150 mM KCl, 2 mM EDTA, 20 mM MgCl2, 5 mM ATP, 2 mM phosphoenolpyruvate, 10 mM L-glutamate, 10 mM L-α-aminobutyrate, 0.2 mM NADH, 7 U ml⁻¹ PK, and 10 U ml⁻¹ LDH. Enzyme activity was evaluated by following the decrease in NADH absorbance at 340 nm at 25 °C with the Multiskan Ascent microplate reader.

A calibration curve of known activities of purified enzymes was included on every 96-well plate to avoid miscalculations resulting from an ill-defined path length. AChE (type V) from electric eel, GR (type III) from baker's yeast, GST from equine liver, and γGT (type I) from bovine kidney were used as standards, whose activities were determined in quartz cuvettes using a Hitachi U-2001 UV-Vis spectrophotometer (Hitachi Instruments Inc., USA). A molar absorption coefficient at 412 nm (ε412) of 14,150 M⁻¹ cm⁻¹ was used for the dianion of DTNB (TNB²⁻), as determined by Riddles et al. (1979). As no purified GCL enzyme was available, several samples were used as standards and their activities were validated spectrophotometrically. The reliability of the AChE and γGT assays was verified with the standard ACCUTROL™ Normal. Specific enzyme activities were expressed as nmol of substrate hydrolysed per min per milligram of protein (mU mg⁻¹ prot).
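For readers who prefer to see the underlying arithmetic, the sketch below converts an absorbance rate into a specific activity via the Beer–Lambert law, using the ε412 given above. The well volume, sample volume, path length, and protein concentration are hypothetical; the calibration-standard approach described above exists precisely because the microplate path length is ill-defined, so this direct calculation is only indicative.

```python
# Specific activity (mU per mg protein) from a DTNB-based microplate assay.
# All sample-specific numbers are hypothetical; EPSILON_TNB is the molar
# absorption coefficient of TNB2- at 412 nm (Riddles et al., 1979).
EPSILON_TNB = 14150.0  # M-1 cm-1

def specific_activity(dA_per_min: float, path_cm: float, well_ul: float,
                      sample_ul: float, protein_mg_ml: float) -> float:
    rate_M = dA_per_min / (EPSILON_TNB * path_cm)  # mol l-1 min-1 of TNB
    nmol_min = rate_M * 1e9 * well_ul * 1e-6       # nmol TNB/min in the well
    mg_prot = protein_mg_ml * sample_ul * 1e-3     # mg protein in the well
    return nmol_min / mg_prot                      # mU mg-1 prot

# e.g. dA/min = 0.05, 0.6 cm path, 200 ul well, 20 ul sample, 2 mg/ml protein
print(f"{specific_activity(0.05, 0.6, 200, 20, 2.0):.1f} mU/mg prot")
```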
2.7. Caspase-3 assay

Caspase-3 activity was measured in 96-well plates using the Sigma caspase-3 colorimetric assay kit according to the manufacturer's instructions. The hydrolysis of the peptide substrate acetyl-Asp-Glu-Val-Asp p-nitroanilide (Ac-DEVD-pNA) to release pNA was monitored at 405 nm and quantified using a pNA calibration curve, whose concentrations were determined with a spectrophotometer. Sealed, light-protected microplates were incubated at 25 °C for several days in order to detect extremely low enzyme activities. Pseudo-zero-order kinetics was verified by plotting absorbance against time for every well. Recombinant human caspase-3 was used as a positive control to validate the results. Specific enzyme activity was expressed as pmol of Ac-DEVD-pNA hydrolysed per min per mg protein (U mg⁻¹ prot). The DEVDase activity measured was considered caspase-3-like, because caspase-7 is another key executioner of apoptosis with similar function and substrate specificity to caspase-3 (Fernandes-Alnemri et al., 1995).

2.8. Protein determination

Protein content was determined with the Bio-Rad Protein Assay kit (Bio-Rad Laboratories GmbH, Munich, Germany), based on the Bradford dye-binding procedure, using bovine serum albumin as standard.

2.9. Statistics

The 96-h lethal concentrations (LC50 and LC85) were determined with the Probit Analysis procedure of the SPSS 10.0 statistical software package (SPSS Inc., Chicago, IL, USA), which was used for all other statistical analyses. Survival curves were constructed using the Kaplan–Meier method (Kaplan and Meier, 1958) and compared by the log-rank χ² statistic. Two-factor ANOVA with the type III sum-of-squares method was used to investigate the effects of pre-treatment and pesticide exposure, and their interaction, on the studied variables. The time dependence of variables after NAC injection in controls was also tested by two-way ANOVA. A priori contrasts between selected single levels of factors were made to compare means. Variables with heterogeneity of variances, according to the Levene test, were appropriately transformed. Pearson correlation coefficients were calculated among all studied parameters of dichlorvos-exposed eels in order to measure the strength of the linear association between pairs of variables. These relationships were also tested after removing the effect of TTD by means of partial correlations. The variables were checked for normality with the Kolmogorov–Smirnov test with Lilliefors significance correction, and data not normally distributed were appropriately transformed. Sequential Bonferroni correction was applied to multiple significance tests to avoid spurious significant differences (Rice, 1989).

As standard ANOVA-type and common multivariate regression methods cannot be used for survival data, because of the presence of censored observations and the skewing of the data (Piegorsch and Bailer, 1997), the Cox proportional hazards regression model (Cox, 1972) was used to determine the relationship between dichlorvos mortality and the studied variables. Unadjusted hazard ratios were obtained from univariate Cox proportional hazards models. Adjusted hazard ratios were obtained from the significant explanatory variables determined by a multivariate stepwise forward selection procedure over all covariates, based on conditional parameter estimates. Limits of P ≤ 0.05 and P > 0.10 were set for variable inclusion and exclusion, respectively. These covariates were then adjusted for the effects of length and weight. The assumption of proportional hazards was checked by visual inspection of smoothed plots of scaled Schoenfeld residuals (Schoenfeld, 1982) versus survival time, in accordance with Hess (1995), and of plots of martingale residuals against the covariates.
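A sketch of this survival workflow using the Python lifelines package (an assumption; the original analyses were run in SPSS) might look as follows. The file name and column names (ttd_h, event, group, gsh_liver, ache_muscle) are hypothetical stand-ins for the study's variables, with event = 1 for fish that died and 0 for fish censored at 96 h.

```python
# Kaplan-Meier curves, log-rank comparison, and Cox regression for TTD data.
# The DataFrame and its columns are hypothetical stand-ins; lifelines
# replaces the SPSS procedures used in the original study.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ttd_eels.csv")  # hypothetical file: ttd_h, event, group, ...
nac = df[df.group == "NAC"]
sal = df[df.group == "saline"]

# Kaplan-Meier survival estimates per treatment, plotted on shared axes
km = KaplanMeierFitter()
ax = km.fit(nac.ttd_h, nac.event, label="NAC").plot_survival_function()
km.fit(sal.ttd_h, sal.event, label="saline").plot_survival_function(ax=ax)

# Log-rank comparison of the two survival curves
lr = logrank_test(nac.ttd_h, sal.ttd_h,
                  event_observed_A=nac.event, event_observed_B=sal.event)
print(f"log-rank chi2 = {lr.test_statistic:.1f}, P = {lr.p_value:.4f}")

# Cox proportional hazards model on selected covariates
covs = df[["ttd_h", "event", "gsh_liver", "ache_muscle", "length", "weight"]]
cph = CoxPHFitter()
cph.fit(covs, duration_col="ttd_h", event_col="event")
cph.print_summary()          # hazard ratios with 95% CIs
cph.check_assumptions(covs)  # scaled-Schoenfeld-residual diagnostics
```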
3. Results

3.1. Dichlorvos mortality

Mortality observed upon exposure to the 96-h LC85 of dichlorvos was 91% for eels pre-treated with saline (92 and 90% in the first and second replicates of the experiment, respectively), whereas it was 85% in NAC pre-treated fish (86 and 84% in the first and second replicates, respectively). The replicates showed no difference in survival curves when compared stratifying for treatment (log-rank χ² = 0.6, P = 0.43). Aquaria likewise did not affect survival in either saline-treated fish (log-rank χ² = 0.6, P = 0.90) or NAC-treated fish (log-rank χ² = 1.8, P = 0.60), so the data from replicates and aquaria were pooled. Overall, only nine of the 100 fish injected with the vehicle survived the 96-h study period, with a mean survival of 25 h (95% CI, 20–30), while 15 of the 100 fish pre-treated with NAC survived, with a mean survival of 34 h (95% CI, 28–41). Therefore, eels pre-treated with 1 mmol kg⁻¹ of NAC presented a 66.7% higher survival than non-treated fish (log-rank χ² = 7.8, P < 0.005; Fig. 1), a difference that was most evident within the first 24 h.

[Fig. 1. Kaplan–Meier estimates of survival of eels injected i.p. either with 1 mmol kg⁻¹ NAC or its vehicle (saline) and, after 3 h, exposed to 1.5 mg l⁻¹ (the 96-h LC85) of dichlorvos. Censored observations at the end of the observation time are represented by crosses.]

3.2. Effect of NAC and/or dichlorvos on biochemical parameters

Essentially, exposure of the fish to the 96-h LC85 of dichlorvos resulted in a decrease of the hepatic and muscular GSH levels (P < 0.001; Table 1), but an increase in muscular GSSG (P < 0.01) that lowered the GSH/GSSG ratio in the muscle (P < 0.001). The glutathione redox status was also decreased in the liver (P < 0.001). The activities of hepatic GR and GST, hepatic and muscular γGT, hepatic GCL and caspase-3-like, and especially muscular AChE, were also diminished (P < 0.001), whereas GST activity increased in the muscle (P < 0.001). Conversely, NAC treatment produced an increase in the GSH content and GSH/GSSG ratio in the liver (P < 0.001) and muscle (P < 0.001 and 0.01, respectively), in addition to an enhancement of hepatic GR, GST, GCL (P < 0.001) and γGT (P < 0.01) activities, and of muscular GST (P < 0.05). Interactions of treatment and dichlorvos exposure were found only for the hepatic and muscular γGT activities (P < 0.05).

The single i.p. injection of 1 mmol kg⁻¹ NAC increased the levels of hepatic and muscular GSH by 39 and 14% (P < 0.001 and 0.05, respectively), the hepatic GSH/GSSG ratio by 53% (P < 0.001), hepatic GR activity by 12%, hepatic and muscular GST activity by 18 and 16% (P < 0.01 and 0.05, respectively), and hepatic GCL activity by 31% (P < 0.001). However, the GSH content in the liver was time-dependent (Fig. 2B). Three hours after the injection, GSH rose (P < 0.05) and reached a two-fold increase at 12 h (P < 0.001), returning to baseline after 48 h. The 12-h time point after the injection corresponded to 9 h after dichlorvos exposure and the beginning of fish mortalities (Fig. 1). The administration of NAC also enhanced the hepatic GSH/GSSG ratio by 134% (Fig. 3B) and GR activity by 26% (Fig. 3D) during the first 3 h (P < 0.001 and 0.01, respectively), but these variables did not differ afterwards, except for GR activity at 96 h (P < 0.01).

The decreases in glutathione levels and enzyme activities found in dichlorvos-exposed eels were ameliorated by NAC pre-treatment (Table 1). Hepatic and muscular GSH levels of NAC-treated animals were 59 and 16% higher (P < 0.001 and 0.01, respectively), as can be observed in Figs. 2A and 4A, which resulted