Foreign Literature Translation (Chinese)
Strengths
All these private sector banks hold a strong position in CRM; they have professional, dedicated and well-trained employees.
Private sector banks offer a wide range of banking and financial products and services to corporate and retail customers through a variety of delivery channels such as ATMs, Internet banking and mobile banking.
These areas include investment banking, life and non-life insurance, venture capital and asset management, and retail loans such as home loans, personal loans, educational loans, car loans, consumer durable loans and credit cards.
Private sector banks focus on customizing products designed to meet the specific needs of customers.
Chinese-English Bilingual Foreign Literature Translation

Original text: Safety Assurance for Challenging Geotechnical Civil Engineering Constructions in Urban Areas

Abstract
Safety is the most important aspect during the design, construction and service time of any structure, especially for challenging projects such as high-rise buildings and tunnels in urban areas. A high-level design considering the soil-structure interaction, based on a qualified soil investigation, is required for a safe and optimised design. Due to the complexity of geotechnical constructions, the safety assurance guaranteed by the 4-eye-principle is essential. The 4-eye-principle consists of an independent peer review by publicly certified experts combined with the observational method. The paper presents the fundamental aspects of safety assurance by the 4-eye-principle. The application is explained with several examples, such as deep excavations, complex foundation systems for high-rise buildings and tunnel constructions in urban areas. The experiences made in the planning, design and construction phases are explained, and recommendations are given for new inner-urban projects.
Key words: Natural Asset; Financial Value; Neural Network

1. Introduction
A safe design and construction of challenging projects in urban areas is based on the following main aspects:
- qualified experts for planning, design and construction;
- interaction between architects, structural engineers and geotechnical engineers;
- adequate soil investigation;
- design of deep foundation systems using the Finite Element Method (FEM) in combination with enhanced in-situ load tests for calibrating the soil parameters used in the numerical simulations;
- quality assurance by an independent peer review process and the observational method (4-eye-principle).
These aspects will be explained using large construction projects located in difficult soil and groundwater conditions.

2. The 4-Eye-Principle
The basis for safety assurance is the 4-eye-principle.
This 4-eye-principle is a process of independent peer review, as shown in Figure 1. It consists of three parts. The investor, the experts for planning and design, and the construction company belong to the first division. Planning and design are done according to the requirements of the investor, and all relevant documents needed to obtain the building permission are prepared. The building authorities are the second part and are responsible for the building permission, which is given to the investor. The third division consists of the publicly certified experts. They are appointed by the building authorities but work as independent experts. They are responsible for the technical supervision of the planning, design and construction.

In order to obtain the license as a publicly certified expert for geotechnical engineering from the building authorities, intensive studies of geotechnical engineering at university and broad experience in geotechnical engineering, with special knowledge of the soil-structure interaction, have to be proven. The independent peer review by publicly certified experts for geotechnical engineering makes sure that all information, including the results of the soil investigation consisting of laboratory and field tests and the boundary conditions defined for the geotechnical design, is complete and correct. In the case of a defect or collapse, the publicly certified expert for geotechnical engineering can be involved as an independent expert to find out the reasons for the defect or damage and to develop a concept for stabilization and reconstruction [1]. For all difficult projects an independent peer review is essential for the successful realization of the project.

3. Observational Method
The observational method is applicable to projects with difficult boundary conditions, for verification of the design during the construction time and, if necessary, during service time.
For example, in the European standard Eurocode 7 (EC 7), the effect and the boundary conditions of the observational method are defined. The application of the observational method is recommended for the following types of construction projects [2]:
- very complicated/complex projects;
- projects with a distinctive soil-structure interaction, e.g. mixed shallow and deep foundations, retaining walls for deep excavations, Combined Pile-Raft Foundations (CPRFs);
- projects with a high and variable water pressure;
- complex interaction situations consisting of ground, excavation and neighbouring buildings and structures;
- projects with pore-water pressures reducing the stability;
- projects on slopes.
The observational method is always a combination of the common geotechnical investigations before and during the construction phase together with the theoretical modeling and a plan of contingency actions (Figure 2). Monitoring alone, to ensure the stability and the serviceability of the structure, is not sufficient and, according to the standardization, not permitted for this purpose. Overall, the observational method is an institutionalized controlling instrument to verify the soil and rock mechanical modeling [3,4].

The identification of all potential failure mechanisms is essential for defining the measurement concept. The concept has to be designed in such a way that all these mechanisms can be observed. The measurements need to be of adequate accuracy to allow the identification of critical tendencies. The required accuracy as well as the boundary values need to be identified within the design phase of the observational method. Contingency actions need to be planned in the design phase of the observational method and depend on the ductility of the systems. The observational method must not be seen as a potential alternative to a comprehensive soil investigation campaign; such a campaign is in any case of essential importance.
Additionally, the observational method is a tool of quality assurance and allows the verification of the parameters and calculations applied in the design phase. The observational method helps to achieve an economic and safe construction [5].

4. In-Situ Load Test
In project- and site-related soil investigations with core drillings and laboratory tests, the soil parameters are determined. Laboratory tests are important and essential for the initial definition of the soil-mechanical properties of the soil layers, but usually not sufficient for an entire and realistic capture of the complex conditions caused by the interaction of subsoil and construction [6]. In order to reliably determine the ultimate bearing capacity of piles, load tests need to be carried out [7]. For pile load tests, often very high counterweights or strong anchor systems are necessary. By using the Osterberg method, high loads can be reached without installing anchors or counterweights: hydraulic jacks induce the load in the pile, using the pile itself partly as abutment. The results of the field tests allow a calibration of the numerical simulations. The principle scheme of pile load tests is shown in Figure 3.

5. Examples for Engineering Practice
5.1. Classic Pile Foundation for a High-Rise Building in Frankfurt Clay and Limestone
In the downtown of Frankfurt am Main, Germany, on a construction site of 17,400 m2, the high-rise building project "PalaisQuartier" has been realized (Figure 4). The construction was finished in 2010. The complex consists of several structures with a total of 180,000 m2 of floor space, thereof 60,000 m2 underground (Figure 5). The project includes the historic building "Thurn-und-Taxis-Palais", whose facade has been preserved (Unit A). The office building (Unit B), which is the highest building of the project with a height of 136 m, has 34 floors, each with a floor space of 1,340 m2. The hotel building (Unit C) has a height of 99 m with 24 upper floors.
The retail area (Unit D) runs along the total length of the eastern part of the site and consists of eight upper floors with a total height of 43 m. The underground parking garage with five floors spans the complete project area. With an 8 m high first sublevel, partially with a mezzanine floor, and four more sublevels, the foundation depth results in 22 m below ground level. Thereby the excavation bottom is at 80 m above sea level (msl). A total of 302 foundation piles (diameter up to 1.86 m, length up to 27 m) reach down to levels of 53.2 m to 70.1 m above sea level, depending on the structural requirements. The pile heads of the 543 retaining-wall piles (diameter 1.5 m, length up to 38 m) were located between 94.1 m and 99.6 m above sea level; the pile bases were between 59.8 m and 73.4 m above sea level, depending on the structural requirements. As shown in the sectional view (Figure 6), the upper part of the piles is in the Frankfurt Clay and the base of the piles is set in the rocky Frankfurt Limestone.

Regarding the large number of piles and the high pile loads, a pile load test has been carried out for optimization of the classic pile foundation. Osterberg cells (O-cells) have been installed at two levels in order to assess the influence of pile shaft grouting on the limit skin friction of the piles in the Frankfurt Limestone (Figure 6). The test pile, with a total length of 12.9 m and a diameter of 1.68 m, consists of three segments and has been installed in the Frankfurt Limestone layer 31.7 m below ground level. The upper pile segment above the upper cell level and the middle pile segment between the two cell levels can be tested independently. In the first phase of the test, the upper part was loaded by using the middle and the lower part as abutment. A limit of 24 MN could be reached (Figure 7). The upper segment was lifted about 1.5 cm; the settlement of the middle and lower part was 1.0 cm.
The mobilized shaft friction was about 830 kN/m2. Subsequently, the upper pile segment was uncoupled by discharging the upper cell level. In the second test phase, the middle pile segment was loaded by using the lower segment as abutment. The limit load of the middle segment with shaft grouting was 27.5 MN (Figure 7). The skin friction was 1,040 kN/m2, which is 24% higher than without shaft grouting. Based on the results of the pile load test using O-cells, the majority of the 290 foundation piles were made by applying shaft grouting. Due to the pile load test, the total pile length was reduced significantly.

5.2. CPRF for a High-Rise Building in Clay Marl
In the scope of the project Mirax Plaza in Kiev, Ukraine, two high-rise buildings, each of them 192 m (46 storeys) high, a shopping and entertainment mall and an underground parking garage are under construction (Figure 8). The area of the project is about 294,000 m2 and cuts a 30 m high natural slope. The geotechnical investigations have been executed to a depth of 70 m. The soil conditions at the construction site are as follows:
- fill to a depth of 2 m to 3 m;
- quaternary silty sand and sandy silt with a thickness of 5 m to 10 m;
- tertiary silt and sand (Charkow and Poltaw formation) with a thickness of 0 m to 24 m;
- tertiary clayey silt and clay marl of the Kiev and Butschak formation with a thickness of about 20 m;
- tertiary fine sand of the Butschak formation down to the investigation depth.
The groundwater level is at a depth of about 2 m below the ground surface. The soil conditions and a cross section of the project are shown in Figure 9. For verification of the shaft and base resistance of the deep foundation elements and for calibration of the numerical simulations, pile load tests have been carried out on the construction yard. The piles had a diameter of 0.82 m and a length of about 10 m to 44 m. Using the results of the load tests, the back analysis for verification of the FEM simulations was done.
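The mobilized shaft friction values reported for the load tests above follow from the measured limit load and the loaded shaft geometry. The sketch below is a minimal illustration of that arithmetic, assuming a cylindrical shaft and a hypothetical segment length; it is not the authors' actual calculation.

```python
import math

def mobilized_shaft_friction(load_kn, diameter_m, segment_length_m):
    """Average mobilized shaft friction q_s = Q / (pi * d * L), in kN/m2.

    Simplifying assumption: the whole limit load is carried by skin
    friction on a cylindrical shaft segment, as in an O-cell test where
    the segment base is uncoupled.
    """
    shaft_area = math.pi * diameter_m * segment_length_m  # shaft surface, m2
    return load_kn / shaft_area

# Hypothetical check with the Frankfurt test-pile diameter of 1.68 m and
# an assumed 5.5 m long segment loaded to its 24 MN limit; the result is
# on the order of the ~830 kN/m2 quoted in the text.
q_s = mobilized_shaft_friction(24_000, 1.68, 5.5)
print(round(q_s))
```

The same formula, inverted, gives the shaft area needed for a target capacity, which is one way such load-test results feed back into shortening the production piles.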
The soil properties in accordance with the results of the back analysis were partly three times higher than indicated in the geotechnical report. Figure 10 shows the results of load test No. 2 and the numerical back analysis. Measurement and calculation show a good accordance. The obtained results of the pile load tests and of the executed back analysis were applied in 3-dimensional FEM simulations of the foundation for Tower A, taking advantage of the symmetry of the footprint of the building. The overall load of Tower A is about 2,200 MN and the area of the foundation about 2,000 m2 (Figure 11).

The foundation design considers a CPRF with 64 barrettes of 33 m length and a cross section of 2.8 m × 0.8 m. The raft of 3 m thickness is located in the Kiev Clay Marl at about 10 m depth below the ground surface. The barrettes penetrate the layer of Kiev Clay Marl, reaching the Butschak Sands. The calculated loads on the barrettes were in the range of 22.1 MN to 44.5 MN. The load on the outer barrettes was about 41.2 MN to 44.5 MN, which significantly exceeds the loads on the inner barrettes with a maximum value of 30.7 MN. This behavior is typical for a CPRF: the outer deep foundation elements take more load because of their higher stiffness due to the higher volume of the activated soil. The CPRF coefficient is αCPRF = 0.88. Maximum settlements of about 12 cm were calculated for the settlement-relevant load of 85% of the total design load. The calculated pressure under the foundation raft does not exceed 200 kN/m2 in most areas; at the raft edge the pressure reaches 400 kN/m2. The calculated base pressure of the outer barrettes has an average of 5,100 kN/m2, and that of the inner barrettes an average of 4,130 kN/m2. The mobilized shaft resistance increases with depth, reaching 180 kN/m2 for outer barrettes and 150 kN/m2 for inner barrettes. During the construction of Mirax Plaza, the observational method according to EC 7 is applied.
Especially the distribution of the loads between the barrettes and the raft is monitored. For this reason, three earth pressure devices were installed under the raft, and two barrettes (the most heavily loaded outer barrette and an averagely loaded inner barrette) were instrumented over their length. In the scope of the project Mirax Plaza, new allowable shaft resistance and base resistance values were defined for typical soil layers in Kiev. This unique experience will be used for skyscrapers of the new generation in Ukraine. The CPRF of the high-rise building project Mirax Plaza represents the first authorized CPRF in Ukraine. Using advanced optimization approaches and taking advantage of the positive effect of the CPRF, the number of barrettes could be reduced from 120 barrettes of 40 m length to 64 barrettes of 33 m length. The foundation optimization leads to a considerable decrease of the utilized resources (cement, aggregates, water, energy, etc.) and cost savings of about 3.3 million US$.
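The CPRF coefficient quoted for Tower A expresses the share of the total load carried by the deep foundation elements (barrettes), with the remainder taken by the raft. A minimal sketch of that bookkeeping, using invented barrette loads chosen only so the ratio matches the order of magnitude in the text, not the actual measurement data:

```python
def cprf_coefficient(pile_loads_mn, total_load_mn):
    """alpha_CPRF = (sum of loads carried by piles/barrettes) / total load.

    alpha = 1.0 corresponds to a pure pile group, alpha = 0.0 to a pure
    raft foundation; the raft carries the remaining (1 - alpha) share.
    """
    return sum(pile_loads_mn) / total_load_mn

# Invented example: 64 barrettes carrying 30.25 MN each on average
# against a 2,200 MN total load gives alpha = 0.88, i.e. the raft
# carries the remaining 12% of the load.
loads = [30.25] * 64
alpha = cprf_coefficient(loads, 2200)
print(alpha)
```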
Part-time Work: Foreign Literature Translation (Chinese-English)

Original English text: Part-time employment and worker health in the United States
Young Cho

Abstract
A growing body of research has highlighted the consequences of part-time employment for workers' health and well-being. However, these studies have yielded inconsistent results and relied on cross-sectional data. In addition, relatively little empirical research has explored whether the effect of working part-time on health varies by gender, particularly in the United States. Using longitudinal data from three waves of the General Social Survey panel (2010–2012–2014), our study examined the association between part-time employment and perceived health among U.S. employees, and whether this association varied by gender. The results showed that part-time workers were less likely to report poor self-rated health than full-time workers, especially among males. The pattern of results was consistent across empirical approaches, including generalized estimating equations and random effects models, with an extensive set of covariates. Taken together, these findings suggest that for U.S. employees, working part-time appears to be beneficial, or at least not detrimental, to perceived health, which warrants further investigation.

Keywords: Part-time employment; Self-rated health; American workers; General Social Survey panel; Gender differences

Introduction
In most industrialized countries, the prevalence of part-time employment has significantly increased over the last few decades. According to recent data from the Organisation for Economic Co-operation and Development (2016), the average prevalence of part-time employment across all OECD countries was 16.7% in 2014, ranging from the highest in the Netherlands (38.5%) to the lowest in the Russian Federation (4.0%).
Although nationally representative data for part-time employment are scarce in Asia, this region has also seen a slight increase in part-time work: Japan from 18.3% to 22.7% and South Korea from 9.0% to 10.5% between 2005 and 2014 (OECD, 2016). In the United States, data from the Bureau of Labor Statistics (BLS) showed that the percentage of workers engaged in part-time employment was approximately 18.5% in May 2016 (BLS, 2016a). The prevalence in the U.S. was 13.5% in 1968 and peaked at 20.1% in early 2010 (BLS, 2016a).

The high prevalence of part-time employment has raised concerns regarding its consequences for workers' health. In recent years, a growing number of studies have explored the relationship between part-time employment and employees' health and well-being. These studies, however, have yielded inconsistent findings, with some suggesting positive effects of working part-time (Beham, Präg, & Drobnič, 2012; Booth & van Ours, 2013; Buehler & O'Brien, 2011; Olsen & Dahl, 2010) or negative effects (Burr, Rauch, Rose, Tisch, & Tophoven, 2015; Oishi, Chan, Wang, & Kim, 2015), while others have found no difference between part- and full-time workers (Rodriguez, 2002; Bardasi & Francesconi, 2004). Furthermore, although some studies have suggested that the effects of part-time employment on health may vary between female and male part-time workers, no clear pattern has emerged (Bartoll, Cortes, & Artazcoz, 2014; Beham et al., 2012). Also, most recent research on the consequences of part-time employment for health and well-being has been conducted in European (Bardasi & Francesconi, 2004; Bartoll et al., 2014; Beham et al., 2012; Booth & van Ours, 2013; Olsen & Dahl, 2010; Rodriguez, 2002) or Asian countries (Oishi et al., 2015), but relatively little is known about U.S. part-time workers' health.
Lastly, despite a growing number of studies in this area, most evidence is based on cross-sectional data, which may limit causal inference (Bartoll et al., 2014; Beham et al., 2012; Buehler & O'Brien, 2011; Burr et al., 2015; Higgins, Duxbury, & Johnson, 2000; Oishi et al., 2015; Olsen & Dahl, 2010).

This study adds to the existing literature by examining the effects of part-time work on self-rated health using data from the General Social Survey (GSS) panel. The longitudinal nature of the GSS provides an opportunity to explore the relationship between part-time work and health status over time in a sample of U.S. adults. Data from 2010 to 2014 are used to obtain a more accurate picture of current patterns and produce findings relevant to the U.S. workforce. Exploring gender differences is also of interest, especially given that very little research has been done in the U.S. to examine the potential gendered effects that part-time employment may have on health outcomes (Bartoll et al., 2014; Beham et al., 2012). Accordingly, the purpose of this study is to understand the potentially complex relationship between part-time employment and health status, as well as gender differences in that relationship, over a four-year period.

To address this purpose, two primary research questions are explored in this study. First, among U.S. workers, how is part-time employment associated with workers' health status? Second, are there gender differences in the relationship between part-time employment and health among U.S. workers? Given the large number of employees working part-time and the importance of health for workers' and their families' well-being, examining the potential consequences of part-time employment for employees' health may shed light on the degree to which part-time employment affects American workers (Kleiner & Pavalko, 2010; Kleiner, Schunck, & Schömann, 2015; Valletta & Bengali, 2013; Virtanen et al., 2005).
It also provides policymakers and researchers with a longitudinal perspective on the relationship between part-time work and health status, as well as on underlying gender differences.

Definition of part-time employment
Part-time employment usually refers to "a job in which the usual number of hours worked per week is below a specified threshold" (Borowczyk-Martins & Lalé, 2016, p. 7). For example, the OECD (2016) uses less than 30 hours per week as a threshold for part-time employment, while the U.S. Bureau of Labor Statistics (BLS, 2016a, 2016b) uses 35 hours per week as a cut-point. Additionally, previous studies have used different thresholds (e.g., weekly work hours below 30, 32, 35, or 40 hours) in defining part-time employment, which may entail some degree of arbitrariness in the cut-off (Bardasi & Francesconi, 2004; Beham et al., 2012; Booth & van Ours, 2013; Buehler & O'Brien, 2011; Burr et al., 2015; Oishi et al., 2015). Also, some studies have used a continuous measure of weekly work hours rather than classifying them into categories (Datar, Nicosia, & Shier, 2014; Ziol-Guest, Dunifon, & Kalil, 2013). On the other hand, other studies have utilized workers' perceived or self-reported status (i.e., survey response choices of 'full-time' and 'part-time') in their operationalization of part-time employment (Burn, Dennerstein, Browning, & Szoeke, 2016; Heimdal & Houseknecht, 2003; Kerrissey & Schofer, 2013). Although these two aspects of employment status, perceived part- vs. full-time status and the actual number of hours worked, may have different meanings, few studies have considered them simultaneously.

In addition, some previous studies have taken into account other relevant concepts such as precariousness or contingency (Kalleberg, 2000, 2009, 2013; Vosko, Zukewich, & Cranford, 2003). That is, some scholars in this area have focused on the possibility that part-time jobs might be accompanied by poor job security and lower financial rewards.
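The threshold definitions above amount to a simple coding rule, and the gap between an hours-based coding and self-reported status can be tabulated directly. A minimal sketch; the 30 h and 35 h cut-points are the OECD and BLS thresholds cited above, while the sample records are invented:

```python
# Classify employment status by an hours threshold and compare the
# result with self-reported status. Thresholds follow the OECD (<30 h)
# and BLS (<35 h) definitions; the worker records are invented.
OECD_CUTOFF = 30
BLS_CUTOFF = 35

def hours_based_status(weekly_hours, cutoff=BLS_CUTOFF):
    return "part-time" if weekly_hours < cutoff else "full-time"

# (weekly hours, self-reported status) pairs.
workers = [(20, "part-time"), (32, "full-time"),
           (40, "full-time"), (34, "part-time")]

# Count records where the two codings disagree; here only the 32 h
# worker, who self-reports "full-time" but falls under the BLS cut-point.
mismatches = sum(1 for hours, reported in workers
                 if hours_based_status(hours) != reported)
print(mismatches)
```

Note that under the OECD cut-point the same 32 h worker would be coded full-time, which is exactly the arbitrariness of the cut-off the text describes.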
Indeed, part-time work has been regarded as one of the 'precarious' or 'nonstandard' forms of employment, which are often characterized by poor working conditions, low wages, limited benefits, and high job insecurity (Fernández-Kranz & Rodríguez-Planas, 2011; Kalleberg, 2000; Quinlan, Mayhew, & Bohle, 2001; Vosko, MacDonald, & Campbell, 2009). For example, analyzing data from the fourth European Working Conditions Survey (EWCS) and the second European Company Survey (ECS), Sandor (2011) found that, compared to full-time workers, part-time workers are less optimistic about their promotion prospects and less likely to receive education or training at work. Also, a recent report from the BLS (2016b) indicated that part-time workers are more likely to be economically disadvantaged than full-time workers. Thus, empirical research on the consequences of part-time employment should account for relevant work and occupational conditions as well as financial situation in the analysis model, which helps to minimize the influence of a "third factor" that may bias the association between part-time employment and health.

Theoretical perspectives
Theoretically, there could be two opposing hypotheses regarding the effects of part-time employment on workers' well-being. On one hand, if employees opt for a part-time job voluntarily, they can allocate more time to their personal or family-related activities (Tilly, 1996). In this case, part-time work may help them manage health conditions and balance their work and family life (Booth & van Ours, 2013; Higgins et al., 2000; Van Rijswijk, Bekker, Rutte, & Croon, 2004). Moreover, part-time workers may benefit from flexible time that can be spent on education or training, which could be an important investment in their human capital and future earning potential (Gash, 2008).
Part-time employment may serve as an important strategy for both employees and employers that can contribute to preserving employees' human capital without their leaving the labor market (Falzone, 2000; Tomlinson, 2007; Mills & Täht, 2010; Vlasblom & Schippers, 2006). On the other hand, part-time workers may have less income and fewer benefits to financially support themselves or their families (Bardasi & Gornick, 2000, 2008; Connolly & Gregory, 2008). In addition, part-time workers may have less opportunity for advancement as well as losses in human capital due to restrictions on training or education (Connolly & Gregory, 2008; Kalleberg, 2000). Part-time jobs are often regarded as marginalized employment for less qualified employees, characterized by poor working conditions, which might lead to poor health outcomes (Kalleberg, 2000; Sandor, 2011; Scott-Marshall & Tompa, 2011). If growth in part-time employment has been largely driven by employers' high demand for a flexible workforce (Cajner, Mawhirter, Nekarda, & Ratner, 2014), more and more employees are likely to be engaged in 'involuntary' part-time work, which may lead to poor health and well-being (Dooley, Prause, & Ham-Rowbottom, 2000; Oishi et al., 2015; Tilly, 1996).

Therefore, the direction and magnitude of the effects of part-time employment are complex, often described as a "part-time duality" or "paradox", in part due to the trade-off between potential income (or human capital) effects and time effects (Bartoll et al., 2014; Epstein, Seron, Oglensky, & Saute, 2014). In addition, the underlying reasons for working part-time (voluntary vs. involuntary) may account for some of the complexity in the relationship between part-time employment and health outcomes (Santin, Cohidon, Goldberg, & Imbernon, 2009).

Previous research
A review by Quinlan et al.
(2001), conducted in the early 2000s, suggested that there had been only a handful of empirical studies on the health effects of part-time employment and that their results were not consistent. Although a growing body of research has examined the relationship between part-time employment and workers' health and well-being in the last decade, the overall situation has remained largely unchanged. Most of these studies are from developed Western countries, especially in Europe (Bardasi & Francesconi, 2004; Beham et al., 2012; Bartoll et al., 2014; Rodriguez, 2002), with the exception of a recent study from the Asian region (Oishi et al., 2015). In addition, these studies provided markedly different findings regarding the consequences of part-time work. For example, a British study by Bardasi and Francesconi (2004), which used a sample of 8,000 employees from the first ten waves of the British Household Panel Survey (BHPS), showed that part-time employment was not associated with adverse health outcomes. In contrast, using two cohorts of German employees (N = 4,047), Burr et al. (2015) revealed that part-time workers were more likely to report depressive symptoms than their full-time counterparts. Also, Oishi et al. (2015) analyzed a merged dataset from four East Asian countries (Korea, Japan, Hong Kong, and Taiwan) and found that part-time workers were more likely to report higher levels of work-family conflict and lower levels of job satisfaction than full-time workers.

The inconsistent findings across countries and populations imply that the effects of part-time employment may depend on country-specific institutional factors, which points to a need for further research from other countries (or cultures) that can help determine whether the consequences of part-time employment for workers' well-being differ across countries (Bartoll et al., 2014; Epstein et al., 2014).
Another possible explanation for the unclear relationship between part-time employment and health and well-being may be that relevant job, occupational, and financial conditions might not have been properly taken into account in previous research. For example, some important covariates, such as job insecurity, occupation, firm size, and financial difficulty, were not adjusted for in previous research, which may lead to omitted-variable bias (Beham et al., 2012; Bartoll et al., 2014; Oishi et al., 2015).

Despite the significance of the issues raised by part-time employment, longitudinal studies on the health and well-being of part-time workers have been relatively scarce. Although there are a few exceptions (Bardasi & Francesconi, 2004; Rodriguez, 2002), the majority of previous studies on this topic have relied on cross-sectional data, which limits causal inference from study findings. Indeed, using cross-sectional data is especially problematic because it does not establish the temporal order of events, which may lead to reverse causation or selection bias. For example, it is quite possible that workers' health conditions may limit their capability to work full-time or longer hours. Therefore, in order to address reverse causality or health-related selection issues, empirical studies on the health effects of part-time employment need to use longitudinal data (Oishi et al., 2015; Olsen & Dahl, 2010).

Gender differences
Previous research suggests that the effects of part-time employment or work hours may depend on gender (Artazcoz, Borrell, Cortès, Escribà-Agüir, & Cascant, 2007; Bartoll et al., 2014; Benach et al., 2014). For example, female part-time workers might experience fewer beneficial health effects because they may face higher levels of discrimination in terms of salary, fringe benefits, and opportunities for advancement (Epstein et al., 2014; Repetti, Matthews, & Waldron, 1989; Young, 2010).
Furthermore, male and female employees may have different reasons for working part-time (Blackwell, 2001; Laurijssen & Glorieux, 2013; Gash, 2008; Lyonette, 2015; Walters, 2005). That is, compared to male workers, female workers are more likely to engage in part-time employment due to the additional family responsibilities and obligations placed on them (Epstein et al., 2014; Gjerdingen, McGovern, Bekker, Lundberg, & Willemsen, 2001; Stier & Lewin-Epstein, 2000; Young, 2010). Indeed, research from the United States has shown that the division of domestic labor remains gendered (Bianchi, Sayer, Milkie, & Robinson, 2012; Evertsson & Nermo, 2004; Greenstein, 2000), suggesting that female part-time workers may devote more time and effort than males to family responsibilities such as housework and child care, at the expense of their own health and well-being.

Empirical studies on gender differences in the association between part-time work and health have produced inconsistent findings, partly due to differences in study populations and country-specific institutional contexts (Bartoll et al., 2014; Gash, Mertens, & Romeu Gordo, 2010). For example, Bartoll et al. (2014) used cross-sectional data from the 2005 European Working Conditions Survey (4th EWCS) and found that female part-time workers in Scandinavia and the Netherlands were less likely to report work-related stress and psychological distress, while such relationships were not found among those in Anglo-Saxon countries (i.e., Ireland and the United Kingdom). Although caution is needed in the interpretation of their cross-sectional results, Bartoll et al. (2014) suggested that gender differences in the association between part-time employment and health might be largely shaped by institutional contexts, such as the regulation of non-discriminatory practices and labor market policies for equal treatment between part-time and full-time workers or between female and male workers.
These mixed findings across studies reinforce the need for further investigation and replication in different countries and populations to clarify the potentially gendered effect that part-time employment may exert on health outcomes.

Contributions of this study

This study makes several contributions to the literature. First, despite a growing number of studies on the health effects of part-time employment as well as the high prevalence of part-time workers, very little research has been conducted in the United States. To my knowledge, this is one of the few studies to empirically assess the relationship between part-time employment and health among U.S. workers. Second, this study explicitly looks at gender differences and thereby acknowledges the possibility that the effect of experience in part-time work on health may vary by gender (Bartoll et al., 2014; Kim, Khang, Muntaner, Chun, & Cho, 2008; Ross & Mirowsky, 1995). Third, the majority of previous work has relied on cross-sectional data (Buehler and O'Brien, 2011; Burr et al., 2015; Higgins et al., 2000; Oishi et al., 2015; Olsen and Dahl, 2010). The current study uses three-wave panel data, which allows me to explore the relationship between part-time work and health in a longitudinal research design. Finally, I employ two different analytic techniques for the analysis of panel data to address heterogeneity and dependency as well as to minimize selection bias. In addition, a number of control variables are included in the model to reduce the likelihood of confounding effects or alternative explanations.

Discussion

Part-time employment has become widespread in most industrialized countries (OECD, 2016). In the United States, approximately 20% of employees work part-time (BLS, 2016a).
Although a growing number of studies have explored the consequences of part-time employment for workers' well-being and gender differences, their findings have been inconsistent (Bardasi and Francesconi, 2004; Bartoll et al., 2014; Beham et al., 2012; Booth and van Ours, 2013; Burr et al., 2015; Santin et al., 2009). Furthermore, most of these studies were conducted in European countries and were cross-sectional in nature. In this study, I used longitudinal data from the 2010–2012–2014 GSS panel study to further clarify to what extent part-time work is associated with health and whether the association differs by gender among U.S. workers. The present study contributes to the existing literature by providing one of the few longitudinal examinations of the relationship between part-time work and health in the U.S. context, extending our knowledge about the potential health consequences of part-time employment. This study also adds to previous research by considering both perceived part-time status and actual work hours and by employing more sophisticated analytic techniques.

Analyses of longitudinal data from the GSS panel showed that part-time workers were less likely to report poor self-rated health, while weekly work hours were not significantly related to health. Our findings suggest that the positive effects of part-time employment on health remain significant, even after accounting for a range of demographic, socio-economic and job characteristics as well as heterogeneity and dependency in panel data. The significant association between part-time status and better health was consistent across different model specifications: generalized estimating equations (GEE) and random effects (RE) models. Other socio-demographic and work characteristics, such as race, family size, job satisfaction, financial satisfaction, and income, were also significantly associated with workers' health status.
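The dependency issue motivating the GEE and RE models can be made concrete with a small numerical sketch. Under an exchangeable (GEE-style) correlation structure, repeated observations from the same worker inflate the variance of a pooled estimate by the design effect 1 + (m - 1) * rho, where m is the number of waves and rho the within-person correlation. The code below is an illustration, not the paper's actual analysis: the 10% prevalence and rho = 0.5 are hypothetical values, while the 2,079 person-years match the sample size reported in the text.

```python
import math

def design_effect(waves: int, icc: float) -> float:
    """Variance inflation for an estimate pooled over clustered
    (person-year) data: `waves` observations per person with
    within-person correlation `icc` (exchangeable structure)."""
    return 1.0 + (waves - 1) * icc

def naive_and_adjusted_se(p: float, n_person_years: int,
                          waves: int, icc: float):
    """Standard error of a prevalence p, first treating all
    person-years as independent, then inflating by the design effect."""
    naive = math.sqrt(p * (1.0 - p) / n_person_years)
    return naive, naive * math.sqrt(design_effect(waves, icc))

# Hypothetical inputs: 2,079 person-years over 3 waves (as in the GSS
# sample), a 10% prevalence of poor self-rated health, assumed ICC 0.5.
naive, adjusted = naive_and_adjusted_se(0.10, 2079, 3, 0.5)
print(round(naive, 4), round(adjusted, 4))
```

Treating the three waves as independent would understate the standard error by a factor of sqrt(2) here, which is the kind of over-confidence the GEE and RE specifications guard against.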
Our findings are consistent with some studies that have shown positive health effects of part-time employment (Beham et al., 2012; Booth and van Ours, 2013), but not with other previous studies that found negative effects (Burr et al., 2015; Oishi et al., 2015) or null associations (Rodriguez, 2002; Bardasi and Francesconi, 2004). The findings of this study and others deserve further examination and replication with different samples and data.

Despite conflicting expectations regarding the direction and magnitude of the association between part-time employment and employees' health, our results indicated positive effects of working part-time among US employees. These results are comparable to those of Rodriguez (2002), who found that in Britain and Germany, part-time workers were 20% less likely to report poor health than full-time workers, though the effect size was relatively small. Regarding this non-negative effect of part-time employment, Rodriguez (2002, p. 978) suggested that there seems to be "a mixed reality" about part-time work: it may not only entail "elements of marginality" but also reflect "choice" in that employees who can afford to work part-time sometimes do. He added that there could be additional stresses, burdens or duties placed on full-time workers compared to part-time workers. Therefore, it is possible that part-time employment itself may not be associated with workers' poor health if the marginality of part-time work as well as other socio-economic and work characteristics (e.g., job insecurity, financial satisfaction, and income) are properly accounted for in the analysis model. Indeed, several previous studies that found negative effects of part-time employment did not include some of the key covariates, such as household income, financial satisfaction, family size, job insecurity, occupation, or firm size, in their analyses (Bartoll et al., 2014; Burr et al., 2015; Oishi et al., 2015).
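Rodriguez's "20% less likely" is a statement about relative risk, which is easy to conflate with the odds ratios that logistic-type models report. A short sketch with hypothetical counts (not data from any of the cited studies) shows the distinction:

```python
def risk_ratio(a: int, b: int, c: int, d: int) -> float:
    """Risk ratio of poor health: exposed group (part-time: a poor,
    b good) versus reference group (full-time: c poor, d good)."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Cross-product odds ratio for the same 2x2 table."""
    return (a * d) / (b * c)

# Hypothetical counts chosen so part-time workers are exactly 20% less
# likely to report poor health (risk ratio 0.8), echoing the effect
# size Rodriguez (2002) reports for Britain and Germany.
pt_poor, pt_good = 8, 92    # 8% of 100 part-time workers
ft_poor, ft_good = 10, 90   # 10% of 100 full-time workers

rr = risk_ratio(pt_poor, pt_good, ft_poor, ft_good)
orr = odds_ratio(pt_poor, pt_good, ft_poor, ft_good)
print(round(rr, 3), round(orr, 3))
```

With a rare outcome the two measures converge, which is why odds ratios from logistic models are often read, loosely, as relative risks.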
This suggests the possibility that the negative relationships between part-time work and health observed in some previous research might be partly due to unobserved or unmeasured characteristics of workers that predict both part-time employment and health outcomes.

This study further explored whether differences in self-rated health between part-time and full-time employees could be found among both male and female workers. Subgroup analyses indicated that working part-time was significantly associated with a reduced likelihood of reporting poor health in male employees, but not in female employees. The overall pattern of results regarding gender difference was consistent across the two analytic methods. A plausible interpretation of these findings is that, as Stier and Lewin-Epstein (2000, p. 406) pointed out, female part-time workers may suffer from "a double disadvantage" in the work and family domains. Although female workers are likely to choose part-time employment in order to balance their work and family responsibilities (Bartoll et al., 2014; Higgins et al., 2000), their economic activities are often devalued in the labor market and their engagement in paid employment rarely alters the traditional gendered division of domestic labor (Bleijenbergh, De Bruijn, & Bussemaker, 2004; Craig, 2002; Craig, Bittman, Brown, & Thompson, 2006; Craig and Mullan, 2009; Hook, 2010; Morehead, 2005; Rose and Hartmann, 2004; Walters, 2005). Indeed, previous studies have suggested that women's part-time work may be the result not of preference or choice (i.e., voluntary), but of 'constraints' such as the pressure of domestic commitments, insufficient childcare, and inflexible working hours (Dex, 1988; Lane, 2004).
Therefore, future research needs to examine the interplay among part-time employment, gender, and health status by taking into account relative shares of domestic work and discrimination in the workplace.

The current study contains several limitations. First, although the measures of health status based on self-reported data have been validated in the previous literature, they may provide biased estimates. Although self-reported measures of health can be reliable and valid for identifying relationships in epidemiological studies (Fried et al., 1998; Jylhä et al., 2006), there might be some degree of response bias due to social desirability and recall issues (Baker, Stabile, & Deri, 2004; Mackenbach, Looman, & Van der Meer, 1996). Future research should use additional health measures based on actual diagnoses or bio-markers. Second, our findings may not be interpreted as a causal relationship due to the presence of unobserved heterogeneity. Despite the inclusion of important control variables in the analyses, there remains the possibility of unobserved factors, such as time spent on unpaid housekeeping work or levels of work-family conflict (not available in the GSS panel), that may be associated with both part-time employment and workers' health and well-being. Finally, this study used a sample of 693 US employees (2,079 person-year observations) who participated in all three waves of the GSS panel. Thus, the sample is not representative of the US employee population, and our findings may not be readily generalizable to other populations or geographic areas with different institutional and cultural contexts.

Conclusion

This study provides important insights into the consequences of part-time employment for workers' health in the U.S. context. It is one of the few empirical studies to use a longitudinal sample of U.S. adults to examine the effects of part-time employment on workers' health and different patterns by gender.
Results of this study indicate that part-time employment is associated with better self-rated health, especially among male workers, even after controlling for relevant covariates such as demographic, socio-economic, and work characteristics. These longitudinal findings were robust across the GEE and RE models. Our findings suggest that part-time employment does not have adverse health consequences, and that male workers are likely to benefit from part-time work. The findings are also important from a policy perspective, suggesting that policymakers should pay close attention to gender differences in the relationship between part-time work and health. Understanding these gender differences can provide guidance on how to tailor interventions to make them more effective for female part-time workers (Artazcoz et al., 2007; Jacobs and Gerson, 2004). Furthermore, our findings point to the need for ongoing research on part-time work and health to identify other potential factors (e.g., domestic labor, work-family conflict, labor market discrimination) that were not taken into account in the present study (Buehler, O'Brien, & Walls, 2011; Ruppanner & Maume, 2016). To enhance the generalizability of the findings, future research should be conducted in different cultural and institutional contexts. These research efforts could help disentangle the complex interrelationship between part-time employment, health, and gender and provide potentially important implications for policymakers and practitioners.
Chinese-English Translation of Foreign Literature (includes the English original and the Chinese translation)

Foreign literature: Designing Against Fire of Buildings

ABSTRACT: This paper considers the design of buildings for fire safety. It is found that fire and the associated effects on buildings are significantly different to other forms of loading such as gravity live loads, wind and earthquakes and their respective effects on the building structure. Fire events are derived from the human activities within buildings or from the malfunction of mechanical and electrical equipment provided within buildings to achieve a serviceable environment. It is therefore possible to directly influence the rate of fire starts within buildings by changing human behaviour, improved maintenance and improved design of mechanical and electrical systems. Furthermore, should a fire develop, it is possible to directly influence the resulting fire severity by the incorporation of fire safety systems such as sprinklers and to provide measures within the building to enable safer egress from the building. The ability to influence the rate of fire starts and the resulting fire severity is unique to the consideration of fire within buildings since other loads such as wind and earthquakes are directly a function of nature. The possible approaches for designing a building for fire safety are presented using an example of a multi-storey building constructed over a railway line. The design of both the transfer structure supporting the building over the railway and the levels above the transfer structure are considered in the context of current regulatory requirements. The principles and assumptions associated with various approaches are discussed.

1 INTRODUCTION

Other papers presented in this series consider the design of buildings for gravity loads, wind and earthquakes. The design of buildings against such load effects is to a large extent covered by engineering-based standards referenced by the building regulations. This is not the case, to nearly the same extent, in the case of fire.
Rather, it is building regulations such as the Building Code of Australia (BCA) that directly specify most of the requirements for fire safety of buildings, with reference being made to standards such as AS3600 or AS4100 for methods for determining the fire resistance of structural elements. The purpose of this paper is to consider the design of buildings for fire safety from an engineering perspective (as is currently done for other loads such as wind or earthquakes), whilst at the same time putting such approaches in the context of the current regulatory requirements. At the outset, it needs to be noted that designing a building for fire safety is far more than simply considering the building structure and whether it has sufficient structural adequacy. This is because fires can have a direct influence on occupants via smoke and heat and can grow in size and severity, unlike other effects imposed on the building. Notwithstanding these comments, the focus of this paper will be largely on design issues associated with the building structure.

Two situations associated with a building are used for the purpose of discussion. The multi-storey office building shown in Figure 1 is supported by a transfer structure that spans over a set of railway tracks. It is assumed that a wide range of rail traffic utilises these tracks, including freight and diesel locomotives. The first situation to be considered from a fire safety perspective is the transfer structure. This is termed Situation 1 and the key questions are: what level of fire resistance is required for this transfer structure and how can this be determined? This situation has been chosen since it clearly falls outside the normal regulatory scope of most building regulations. An engineering solution, rather than a prescriptive one, is required. The second fire situation (termed Situation 2) corresponds to a fire within the office levels of the building and is covered by building regulations.
This situation is chosen because it will enable a discussion of engineering approaches and how these interface with the building regulations, since both engineering and prescriptive solutions are possible.

2 UNIQUENESS OF FIRE

2.1 Introduction

Wind and earthquakes can be considered to be "natural" phenomena over which designers have no control, except perhaps to choose the location of buildings more carefully on the basis of historical records and to design buildings to resist sufficiently high loads or accelerations for the particular location. Dead and live loads in buildings are the result of gravity. All of these loads are variable and it is possible (although generally unlikely) that the loads may exceed the resistance of the critical structural members, resulting in structural failure. The nature and influence of fires in buildings are quite different to those associated with other "loads" to which a building may be subjected. The essential differences are described in the following sections.

2.2 Origin of Fire

In most situations (ignoring bush fires), fire originates from human activities within the building or the malfunction of equipment placed within the building to provide a serviceable environment. It follows therefore that it is possible to influence the rate of fire starts by influencing human behaviour, limiting and monitoring human behaviour and improving the design of equipment and its maintenance. This is not the case for the usual loads applied to a building.

2.3 Ability to Influence

Since wind and earthquake are directly functions of nature, it is not possible to influence such events to any extent. One has to anticipate them and design accordingly. It may be possible to influence the level of live load in a building by conducting audits and placing restrictions on contents.
However, in the case of a fire start, there are many factors that can be brought to bear to influence the ultimate size of the fire and its effect within the building. It is known that occupants within a building will often detect a fire and deal with it before it reaches a significant size. It is estimated that less than one fire in five (Favre, 1996) results in a call to the fire brigade and, for fires reported to the fire brigade, the majority will be limited to the room of fire origin. In occupied spaces, olfactory cues (smell) provide powerful evidence of the presence of even a small fire. The addition of a functional smoke detection system will further improve the likelihood of detection and of action being taken by the occupants. Fire fighting equipment, such as extinguishers and hose reels, is generally provided within buildings for the use of occupants, and many organisations provide training for staff in respect of the use of such equipment. The growth of a fire can also be limited by automatic extinguishing systems such as sprinklers, which can be designed to have high levels of effectiveness. Fires can also be limited by the fire brigade depending on the size and location of the fire at the time of arrival.

2.4 Effects of Fire

The structural elements in the vicinity of the fire will experience the effects of heat. The temperatures within the structural elements will increase with time of exposure to the fire, the rate of temperature rise being dictated by the thermal resistance of the structural element and the severity of the fire. The increase in temperatures within a member will result in both thermal expansion and, eventually, a reduction in the structural resistance of the member. Differential thermal expansion will lead to bowing of a member. Significant axial expansion will be accommodated in steel members by either overall or local buckling or yielding of localised regions.
These effects will be detrimental for columns but, for beams forming part of a floor system, may assist in the development of other load resisting mechanisms (see Section 4.3.5). With the exception of the development of forces due to restraint of thermal expansion, fire does not impose loads on the structure but rather reduces stiffness and strength. Such effects are not instantaneous but are a function of time, and this is different to the effects of loads such as earthquake and wind that are more or less instantaneous. Heating effects associated with a fire will not be significant, or the rate of loss of capacity will be slowed, if:

(a) the fire is extinguished (e.g. by an effective sprinkler system)

(b) the fire is of insufficient severity (insufficient fuel), and/or

(c) the structural elements have sufficient thermal mass and/or insulation to slow the rise in internal temperature.

Fire protection measures such as providing sufficient axis distance and dimensions for concrete elements, and sufficient insulation thickness for steel elements, are examples of (c). These are illustrated in Figure 2. The two situations described in the introduction are now considered.

3 FIRE WITHIN BUILDINGS

3.1 Fire Safety Considerations

The implications of fire within the occupied parts of the office building (Figure 1) (Situation 2) are now considered. Fire statistics for office buildings show that about one fatality is expected in an office building for every 1000 fires reported to the fire brigade. This is an order of magnitude less than the fatality rate associated with apartment buildings. More than two thirds of fires occur during occupied hours and this is due to the greater human activity and the greater use of services within the building.
It is twice as likely that a fire that commences out of normal working hours will extend beyond the enclosure of fire origin. A relatively small fire can generate large quantities of smoke within the floor of fire origin. If the floor is of open-plan construction with few partitions, the presence of a fire during normal occupied hours is almost certain to be detected through the observation of smoke on the floor. The presence of full height partitions across the floor will slow the spread of smoke and possibly also the speed at which the occupants detect the fire. Any measures aimed at improving housekeeping, fire awareness and fire response will be beneficial in reducing the likelihood of major fires during occupied hours. For multi-storey buildings, smoke detection systems and alarms are often provided to give "automatic" detection and warning to the occupants. An alarm signal is also transmitted to the fire brigade. Should the fire not be able to be controlled by the occupants on the fire floor, they will need to leave the floor of fire origin via the stairs. Stair enclosures may be designed to be fire-resistant but this may not be sufficient to keep the smoke out of the stairs. Many buildings incorporate stair pressurisation systems whereby positive airflow is introduced into the stairs upon detection of smoke within the building. However, this increases the forces required to open the stair doors and makes it increasingly difficult to access the stairs. It is quite likely that excessive door opening forces will exist (Fazio et al., 2006). From a fire perspective, it is common to consider that a building consists of enclosures formed by the presence of walls and floors. An enclosure that has sufficiently fire-resistant boundaries (i.e. walls and floors) is considered to constitute a fire compartment and to be capable of limiting the spread of fire to an adjacent compartment.
However, the ability of such boundaries to restrict the spread of fire can be severely limited by the need to provide natural lighting (windows) and access openings between the adjacent compartments (doors and stairs). Fire spread via the external openings (windows) is a distinct possibility given a fully developed fire. Limiting the window sizes and geometry can reduce, but not eliminate, the possibility of vertical fire spread. By far the most effective measure in limiting fire spread, other than the presence of occupants, is an effective sprinkler system that delivers water to a growing fire, rapidly reducing the heat being generated and virtually extinguishing it.

3.2 Estimating Fire Severity

In the absence of measures to extinguish developing fires, or should such systems fail, severe fires can develop within buildings. In fire engineering literature, the term "fire load" refers to the quantity of combustibles within an enclosure and not the loads (forces) applied to the structure during a fire. Similarly, fire load density refers to the quantity of fuel per unit area. It is normally expressed in terms of MJ/m² or kg/m² of wood equivalent. Surveys of combustibles for various occupancies (i.e. offices, retail, hospitals, warehouses, etc.) have been undertaken and a good summary of the available data is given in FCRC (1999). As would be expected, the fire load density is highly variable. Publications such as the International Fire Engineering Guidelines (2005) give fire load data in terms of the mean and 80th percentile. The latter level of fire load density is sometimes taken as the characteristic fire load density and is sometimes taken as being distributed according to a Gumbel distribution (Schleich et al., 1999). The rate at which heat is released within an enclosure is termed the heat release rate (HRR) and is normally expressed in megawatts (MW). The application of sufficient heat to a combustible material results in the generation of gases, some of which are combustible.
This process is called pyrolysis. Upon coming into contact with sufficient oxygen these gases ignite, generating heat. The rate of burning (and therefore of heat generation) is therefore dependent on the flow of air to the gases generated by the pyrolysing fuel. This flow is influenced by the shape of the enclosure (aspect ratio), and the position and size of any potential openings. It is found from experiments with single openings in approximately cubic enclosures that the rate of burning is directly proportional to A√h, where A is the area of the opening and h is the opening height. It is known that, for deep enclosures with single openings, burning will occur initially closest to the opening, moving back into the enclosure once the fuel closest to the opening is consumed (Thomas et al., 2005). Significant temperature variations throughout such enclosures can be expected. The use of the word 'opening' in relation to real building enclosures refers to any openings present around the walls, including doors that are left open and any windows containing non fire-resistant glass. It is presumed that such glass breaks in the event of development of a significant fire. If the windows could be prevented from breaking and other sources of air to the enclosure limited, then the fire would be prevented from becoming a severe fire. Various methods have been developed for determining the potential severity of a fire within an enclosure. These are described in SFPE (2004). The predictions of these methods are variable and are mostly based on estimating a representative heat release rate (HRR) and the proportion of total fuel ς likely to be consumed during the primary burning stage (Figure 4).
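Two of the quantities in this section can be sketched numerically. The first function computes a p-fractile of a Gumbel-distributed fire load density from an assumed mean and coefficient of variation; the 420 MJ/m² mean and CoV of 0.3 are illustrative office values, not figures taken from FCRC (1999). The second applies the A√h ventilation relation using a Kawagoe-type burning-rate coefficient of roughly 0.092 kg/s per m^(5/2) and an effective heat of combustion of 13 MJ/kg for wood; both constants are assumptions for illustration, not values stated in this paper.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_fractile(mean: float, cov: float, p: float) -> float:
    """p-fractile of a Gumbel (maximum) distribution with the given
    mean and coefficient of variation."""
    std = cov * mean
    beta = std * math.sqrt(6.0) / math.pi   # scale parameter
    mode = mean - EULER_GAMMA * beta        # location parameter
    return mode - beta * math.log(-math.log(p))

def ventilation_controlled_hrr(a_opening: float, h_opening: float,
                               dhc: float = 13.0) -> float:
    """Heat release rate (MW) for a ventilation-controlled fire:
    burning rate ~ 0.092 * A * sqrt(h) kg/s (assumed Kawagoe-type
    coefficient) times an effective heat of combustion dhc (MJ/kg)."""
    burning_rate = 0.092 * a_opening * math.sqrt(h_opening)  # kg/s
    return burning_rate * dhc  # MJ/s, i.e. MW

# Assumed office fire load: mean 420 MJ/m2, CoV 0.3 (illustrative).
q_k = gumbel_fractile(420.0, 0.3, 0.80)
# Single opening 2 m wide x 1 m high: A = 2 m2, h = 1 m.
hrr = ventilation_controlled_hrr(2.0, 1.0)
print(round(q_k, 1), round(hrr, 2))
```

Under these assumptions the 80th-percentile fire load density comes out noticeably above the mean, illustrating why characteristic values based on the 80th percentile are more onerous than average values.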
Further studies of enclosure fires are required to assist with the development of improved models, as the behaviour is very complex.

3.3 Role of the Building Structure

If the design objectives are to provide an adequate level of safety for the occupants and protection of adjacent properties from damage, then the structural adequacy of the building in fire need only be sufficient to allow the occupants to exit the building and for the building to ultimately deform in a way that does not lead to damage or fire spread to a building located on an adjacent site. These objectives are those associated with most building regulations, including the Building Code of Australia (BCA). There could be other objectives, including protection of the building against significant damage. In considering these various objectives, the following should be taken into account when considering the fire resistance of the building structure.

3.3.1 Non-Structural Consequences

Since fire can produce smoke and flame, it is important to ask whether these outcomes will threaten life safety within other parts of the building before the building is compromised by a loss of structural adequacy. Is search and rescue by the fire brigade not feasible given the likely extent of smoke? Will the loss of use of the building due to a severe fire result in major property and income loss? If the answer to these questions is in the affirmative, then it may be necessary to minimise the occurrence of a significant fire rather than simply assuming that the building structure needs to be designed for high levels of fire resistance. A low-rise shopping centre with levels interconnected by large voids is an example of such a situation.

3.3.2 Other Fire Safety Systems

The presence of other systems (e.g. sprinklers) within the building to minimise the occurrence of a serious fire can greatly reduce the need for the structural elements to have high levels of fire resistance.
In this regard, the uncertainties of all fire-safety systems need to be considered. Irrespective of whether the fire safety system is the sprinkler system, stair pressurisation, compartmentation or the system giving the structure a fire-resistance level (e.g. concrete cover), there is an uncertainty of performance. Uncertainty data is available for sprinkler systems (because it is relatively easy to collect) but is not readily available for the other fire safety systems. This sometimes results in designers and building regulators considering that only sprinkler systems are subject to uncertainty. In reality, it would appear that sprinkler systems have a high level of performance and can be designed to have very high levels of reliability.

3.3.3 Height of Building

It takes longer for a tall building to be evacuated than a short building and therefore the structure of a tall building may need to have a higher level of fire resistance. The implications of collapse of tall buildings on adjacent properties are also greater than for buildings of only several storeys.

3.3.4 Limited Extent of Burning

If the likely extent of burning is small in comparison with the plan area of the building, then the fire cannot have a significant impact on the overall stability of the building structure.
Examples of situations where this is the case are open-deck carparks and very large area buildings such as shopping complexes, where the fire-affected part is likely to be small in relation to the area of the building floor plan.

3.3.5 Behaviour of Floor Elements

The effect of real fires on composite and concrete floors continues to be a subject of much research. Experimental testing at Cardington demonstrated that when parts of a composite floor are subject to heating, large displacement behaviour can develop that greatly assists the load carrying capacity of the floor beyond that which would be predicted by considering only the behaviour of the beams and slabs in isolation. These situations have been analysed by both yield line methods that take into account the effects of membrane forces (Bailey, 2004) and finite element techniques. In essence, the methods illustrate that it is not necessary to insulate all structural steel elements in a composite floor to achieve high levels of fire resistance. This work also demonstrated that exposure of a composite floor having unprotected steel beams to a localised fire will not result in failure of the floor. A similar real fire test on a multi-storey reinforced concrete building demonstrated that the real structural behaviour in fire was significantly different to that expected using small displacement theory as for normal temperature design (Bailey, 2002), with the performance being superior to that predicted by considering isolated member behaviour.

3.4 Prescriptive Approach to Design

The building regulations of most countries provide prescriptive requirements for the design of buildings for fire. These requirements are generally not subject to interpretation and compliance with them makes for simpler design approval, although not necessarily the most cost-effective designs. These provisions are often termed deemed-to-satisfy (DTS) provisions.
All aspects of designing buildings for fire safety are covered: the provision of emergency exits, spacings between buildings, occupant fire fighting measures, detection and alarms, measures for automatic fire suppression, air and smoke handling requirements and, last but not least, requirements for compartmentation and fire resistance levels for structural members. However, there is little evidence that the requirements have been developed from a systematic evaluation of fire safety. Rather, it would appear that many of the requirements have been added one to another to deal with particular fire incidents or to incorporate new forms of technology. There does not appear to have been any real attempt to determine which provisions have the most significant influence on fire safety and whether some of the former provisions could be modified. The FRL requirements specified in the DTS provisions are traditionally considered to result in member resistances that will only rarely experience failure in the event of a fire. This is why it is acceptable to use the above arbitrary point-in-time load combination for assessing members in fire. There have been attempts to evaluate the various deemed-to-satisfy provisions (particularly the fire-resistance requirements) from a fire-engineering perspective, taking into account the possible variations in enclosure geometry, opening sizes and fire load (see FCRC, 1999). One of the outcomes of this evaluation was the recognition that deemed-to-satisfy provisions necessarily cover a broad range of buildings and thus must, on average, be quite onerous because of the magnitude of the above variations. It should be noted that the DTS provisions assume that compartmentation works and that fire is limited to a single compartment. This means that fire is normally only considered to exist at one level.
Thus floors are assumed to be heated from below and columns only over one storey height.

3.5 Performance-Based Design
An approach that offers substantial benefits for individual buildings is the move towards performance-based regulations. This is permitted by regulations such as the BCA, which state that a designer must demonstrate that the particular building will achieve the relevant performance requirements. The prescriptive provisions (i.e. the DTS provisions) are presumed to achieve these requirements. It is necessary to show that any building that does not conform to the DTS provisions will achieve the performance requirements. But what are the performance requirements? Most often the specified performance is simply a set of performance statements (such as with the Building Code of Australia) with no quantitative level given. Therefore, although these statements remind the designer of the key elements of design, they do not, in themselves, provide any measure against which to determine whether the design is adequately safe. Possible acceptance criteria are now considered.

3.5.1 Acceptance Criteria
Some guidance as to the basis for acceptable designs is given in regulations such as the BCA. These and other possible bases are now considered in principle.
(i) Compare the levels of safety (with respect to achieving each of the design objectives) of the proposed alternative solution with those associated with a corresponding DTS solution for the building. This comparison may be done on either a qualitative or a quantitative risk basis, or perhaps a combination. In this case, the basis for comparison is an acceptable DTS solution. Such an approach requires a "holistic" approach to safety whereby all aspects relevant to safety, including the structure, are considered.
This is, by far, the most common basis for acceptance.
(ii) Undertake a probabilistic risk assessment and show that the risk associated with the proposed design is less than that associated with common societal activities such as using public transport. Undertaking a full probabilistic risk assessment can be very difficult for all but the simplest situations. Assuming that such an assessment is undertaken, it will be necessary for the stakeholders to accept the nominated level of acceptable risk. Again, this requires a "holistic" approach to fire safety.
(iii) A design is presented where it is demonstrated that all reasonable measures have been adopted to manage the risks and that any possible measures that have not been adopted will have negligible effect on the risk of not achieving the design objectives.
(iv) As far as the building structure is concerned, benchmark the acceptable probability of failure in fire against that for normal temperature design. This is similar to the approach used when considering Building Situation 1 but only considers the building structure and not the effects of flame or smoke spread. It is not a holistic approach to fire safety.
Finally, the questions of arson and terrorism must be considered. Deliberate acts of fire initiation range from relatively minor incidents to acts of mass destruction. Acts of arson are well within the accepted range of fire events experienced by buildings (e.g. 8% of fire starts in offices are deemed "suspicious"). The simplest act is to use a small heat source to start a fire. The resulting fire will develop slowly in one location within the building and will most probably be controlled by the various fire-safety systems within the building. The outcome is likely to be the same even if an accelerant is used to assist fire spread. An important illustration of this occurred during the race riots in Los Angeles in 1992 (Hart 1992), when fires were started in many buildings, often at multiple locations.
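Criterion (iv) above can be illustrated with a back-of-envelope event-tree calculation: decompose the annual probability of fire-induced structural failure into conditional events and compare it with the target used for normal temperature design. Every number below is a hypothetical placeholder for illustration, not a value from the BCA or any other code.

```python
# Illustrative event tree for criterion (iv): annual probability that a fire
# leads to structural failure. All probabilities are hypothetical placeholders.
p_fire_start = 1e-2           # annual probability of a significant fire starting
p_suppression_fails = 0.05    # sprinklers / brigade fail to control the fire
p_flashover = 0.5             # uncontrolled fire becomes fully developed
p_fail_given_flashover = 0.1  # structure fails given a fully developed fire

p_structural_failure = (p_fire_start * p_suppression_fails
                        * p_flashover * p_fail_given_flashover)

# Benchmark: an assumed target annual failure probability for normal
# (ambient) temperature structural design, here taken as 1e-5 per year.
p_target = 1e-5

print(f"annual P(failure in fire) = {p_structural_failure:.1e}")
print("meets benchmark:", p_structural_failure <= p_target)
```

With these placeholder numbers the computed probability (2.5e-5) exceeds the assumed benchmark, which is exactly the kind of comparison the criterion calls for; a real assessment would derive each conditional probability from fire statistics and structural reliability analysis.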
In the case of buildings with sprinkler systems, the damage was limited and the fires significantly controlled. Although the intent was to destroy the buildings, the fire-safety systems were able to limit the resulting fires. Security measures are provided with systems such as sprinkler systems and include:
- locking of valves
- anti-tamper monitoring
- location of valves in secure locations
Furthermore, access to significant buildings is often restricted by security measures. The very fact that the above steps have been taken demonstrates that acts of destruction within buildings are considered, although most acts of arson do not involve any attempt to disable the fire-safety systems. At one end of the spectrum is "simple" arson and, at the other end, extremely rare acts where attempts are made to destroy the fire-safety systems along with substantial parts of the building. This can only be achieved through massive impact or the use of explosives. The latter may be achieved through explosives being introduced into the building or from outside by missile attack. The former could result from missile attack or from the collision of a large aircraft. The greater the destructiveness of the act, the greater the means and knowledge required. Conversely, the more extreme the act, the less confidence there can be in designing against such an act. This is because the more extreme the event, the harder it is to predict precisely and the less understood will be its effects. The important point to recognise is that if sufficient means can be assembled, then it will always be possible to overcome a particular building design. Thus these acts are completely different from the other loadings to which a building is subjected, such as wind, earthquake and gravity loading.
This is because such acts of destruction are the work of intelligent beings and take into account the characteristics of the target. Should high-rise buildings be designed for given terrorist activities, then terrorists will simply use greater means to achieve the end result. For example, if buildings were designed to resist the impact effects of a certain size of aircraft, then the use of a larger aircraft, or more than one aircraft, could still achieve destruction of the building. An appropriate strategy is therefore to minimise the likelihood of means of mass destruction getting into the hands of persons intent on such acts. This is not an engineering solution associated with the building structure. It should not be assumed that structural solutions are always the most appropriate, or indeed, possible. In the same way, aircraft are not designed to survive a major fire or a crash landing, but steps are taken to minimise the likelihood of either occurrence. The mobilization of large quantities of fire load (the normal combustibles on the floors) simultaneously on numerous levels throughout a building is well outside the fire situations envisaged by current fire test standards and prescriptive regulations. Risk management measures to avoid such a possibility must be considered.

4 CONCLUSIONS
Fire differs significantly from other "loads" such as wind, live load and earthquakes in respect of its origin and its effects. Due to the fact that fire originates from human activities or equipment installed within buildings, it is possible to directly influence the potential effects on the building by reducing the rate of fire starts and providing measures to directly limit fire severity. The design of buildings for fire safety is mostly achieved by following the prescriptive requirements of building codes such as the BCA.
For situations that fall outside the scope of such regulations, or where proposed designs are not in accordance with the prescriptive requirements, it is possible to undertake performance-based fire engineering designs. However,
Foreign Literature Translation (including English original and Chinese translation)
Source: Y Miyazaki. A Brief Description of Interior Decoration [J]. Building & Environment, 2005, 40(10): 41-45.

English original
A Brief Description of Interior Decoration
Y Miyazaki
I. Interior design elements
1 Spatial elements
Making space rational and giving people a sense of beauty is the basic task of design. We must dare to explore new spatial images endowed by the times and by technology, rather than sticking to the spatial images formed in the past.
2 Color requirements
In addition to affecting the visual environment, indoor colors directly affect people's emotions and psychology. The scientific use of color is good for work and helps health. Proper color treatment can meet functional requirements and achieve a beautiful effect. Besides observing the general laws of color, interior colors also vary with the aesthetics of the times.
3 Light requirements
Humans love the beauty of nature and often bring direct sunlight into the interior to eliminate any sense of darkness and enclosure; top lighting and soft diffuse light in particular make the interior space more intimate and natural. The transformation of light and shadow makes the interior richer and more colorful, giving people a variety of feelings.
4 Decorative elements
Indispensable building components in the indoor space, such as columns and walls, are decorated in combination with their function to jointly create a perfect indoor environment. By making full use of the texture characteristics of different decorative materials, a variety of interior art effects in different styles can be achieved, while also reflecting the historical and cultural characteristics of the region.
5 Furnishings
Indoor furniture, carpets, curtains, etc. are all necessities of life. Their shapes often serve as furnishings, and most of them play a decorative role.
Practicality and decoration should be coordinated with each other, with function and form unified yet varied, so that the interior space is comfortable and full of personality.
6 Green elements
Greening in interior design is an important means of improving the indoor environment. Planting flowers and trees indoors, and using greenery and small items to connect the indoor and outdoor environments, expand the sense of interior space and beautify the space, all play an active role.
II. The basic principles of interior design
1 Interior decoration design must meet functional requirements
Interior design takes the creation of a good indoor space environment as its purpose, making the indoor environment rational, comfortable and scientific. It is necessary to take into account the laws of people's activities when handling spatial relationships, spatial dimensions and spatial proportions; to rationally configure furnishings and furniture; to properly resolve indoor ventilation, daylighting and artificial lighting; and to pay attention to the overall effect of the indoor color scheme.
2 Interior design must meet spiritual requirements
The spiritual function of interior design is to influence people's emotions, and even their will and actions. Therefore, we must study the characteristics and laws of people's cognition, study people's emotions and will, and study the interaction between people and the environment. Designers must use various theories and methods to engage people's emotions and elevate them in order to achieve the desired design effect. If the indoor environment can highlight a certain concept and artistic conception, it will have a strong artistic appeal and better fulfil its spiritual function.
3 Interior design must meet modern technical requirements
The innovation of architectural space is closely related to the innovation of structural form.
The two should be harmonized and unified, fully considering the beauty of the structural form and integrating art and technology. This requires that interior designers possess the necessary knowledge of structural types and be familiar with the performance and characteristics of the structural system. Modern interior design falls within the category of modern science and technology. To make interior design better meet the requirements of its spiritual function, we must make the greatest possible use of the latest achievements of modern science and technology.
4 Interior design must meet regional characteristics and national style requirements
Owing to differences in the regions where people live and in geographical and climatic conditions, the living habits of different ethnic groups differ, as do their cultural traditions, and there are indeed great differences in architectural styles. China is a multi-ethnic country. Differences in the regional characteristics, national character, customs and cultural literacy of the various ethnic groups make interior decoration design differ as well. Different styles and features are required in design, embodying national and regional characteristics so as to evoke people's national self-respect and self-confidence.
III. Key points of interior design
The interior space is defined by the enclosure of the floor, walls and ceiling, which determines the size and shape of the interior space. The purpose of interior decoration is to create a suitable and beautiful indoor environment. The floor and walls of the interior space form the backdrop for people, furniture and furnishings, while variations on the ceiling make the interior space more varied.
1 Base decoration: floor decoration
The base surface is very important in people's field of vision. The floor is in contact with people, is viewed at close range, and is in dynamic change. It is one of the important factors of interior decoration.
It should meet the following principles:
2 The base should be coordinated with the overall environment, complementing it and setting off the atmosphere
From the point of view of the overall environmental effect of the space, the base should be coordinated with the ceiling and wall decoration, and at the same time it should set off the interior furniture and furnishings.
3 Pay attention to the division, color and texture of the floor pattern
Floor pattern design can be roughly divided into three situations. The first emphasizes the independent completeness of the pattern itself; for example, a meeting room may use a cohesive pattern to show the importance of the meeting, with colors coordinated with the meeting space to achieve a quiet, focused effect. The second emphasizes continuity and rhythm in the pattern, with a certain directionality and regularity, and is mostly used for halls, aisles and common spaces. The third emphasizes abstraction and freedom in the pattern, and is often used in irregular or freely laid-out spaces.
4 Meet the needs of the floor structure, construction and the physical properties of the building
When decorating the base, attention should be paid to the structure of the floor. On the premise of ensuring safety, it should be convenient to construct; one cannot one-sidedly pursue pattern effects, and physical properties such as moisture resistance, waterproofing and thermal insulation should also be considered. Bases are available in a wide variety of types, such as wooden floors, block floors, terrazzo floors, plastic floors and concrete floors, with a wide variety of patterns and rich colors. The design must be consistent with the entire spatial environment and complement it in order to achieve good results.
IV. Wall decoration
Within the scope of indoor vision, the wall, being vertical to people's line of sight, is in the most obvious position.
At the same time, the wall is the part that people frequently contact. The decoration of the wall is therefore very important to the interior design and must meet the following design principles:
1 Integrity
When decorating a wall, full consideration must be given to its unity with the other parts of the room, so that the wall and the entire space form a unified whole.
2 Physical requirements
The wall surface has a larger area in the interior space and occupies a more important position, so the requirements on it are higher. The requirements for sound insulation, warmth retention, fire prevention and so on vary with the nature of the space; for example, hotel guest rooms have relatively high requirements, while an ordinary unit canteen has lower ones.
3 Artistry
In the interior space, the decorative effect of the wall plays an important role in rendering and beautifying the indoor environment. The shape of the wall, its division pattern and its texture are closely related to the indoor atmosphere; in order to create the artistic effect of the interior space, the artistry of the wall surface itself cannot be ignored.
The selection of wall decoration styles is determined according to the above principles. The forms are roughly as follows: plastering, veneering, painting and rolled coverings. Focusing here on rolled coverings: with the development of industry, more and more rolled materials can be used to decorate walls, such as plastic wallpaper, wall cloth, fiberglass cloth, artificial leather and leather. These materials are widely applicable, flexible and free in use, available in a wide variety of colors, good in texture, convenient to install, moderately priced, and rich in decorative effect.
They are materials that are widely used in interior design.
V. Ceiling decoration
The ceiling is an important part of interior decoration, and is also the most varied and eye-catching interface in interior space decoration. It has a strong sense of perspective; through different treatments, combined with the styling of lamps and lanterns, it can enhance the appeal of the space and make the ceiling rich in form, colorful, novel and beautiful.
1 Design principles
Pay attention to the overall environmental effect. The ceiling, the walls and the base surface together make up the interior space and jointly create the indoor environmental effect. The design should pay attention to the harmony of the three, with each having its own character on a unified basis.
The ceiling decoration should meet both practical and aesthetic requirements. In general, the effect of the indoor space should be lighter above than below, so it is important to keep the ceiling decoration simple, to highlight the key points, and at the same time to give it a sense of lightness and art.
The ceiling decoration should ensure the rationality and safety of the ceiling structure; one cannot simply pursue styling and ignore safety.
2 Ceiling designs
(1) Flat ceiling
This type of ceiling is simple in construction and plain in appearance, and is convenient to decorate. It is suitable for classrooms, offices, exhibition halls, etc. Its artistic appeal comes from the shape, texture and patterns of the surface and from the organic arrangement of the lamps.
(2) Concave-convex ceiling
This kind of ceiling is beautiful and colorful, with a strong three-dimensional effect, and is suitable for ballrooms, restaurants, foyers, etc. Attention must be paid to the primary and secondary relationships among the various concave and convex layers and to their height differences; excessive variation should be avoided, while emphasizing rhythm and the artistry of the overall space.
(3) Suspended ceiling
Various flaps, flat plates or other forms of ceiling are hung beneath the load-bearing structure of the roof. Such ceilings are often used to meet acoustic or lighting requirements, or to pursue particular decorative effects; they are common in stadiums, cinemas, and so on. In recent years this type of ceiling has also come into common use in restaurants, cafes, shops and other buildings to create special aesthetic effects and interest.
(4) Coffered (grid) ceiling
This takes the form of a grid of combined structural beams, in which the primary and secondary beams are interlaced to express the relationship between coffers and beams; together with lamps and gypsum relief decoration, it is simple and generous, with a strong sense of rhythm.
(5) Glass ceiling
The halls and atria of modern large-scale public buildings commonly use this form, mainly to meet the needs of large-area daylighting and indoor greening, making the indoor environment richer in natural appeal and adding vitality to large spaces. It generally takes the form of a dome, a cone or a sawtooth roof.
In short, interior decoration design is a comprehensive discipline involving many fields, such as sociology, psychology and environmental science, and there is much that remains to be explored and studied. This article has mainly elaborated the basic principles and design methods of interior decoration design. Whatever style an interior design belongs to, this article should give readers a deeper understanding and appreciation of interior design; where there are inadequacies, criticism and correction are welcome.
Chinese translation (excerpt): A Brief Description of Interior Decoration - Y Miyazaki. I. Interior design elements. 1 Spatial elements: making space rational and giving people a sense of beauty is the basic task of design.
The Lathe
Machine tools used for turning external cylindrical surfaces, facing and boring are called lathes. Very little turning is done on other types of machine tools, because none can perform turning as conveniently as a lathe. Because a lathe can also be used for boring, facing, drilling and reaming in addition to turning, its versatility permits several operations to be performed on a workpiece in a single setup. This is why lathes of various types are used in production more widely than any other kind of machine tool.
Lathes existed more than two thousand years ago. The modern lathe dates back to about 1797, when Henry Maudslay built a lathe with a lead screw. This lathe provided a controlled mechanical feed of the cutting tool. This ingenious Englishman also invented a change-gear arrangement that could connect the spindle to the lead screw, so that screw threads could be cut. The main components of a lathe are the bed, the headstock assembly, the tailstock assembly, the carriage assembly, the quick-change gearbox, the lead screw and the feed rod.
The bed is the backbone of the lathe. It is usually made of well-normalized or aged gray or nodular cast iron, and forms a rigid frame on which all the other main components are mounted.
Usually there are two sets of parallel ways, an inner and an outer set, on the top of the bed. Some manufacturers make all four ways with an inverted-V shape, while others combine inverted-V and flat ways. Because the other components are mounted on and/or move along the ways, the ways are precision-machined to ensure accuracy of assembly. Likewise, care should be taken in operation to avoid damaging the ways, for any error in them often spoils the accuracy of the whole machine. The ways of most modern lathes are surface-hardened to reduce wear and abrasion and to give greater wear resistance.
The headstock is mounted in a fixed position on the inner ways at one end of the bed. It provides the power to rotate the workpiece at various speeds. It consists essentially of a hollow spindle mounted in precision bearings, and a set of speed-change gears, similar to a truck gearbox, through which the spindle can be made to rotate at a number of speeds. Most lathes provide 8 to 18 speeds, usually arranged in a geometric progression. On modern lathes, all the speeds can be obtained by shifting only two to four levers. The current trend is toward stepless speed variation by means of electrical or mechanical drives. Because the accuracy of a lathe depends to a great extent on the spindle, the spindle is of robust proportions and is mounted in heavy, closely fitted tapered roller bearings or ball bearings.
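A stepped gearbox of the kind described above spaces its spindle speeds in a geometric progression, so that each speed exceeds the previous one by a constant ratio. The sketch below computes such a series; the 30-1500 rpm range and the 12-step count are illustrative assumptions, not values for any particular machine.

```python
def spindle_speeds(n_min: float, n_max: float, steps: int) -> list[float]:
    """Return `steps` spindle speeds from n_min to n_max (rpm) in a
    geometric progression, as is conventional for stepped gearboxes."""
    ratio = (n_max / n_min) ** (1 / (steps - 1))  # constant step-to-step ratio
    return [round(n_min * ratio ** i, 1) for i in range(steps)]

# e.g. a hypothetical 12-speed headstock spanning 30 to 1500 rpm
speeds = spindle_speeds(30, 1500, 12)
print(speeds)                  # first speed 30.0, last speed 1500.0
print(speeds[1] / speeds[0])   # common ratio, roughly 1.43
```

A geometric series keeps the percentage jump between adjacent speeds constant, so the available cutting speeds are spread evenly across the whole range rather than bunched at the low end.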
Foreign Literature Translation: Wind Power (Chinese-English)
English original
Wind power in China – Dream or reality?
Hubacek

Abstract
After tremendous growth of wind power generation capacity in recent years, China now has 44.7 GW of wind-derived power. Despite the recent growth rates and promises of a bright future, two important issues - the capability of the grid infrastructure and the availability of backup systems - must be critically discussed and tackled in the medium term. The study shows that only a relatively small share of investment goes towards improving and extending the electricity infrastructure, which is a precondition for transmitting clean wind energy to the end users. In addition, the backup systems are either geographically too remote from the potential wind power sites or currently financially infeasible. Finally, the introduction of wind power to the coal-dominated energy production system is not problem-free. Frequent ramp-ups and ramp-downs of coal-fired plants lead to lower energy efficiency and higher emissions, which are likely to negate some of the emission savings from wind power. The current power system is heavily reliant on independently acting but state-owned energy companies optimizing their part of the system, and this is partly incompatible with building a robust system supporting renewable energy technologies. Hence, strategic, top-down co-ordination and incentives to improve the overall electricity infrastructure are recommended.
Keywords: Wind power, China, Power grids, Back-up systems

1. Introduction
China's wind energy industry has experienced rapid growth over the last decade. Since the promulgation of the first Renewable Energy Law in 2006, the cumulative installed capacity of wind energy had reached 44.7 GW by the end of 2010 [1]. The newly installed capacity in 2010 reached 18.9 GW, which accounted for about 49.5% of new windmills globally. The wind energy potential in China is considerable, though estimates differ from source to source. According to He et al.
[2], the exploitable wind energy potential is 600–1000 GW onshore and 100–200 GW offshore. Without considering the limitations of wind energy, such as variable power outputs and seasonal variations, McElroy et al. [3] concluded that if the Chinese government commits to an aggressive low-carbon energy future, wind energy is capable of generating 6.96 million GWh of electricity by 2030, which is sufficient to satisfy China's electricity demand in 2030.
The existing literature on wind energy development in China focuses on several discussion themes. The majority of the studies emphasize the importance of government policy in the promotion of the wind energy industry in China [4], [5], [6], [7]. For instance, Lema and Ruby [8] compared the growth of wind generation capacity between 1986 and 2006, and addressed the importance of a coordinated government policy and corresponding incentives. Several studies assessed other issues, such as the current status of wind energy development in China [9]; the potential of wind power [10]; the significance of wind turbine manufacturing [11]; wind resource assessment [5]; the application of small-scale wind power in rural areas [12]; the clean development mechanism in the promotion of wind energy in China [4]; and the social, economic and technical performance of wind turbines [13].
There are few studies which assess the challenge of grid infrastructure in the integration of wind power. For instance, Wang [14] studied grid investment, grid security, long-distance transmission and the current difficulties of wind power integration. Liao et al. [15] criticised the inadequacy of transmission lines in wind energy development. However, we believe that there is a need to further investigate these issues, since they are critical to the development of wind power in China. Furthermore, wind power is not a stand-alone energy source; it needs to be complemented by other energy sources when the wind does not blow.
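The 6.96 million GWh figure can be put in perspective with a simple capacity-factor calculation: annual energy equals installed capacity times hours per year times the fleet-average capacity factor. The 25% capacity factor used below is an illustrative assumption, not a figure from the paper.

```python
# Back-of-envelope: installed wind capacity needed to generate a target
# annual energy, given an assumed fleet-average capacity factor.
target_energy_gwh = 6.96e6   # annual generation cited from McElroy et al. [3]
hours_per_year = 8760
capacity_factor = 0.25       # assumed fleet-average CF (illustrative)

required_capacity_gw = target_energy_gwh / (hours_per_year * capacity_factor)
print(f"required installed capacity ~ {required_capacity_gw:.0f} GW")

# Compare with the 44.7 GW installed by the end of 2010 (from the text):
print(f"multiple of 2010 capacity ~ {required_capacity_gw / 44.7:.0f}x")
```

Under this assumption, meeting the 2030 demand entirely from wind would require on the order of 3000 GW of installed capacity, some 70 times the 2010 fleet, which is why the grid and backup questions raised in this paper matter.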
Although the viability and feasibility of combining wind power with other power generation technologies have been discussed widely in other countries, none of the papers reviewed the situation in the Chinese context. In this paper, we discuss and clarify two major issues in light of the Chinese wind energy distribution process: 1) the capability of the grid infrastructure to absorb and transmit large amounts of wind-powered electricity, especially when wind farms are built in remote areas; 2) the choices and viability of the backup systems needed to cope with the fluctuations of wind electricity output.

2. Is the existing power grid infrastructure sufficient?
Wind power has to be generated at specific locations with sufficient wind speed and other favourable conditions. In China, most of the wind energy potential is located in remote areas with sparse populations and less developed economies. This means that less wind-powered electricity can be consumed close to the source. A large amount of electricity has to be transmitted between supply and demand centres, leading to several problems associated with integration into the national power grid system, including grid investment, grid safety and grid interconnection.

2.1. Power grid investment
Although the two state grid companies - the State Grid Corporation of China (SGCC) and China Southern Grid (CSG) - have invested heavily in grid construction, China's power grid is still insufficient to cope with increasing demand. For example, some coal-fired plants in Jiangsu, which is one of the largest electricity consumers in China, had to drop their load ratio to 60 percent, against the international standard of 80 percent, due to limited transmission capacity [16]. This situation is the result of an imbalance of investment between power grid construction and power generation capacity.
For example, during the Eighth, Ninth and Tenth Five-Year Plans,1 power grid investments accounted for 13.7%, 37.3% and 30% of total investment in the electricity sector, respectively. Although the ratio further increased from 31.1% in 2005 to 45.94% in 2008, the cumulative investment in the power grid is still significantly lower than the investment in power generation [17]. Fig. 1 gives a comparison of the ratios of cumulative investment in power grids and power generation in China, the US, Japan, the UK and France since 1978. In most of these countries, more than half of the electric power investment has been made in grid construction. By contrast, the ratio is less than 40% in China.
According to Articles 14 and 21 of the Chinese Renewable Energy Law, the power grid operators are responsible for the grid connection of renewable energy projects. Subsidies are given subject to the length of the grid extension, at standard rates. However, Mo [18] found that the subsidies were only sufficient to compensate for capital investment and the corresponding interest, excluding operational and maintenance costs.
Again, similar to grid connection, grid reinforcement requires significant amounts of capital investment. The Three Gorges power plant has provided an example of large-scale and long-distance electricity transmission in China. Similar to wind power, hydropower is usually situated in less developed areas. As a result, electricity transmission lines are necessary to deliver the electricity to the demand centres, the majority of which are located in the eastern coastal areas and the southern part of China. According to SGCC [19], the grid reinforcement investment for the Three Gorges power plant amounted to 34.4 billion yuan (about 5 billion US dollars). This could be a lot higher in the case of wind power, for a number of reasons.
First, the total generating capacity of the Three Gorges project is approximately 18.2 GW at the moment and will reach 22.4 GW when fully operational [20], whilst the total generating capacity of the massive wind farm bases amounts to over 100 GW. Hence, more transmission capacity is absolutely necessary. Second, the Three Gorges hydropower plant is located in central China. A number of transmission paths are available, such as the 500 kV DC transmission lines to Shanghai (with a length of 1100 km), Guangzhou (located in Guangdong province, with a length of 1000 km) and Changzhou (located in Jiangsu province, with a length of 1000 km), with a transmission capacity of 3 GW each, and the 500 kV AC transmission lines to central China with a transmission capacity of 12 GW. By contrast, the majority of the wind farm bases, which are located in the northern part of China, are far away from the load centres. For example, Jiuquan, located in Gansu, has a planned generation capacity of 20 GW. The distances from Jiuquan to the demand centres of the Central China grid and the Eastern China grid are 1500 km and 2500 km, respectively. For Xinjiang, the distances are even longer, at 2500 km and 4000 km, respectively. As a result, longer transmission lines are required. Fig. 2 depicts the demand centres and wind farms in detail.

2.2. Grid safety
The second problem is related to grid safety. The large-scale penetration of wind electricity leads to voltage instability, flicker and voltage asymmetry, which are likely to cause severe damage to the stability of the power grid [21]. For example, voltage stability is a key issue in grid impact studies of wind power integration. During the continuous operation of wind turbines, a large amount of reactive power is absorbed, which leads to deterioration of voltage stability [22]. Furthermore, significant changes in the power supply from wind might damage power quality [23]. Hence, additional regulation capacity is needed.
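The transmission distances quoted above imply non-trivial line losses. A rough sketch of how delivered power shrinks with distance follows; the loss rate of about 3.5% per 1000 km is a rule-of-thumb assumption for long HVDC links, not a value from the paper.

```python
# Rough sketch of delivered power over long transmission links, assuming a
# rule-of-thumb loss of ~3.5% per 1000 km (an assumption, not a design value).
def delivered_fraction(distance_km: float, loss_per_1000km: float = 0.035) -> float:
    """Fraction of sent power that reaches the receiving end."""
    return (1 - loss_per_1000km) ** (distance_km / 1000)

# Distances taken from the text; link names are descriptive labels only.
for name, km in [("Jiuquan -> Central China grid", 1500),
                 ("Jiuquan -> Eastern China grid", 2500),
                 ("Xinjiang -> Eastern China grid", 4000)]:
    print(f"{name}: {delivered_fraction(km):.1%} of sent power delivered")
```

Even with this optimistic loss rate, a 4000 km link loses more than a tenth of the power sent, which adds to the economic case against siting wind far from the load centres without dedicated transmission investment.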
However, in a power system which derives the majority of its power from base-load providers, these requirements cannot be met easily [24]. In addition, expansion of the existing transmission lines would be necessary, since the integration of large-scale wind would cause congestion and other grid safety problems in the existing transmission system. For example, Holttinen [23] summarized the major impacts of wind power integration on the power grid at the temporal level (the impacts of power outputs at the second, minute to year level on power grid operation) and the spatial level (the impact on local, regional and national power grids). Besides the impacts mentioned above, the authors highlight other impacts, such as distribution efficiency, voltage management and adequacy of power, on the integration of wind power [23].
One of the grid safety problems caused by wind power was reported by the State Electricity Regulatory Commission (SERC) [25]. In February and April of 2011, three large-scale wind power drop-off accidents in Gansu (twice) and Hebei caused power losses of 840.43 MW, 1006.223 MW and 854 MW, respectively, which accounted for 54.4%, 54.17% and 48.5% of the total wind-powered outputs. The massive shutdown of wind turbines resulted in serious operational difficulties as the frequency dropped to 49.854 Hz, 49.815 Hz and 49.95 Hz in the corresponding regional power grids.
The Chinese Renewable Energy Law requires the power grid operators to coordinate the integration of windmills and accept all of the wind-powered electricity. However, the power grid companies have been reluctant to do so due to the above-mentioned problems as well as for technical and economic reasons. For instance, more than one third of the wind turbines in China, amounting to 4 GW of capacity, were not connected to the power grid by the end of 2008 [17].
Given that the national grid in China is exclusively controlled by the power companies – SGCC and CSG – the willingness of these companies to integrate wind energy into the electricity generation systems is critical.

2.3. The interconnection of provincial and regional power grids

The interconnection of trans-regional power grids started at the end of the 1980s. A high-voltage direct current (HVDC) transmission line was established to link the Gezhouba dam with Shanghai, which signified the beginning of regional power grid interconnection. In 2001, two regional power grids, the North China Power Grid and the Northeast China Power Grid, were interconnected. This was followed by the interconnection of the Central China Power Grid and the North China Power Grid in 2003. In 2005, two other interconnection agreements were made: between the South China Power Grid and the North, Northeast and Central China Power Grids, and between the Northwest China Power Grid and the Central China Power Grid. Finally, in 2009, the interconnection of the Central China Power Grid and the East China Power Grid was made. In today's China, the power transmission systems are composed of 330 kV and 500 kV transmission lines as the backbone, six interconnected regional power grids and one Tibet power grid [26].

It seems that the interconnectivity of regional power grids would help the delivery of wind-powered outputs from wind-rich regions to demand centres. However, administrative and technical barriers still exist. First, the interconnectivity among regions is always considered as a backup for contingencies, and could not support large-scale, long-distance electricity transmission [27]. In addition, the construction of transmission systems is far behind the expansion of wind power. The delivery of large amounts of wind power would be difficult due to limited transmission capacity. Furthermore, the quantity of inter-regional electricity transmission is fixed [27].
Additional wind power in the inter-regional transmission might have to go through complex administrative procedures and may result in profit reductions for conventional power plants.

3. Are the backup systems geographically available and technically feasible?

Power system operators maintain the security of power supply by holding power reserve capacities in operation. Although the terminologies used in the classification of power reserves vary among countries [28], power reserves are always used to keep production and generation in balance under a range of circumstances, including power plant outages, uncertain variations in load and fluctuations in power generation (such as wind) [29]. As wind speed varies on all time scales (e.g. from seconds to minutes and from months to years), the integration of fluctuating wind power generation induces additional system balancing requirements on the operational timescale [29].

A number of studies have examined approaches to stabilizing the electricity output from wind power plants. For example, Belanger and Gagnon [30] conducted a study on the compensation of wind power fluctuations by using hydropower in Canada. Nema et al. [31] discussed the application of combined wind and solar PV power generation systems and concluded that the hybrid energy system was a viable alternative to current power supply systems in remote areas. In China, He et al. [2] investigated the choices of combined power generation systems. The combinations of wind–hydro, wind–diesel, wind–solar and wind–gas power were evaluated respectively. They found, for instance, that wind–diesel hybrid systems were only used in remote areas and on isolated islands, because these systems have lower generation efficiency and higher generation costs compared to other generation systems.
Currently, the wind–solar hybrid systems are not economically viable for large-scale application; thus, these systems have either been used in remote areas with limited electricity demand (e.g. Gansu Subei and Qinghai Tiansuo) or for lighting in some coastal cities [2]. Liu et al. [32] adopted the EnergyPLAN model to investigate the maximum wind power penetration level in the Chinese power system. The authors derived the conclusion that approximately 26% of national power demand could be supplied by wind power by the end of 2007. However, the authors fail to explain the provision of power reserves at different time scales due to wind power integration.

Because of the smoothing effects of dispersing wind turbines at different locations (as exemplified by Drake and Hubacek [33] for the U.K., Roques [34] for the E.U. and Kempton et al. [35] for the U.S.), the integration of wind power has a very small impact on the primary reserves, which are available from seconds to minutes [36]. However, the increased reserve requirements are considerable for secondary reserves (available within 10–15 min), which mainly consist of hydropower plants and gas turbine power plants [29]. Besides, the long-term reserves, which are used to restore secondary reserves after a major power deficit, will be in operation to keep power production and consumption in balance over a longer timescale (from several minutes to several hours). In the following subsection, we examine the availability of power plants providing secondary and long-term reserves and investigate the viability of energy storage systems in China.

Chinese translation: Wind power in China – dream or reality? (Hubacek). Abstract: After the tremendous growth in wind power capacity in recent years, China now has 44.7 GW of wind power.
Bridge research in Europe

A brief outline is given of the development of the European Union, together with the research platform in Europe. The special case of post-tensioned bridges in the UK is discussed. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio, relating to the identification of voids in post-tensioned concrete bridges using digital impulse radar.

Introduction

The challenge in any research arena is to harness the findings of different research groups to identify a coherent mass of data, which enables research and practice to be better focused. A particular challenge exists with respect to Europe, where language barriers are inevitably very significant. The European Community was formed in the 1960s based upon a political will within continental Europe to avoid the European civil wars, which had developed into World War 2 from 1939 to 1945. The strong political motivation formed the original community, of which Britain was not a member. Many of the continental countries saw Britain's interest as being purely economic. The 1970s saw Britain joining what was then the European Economic Community (EEC), and the 1990s saw the widening of the community to a European Union, EU, with certain political goals together with the objective of a common European currency.

Notwithstanding these financial and political developments, civil engineering, and bridge engineering in particular, have found great difficulty in forming any kind of common thread. Indeed the educational systems for University training are quite different between Britain and the European continental countries. The formation of the EU funding schemes — e.g. Socrates, Brite Euram and other programs — has helped significantly. The Socrates scheme is based upon the exchange of students between Universities in different member states.
The Brite Euram scheme has involved technical research grants given to consortia of academics and industrial partners within a number of the states – a Brite Euram bid would normally be led by an industrialist.

In terms of dissemination of knowledge, two quite different strands appear to have emerged. The UK and the USA have concentrated primarily upon disseminating basic research in refereed journal publications: ASCE, ICE and other journals; whereas the continental Europeans have frequently disseminated basic research at conferences where the circulation of the proceedings is restricted.

Additionally, language barriers have proved to be very difficult to break down. In countries where English is a strong second language there has been enthusiastic participation in international conferences based within continental Europe – e.g. Germany, Italy, Belgium, The Netherlands and Switzerland. However, countries where English is not a strong second language have been hesitant participants – e.g. France.

European research

Examples of research relating to bridges in Europe can be divided into three types of structure:

Masonry arch bridges
Britain has the largest stock of masonry arch bridges. In certain regions of the UK up to 60% of the road bridges are historic stone masonry arch bridges originally constructed for horse-drawn traffic. This is less common in other parts of Europe, as many of these bridges were destroyed during World War 2.

Concrete bridges
A large stock of concrete bridges was constructed during the 1950s, 1960s and 1970s. At the time, these structures were seen as maintenance-free. Europe also has a large number of post-tensioned concrete bridges with steel tendon ducts preventing radar inspection. This is a particular problem in France and the UK.

Steel bridges
Steel bridges went out of fashion in the UK due to their need for maintenance, as perceived in the 1960s and 1970s.
However, they have been used for long-span and rail bridges, and they are now returning to fashion for motorway widening schemes in the UK.

Research activity in Europe

This gives an indication of certain areas of expertise and work being undertaken in Europe, but is by no means exhaustive. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio. The example relates to the identification of voids in post-tensioned concrete bridges, using digital impulse radar.

Post-tensioned concrete rail bridge analysis

Ove Arup and Partners carried out an inspection and assessment of the superstructure of a 160 m long post-tensioned, segmental railway bridge in Manchester to determine its load-carrying capacity prior to a transfer of ownership, for use in the Metrolink light rail system. Particular attention was paid to the integrity of its post-tensioned steel elements. Physical inspection, non-destructive radar testing and other exploratory methods were used to investigate possible weaknesses in the bridge.

Since the sudden collapse of the Ynys-y-Gwas Bridge in Wales, UK in 1985, there has been concern about the long-term integrity of segmental, post-tensioned concrete bridges, which may be prone to 'brittle' failure without warning. The corrosion protection of the post-tensioned steel cables, where they pass through joints between the segments, has been identified as a major factor affecting the long-term durability and consequent strength of this type of bridge. The identification of voids in grouted tendon ducts at vulnerable positions is recognized as an important step in the detection of such corrosion.

Description of bridge

General arrangement
Besses o' th' Barn Bridge is a 160 m long, three-span, segmental, post-tensioned concrete railway bridge built in 1969. The main span of 90 m crosses over both the M62 motorway and the A665 Bury to Prestwick Road.
Minimum headroom is 5.18 m from the A665, and the M62 is cleared by approximately 12.5 m.

The superstructure consists of a central hollow trapezoidal concrete box section 6.7 m high and 4 m wide. The majority of the south and central spans are constructed using 1.27 m long pre-cast concrete trapezoidal box units, post-tensioned together. This box section supports the in-situ concrete transverse cantilever slabs at bottom flange level, which carry the rail tracks and ballast.

The centre and south span sections are of post-tensioned construction. These post-tensioned sections have five types of pre-stressing:
1. Longitudinal tendons in grouted ducts within the top and bottom flanges.
2. Longitudinal internal draped tendons located alongside the webs. These are deflected at internal diaphragm positions and are encased in in-situ concrete.
3. Longitudinal Macalloy bars in the transverse cantilever slabs in the central span.
4. Vertical Macalloy bars in the 229 mm wide webs to enhance shear capacity.
5. Transverse Macalloy bars through the bottom flange to support the transverse cantilever slabs.

Segmental construction
The pre-cast segmental system of construction used for the south and centre span sections was an alternative method proposed by the contractor. Current thinking suggests that such a form of construction can lead to 'brittle' failure of the entire structure without warning, due to corrosion of tendons across a construction joint. The original design concept had been for in-situ concrete construction.

Inspection and assessment

Inspection
Inspection work was undertaken in a number of phases and was linked with the testing required for the structure.
The initial inspections recorded a number of visible problems, including:
• Defective waterproofing on the exposed surface of the top flange.
• Water trapped in the internal space of the hollow box, with depths up to 300 mm.
• Various drainage problems at joints and abutments.
• Longitudinal cracking of the exposed soffit of the central span.
• Longitudinal cracking on the sides of the top flange of the pre-stressed sections.
• Widespread spalling on some in-situ concrete surfaces with exposed rusting reinforcement.

Assessment
The subject of an earlier paper, the objectives of the assessment were to:
• Estimate the present load-carrying capacity.
• Identify any structural deficiencies in the original design.
• Determine reasons for existing problems identified by the inspection.

Conclusion to the inspection and assessment
Following the inspection and the analytical assessment, one major element of doubt still existed. This concerned the condition of the embedded pre-stressing wires, strands, cables or bars. For the purpose of structural analysis these elements had been assumed to be sound. However, due to the very high forces involved, a risk to the structure caused by corrosion of these primary elements was identified.

The initial recommendations, which completed the first phase of the assessment, were to:
1. Carry out detailed material testing to determine the condition of hidden structural elements, in particular the grouted post-tensioned steel cables.
2. Conduct concrete durability tests.
3. Undertake repairs to defective waterproofing and surface defects in the concrete.

Testing procedures

Non-destructive radar testing
During the first-phase investigation at a joint between pre-cast deck segments, the observation of a void in a post-tensioned cable duct gave rise to serious concern about corrosion and the integrity of the pre-stress. However, the extent of this problem was extremely difficult to determine. The bridge contains 93 joints with an average of 24 cables passing through each joint, i.e.
there were approximately 2200 positions where investigations could be carried out. A typical section through such a joint showed that the 24 draped tendons within the spine did not give rise to concern, because these were protected by in-situ concrete poured without joints after the cables had been stressed.

As it was clearly impractical to consider physically exposing all tendon/joint intersections, radar was used to investigate a large number of tendons and hence locate duct voids within a modest timescale. It was fortunate that the corrugated steel ducts around the tendons were discontinuous through the joints, which allowed the radar to detect the tendons and voids. The problem, however, was still highly complex due to the high density of other steel elements which could interfere with the radar signals, and the fact that the area of interest was at most 102 mm wide and embedded between 150 mm and 800 mm deep in thick concrete slabs.

Trial radar investigations
Three companies were invited to visit the bridge and conduct a trial investigation. One company decided not to proceed. The remaining two were given 2 weeks to mobilize, test and report. Their results were then compared with physical explorations. To make the comparisons, observation holes were drilled vertically downwards into the ducts at a selection of 10 locations, which included several where voids were predicted and several where the ducts were predicted to be fully grouted. A 25 mm diameter hole was required in order to facilitate use of the chosen borescope. The results from the University of Edinburgh yielded an accuracy of around 60%.

Main radar survey, borescope verification of voids
Having completed a radar survey of the total structure, a borescope was then used to investigate all predicted voids, and in more than 60% of cases this gave a clear confirmation of the radar findings.
In several other cases some evidence of honeycombing in the in-situ stitch concrete above the duct was found.

When viewing voids through the borescope, however, it proved impossible to determine their actual size or how far they extended along the tendon ducts, although they only appeared to occupy less than the top 25% of the duct diameter. Most of these voids, in fact, were smaller than the diameter of the flexible borescope being used (approximately 9 mm) and were seen between the horizontal top surface of the grout and the curved upper limit of the duct. In a very few cases the tops of the pre-stressing strands were visible above the grout, but no sign of any trapped water was seen. It was not possible, using the borescope, to see whether those cables were corroded.

Digital radar testing
The test method involved exciting the joints using radio-frequency radar antennas: 1 GHz, 900 MHz and 500 MHz. The highest frequency gives the highest resolution but has shallow depth penetration in the concrete. The lowest frequency gives the greatest depth penetration but yields lower resolution.

The data collected on the radar sweeps were recorded on a GSSI SIR System 10. This system involves radar pulsing and recording. The data from the antenna are transformed from an analogue signal to a digital signal using a 16-bit analogue-to-digital converter, giving a very high resolution for subsequent data processing. The data are displayed on site on a high-resolution color monitor. Following visual inspection they are then stored digitally on a 2.3-gigabyte tape for subsequent analysis and signal processing. The tape first of all records a 'header' noting the digital radar settings together with the trace number prior to recording the actual data.
When the data is played back, one is able to clearly identify all the relevant settings, making for accurate and reliable data reproduction. At particular locations along the traces, the trace was marked using a marker switch on the recording unit or the antenna. All the digital records were subsequently downloaded at the University's NDT laboratory onto a micro-computer. (The raw data prior to processing consumed 35 megabytes of digital data.) Post-processing was undertaken using sophisticated signal processing software. Techniques available for the analysis include changing the color transform and changing the scales from linear to a skewed distribution in order to highlight certain features. Also, the color transforms could be changed to highlight phase changes. In addition to these color transform facilities, sophisticated horizontal and vertical filtering procedures are available. Using a large-screen monitor it is possible to display the raw data and the transformed processed data in split screens. Thus one is able to get an accurate indication of the processing which has taken place. The computer screen displays the time-domain calibrations of the reflected signals on the vertical axis.

A further facility of the software was the ability to display the individual radar pulses as time-domain wiggle plots. This was a particularly valuable feature when looking at individual records in the vicinity of the tendons.

Interpretation of findings
A full analysis of findings is given elsewhere. Essentially, the digitized radar plots were transformed to color line scans, and where double phase shifts were identified in the joints, voiding was diagnosed.

Conclusions
1. An outline of the bridge research platform in Europe is given.
2. The use of impulse radar has contributed considerably to the level of confidence in the assessment of the Besses o' th' Barn Rail Bridge.
3. The radar investigations revealed extensive voiding within the post-tensioned cable ducts.
However, no sign of corrosion on the stressing wires had been found, except in the very first investigation.

Chinese translation: Bridge research in Europe. The common European research platform was born with the European Union.
Translation of foreign literature on data acquisition (includes English original and Chinese translation). Source: Txomin Nieva. Data Acquisition Systems [J]. Computers in Industry, 2013, 4(2): 215–237.

English original

DATA ACQUISITION SYSTEMS
Txomin Nieva

Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquisition terms are shown below.

Data collection technology has made great progress in the past 30 to 40 years. For example, 40 years ago, in a well-known college laboratory, the device used to track temperature rises in bronze made of helium was composed of thermocouples, relays, interrogators, a bundle of paper, and a pencil. Today's university students are likely to automatically process and analyze data on PCs. There are many ways you can choose to collect data. The choice of method depends on many factors, including the complexity of the task, the speed and accuracy you need, the evidence you want, and more. Whether simple or complex, the data acquisition system can operate and play its role.

The old way of using pencil and paper is still feasible for some situations, and it is cheap, easily obtained, and quick and easy to start. All you need is a digital multimeter (DMM), and you can start recording data by hand. Unfortunately, this method is prone to errors, acquires data slowly, and requires too much human analysis.
In addition, it can only collect data on a single channel; when you use a multi-channel DMM, the system soon becomes very bulky and clumsy. Accuracy depends on the skill of the writer, and you may need to do the scaling yourself. For example, if the DMM is not equipped to handle a temperature sensor directly, you need to look up the conversion scale yourself. Given these limitations, this is an acceptable method only when you need to run a quick experiment.

Modern versions of the strip chart recorder allow you to retrieve data from multiple inputs. They provide long-term paper records of data, because the data is in graphic format, and they make it easy to collect data on site. Once a strip chart recorder has been set up, most recorders have enough internal intelligence to operate without an operator or computer. The disadvantages are the lack of flexibility and the relatively low precision, often limited to a percentage point; small changes in the signal are hard to discern from the pen trace. For long-term, multi-channel monitoring the recorders can play a very good role; beyond that, their value is limited. For example, they cannot interact with other devices. Other concerns are the maintenance of pens and paper, the supply of paper and the storage of data, and, most importantly, the misuse and waste of paper. However, recorders are fairly easy to set up and operate, providing a permanent record of data for quick and easy analysis.

Some benchtop DMMs offer selectable scanning capabilities. The back of the instrument has a slot to receive a scanner card that can multiplex more inputs, typically 8 to 10 channels. This is inherently limited by the front panel of the instrument, and its flexibility is also limited because it cannot exceed the number of available channels. An external PC usually handles data acquisition and analysis.

The PC plug-in card is a single-board measurement system that uses the ISA or PCI bus to occupy an expansion slot in the PC.
They often have a reading rate of up to 1000 readings per second. 8 to 16 channels are common, and the collected data is stored directly in the computer and then analyzed. Because the card is essentially part of the computer, it is easy to set up a test. PC cards are also relatively inexpensive, partly because they rely on the host PC to provide power, mechanical housing, and the user interface.

Data collection options

On the downside, PC plug-in cards often have only 12-bit resolution, so you cannot detect small changes in the input signal. In addition, the electronic environment within a PC is often susceptible to noise, high clock rates, and bus noise. The electronic contacts limit the accuracy of the PC card. These plug-in cards also measure only a range of voltages; to measure other input signals, such as temperature and resistance, you may need external signal conditioning devices. Other considerations include complex calibration and overall system cost, especially if you need to purchase additional signal conditioning devices or adapt the signals to the card. Take this into account: if your needs fall within the capabilities and limitations of the card, the PC plug-in card provides an attractive method for data collection.

Data electronic loggers are typical stand-alone instruments that, once set up, carry out the measurement, recording, and display of data without the involvement of an operator or computer. They can handle multiple signal inputs, sometimes up to 120 channels. Accuracy rivals that of desktop DMMs, because they operate with 22-bit resolution in a 0.004 percent accuracy range. Some data electronic loggers have the ability to scale measurements, check results against user-defined limits, and output control signals.

One of the advantages of using data electronic loggers is their internal signal conditioning.
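The resolution figures above (12-bit plug-in cards versus 22-bit loggers) translate directly into the smallest voltage step a device can distinguish: one least-significant bit is the full-scale range divided by 2 to the number of bits. The 0–10 V range below is an illustrative assumption, not a figure from the text:

```python
# Smallest detectable voltage step for an ADC: step = full_range / 2**bits.
# The 0-10 V input range here is an illustrative assumption, not from the text.
full_range_v = 10.0

def lsb_step(bits: int) -> float:
    """Return the size of one least-significant bit, in volts."""
    return full_range_v / (2 ** bits)

print(f"12-bit plug-in card: {lsb_step(12) * 1e3:.2f} mV per step")
print(f"16-bit converter:    {lsb_step(16) * 1e3:.3f} mV per step")
print(f"22-bit data logger:  {lsb_step(22) * 1e6:.1f} uV per step")
```

On this assumed range, the 22-bit logger resolves steps about a thousand times smaller than the 12-bit card, which is why the text says the cards "cannot detect small changes in the input signal."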
Most can directly measure several different input signals without the need for additional signal conditioning devices. One channel can monitor thermocouples, RTDs, and voltages. Thermocouple inputs receive valuable compensation for accurate temperature measurements. The loggers are typically equipped with multi-channel cards. The built-in intelligence of the data electronic logger helps you set the measurement period and specify the parameters for each channel. Once you have set everything up, the data electronic logger operates as an unattended device. The data are held in memory, which can hold 500,000 or more readings.

Connecting to a PC makes it easy to transfer data to a computer for further analysis. Most data electronic loggers are designed to be flexible and simple to configure and operate, and most provide options for remote operation via battery packs or other methods. Because of their A/D conversion technology, certain data electronic loggers have a lower reading rate, especially when compared with PC plug-in cards; a reading rate above 250 readings per second is relatively rare. Keep in mind that many of the phenomena being measured are physical in nature, such as temperature, pressure, and flow, and generally change slowly. In addition, because of the measurement accuracy of the data electronic loggers, heavy averaging of readings is not necessary, as it often is with PC plug-in cards.

Front-end data acquisition is often packaged as a module and is typically connected to a PC or controller. Front ends are used in automated tests to collect data and to control and cycle test signals for other test equipment and the device under test. The efficiency of front-end operation is very high, and can match the speed and accuracy of the best stand-alone instruments.
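Given the figures above (a 500,000-reading memory and modest scan rates), a quick calculation shows how long a logger can run unattended before its memory fills. The 20-channel, one-scan-per-second configuration below is an assumed example, not a setup described in the text:

```python
# How long until a stand-alone logger's memory fills?
# The 500,000-reading memory is quoted in the text; the channel count
# and scan rate are an assumed example configuration.
memory_readings = 500_000
channels = 20          # assumed configuration
scan_rate_hz = 1.0     # one scan of all channels per second (assumed)

readings_per_second = channels * scan_rate_hz
seconds = memory_readings / readings_per_second
print(f"{seconds / 3600:.1f} hours of unattended logging")  # ~6.9 hours
```

For slowly changing quantities like temperature or pressure, scan rates far below one per second are common, stretching the same memory to days or weeks of unattended operation.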
Front-end data acquisition comes in many forms, including VXI versions, such as the Agilent E1419A multi-function measurement and control VXI module, as well as proprietary card cages. Although the cost of front-end units has come down, these systems can be very expensive, and unless you need their high level of performance, their prices can be prohibitive. On the other hand, they do provide considerable flexibility and measurement capability.

Good, low-cost electronic data loggers have the right number of channels (20–60 channels) and scan rates that are relatively low but common enough for most engineers. Some of the key applications include:
• Product characterization
• Hot die cutting of electronic products
• Environmental testing
• Environmental monitoring
• Composition characteristics
• Battery testing
• Building and computer capacity monitoring

A new system design

The conceptual model of a universal system can be applied to the analysis phase of a specific system to better understand the problem and to specify the best solution more easily, based on the specific requirements of that system. The conceptual model of a universal system can also be used as a starting point for designing a specific system. Therefore, using a general-purpose conceptual model will save time and reduce the cost of specific system development. To test this hypothesis, we developed a DAS for railway equipment based on our generic DAS conceptual model. In this section, we summarize the main results and conclusions of this DAS development.

We analyzed the device model package. The result of this analysis is a partial conceptual model of a system consisting of a three-tier device model. We analyzed the equipment item package in the equipment context. Based on this analysis, we included a three-level item hierarchy in the conceptual model of the system.
Equipment items are specialized into individual equipment items. We analyzed the equipment model monitoring standard package in the equipment context. One of the requirements of this system is the ability to record specific condition monitoring reports using a predefined set of data. We analyzed the equipment item monitoring standard package in the equipment context. The requirements of the system are: (i) the ability to record condition monitoring reports and event monitoring reports corresponding to items, triggered by time-trigger conditions or event-trigger conditions; (ii) the definition of private and public monitoring standards; and (iii) the ability to define custom and predefined train data sets. Therefore, we introduced "monitoring standards for equipment items", "public standards", "special standards", "equipment monitoring standards", "equipment condition monitoring standards", "equipment item condition monitoring standards" and "equipment item event monitoring standards". Train item trigger conditions, train item time-trigger conditions and train item event-trigger conditions are specializations of equipment item trigger conditions, equipment item time-trigger conditions and equipment item event-trigger conditions; and train item data sets, train custom data sets and train predefined data sets are specializations of equipment item data sets, custom data sets and predefined data sets.

Finally, we analyzed observations and monitoring reports in the equipment context. The system's requirement is to record measurements and category observations. In addition, condition and event monitoring reports can be recorded. Therefore, we introduced the concepts of observation, measurement, classification observation and monitoring report into the conceptual model of the system.

Our generic DAS conceptual model plays an important role in the design of the DAS equipment.
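The specialization relationships described above can be sketched as a small class hierarchy. The class names below are hypothetical renderings of the concepts in the text, not the authors' actual design classes:

```python
# Illustrative sketch of the specialization hierarchy described in the text.
# Class names are hypothetical renderings of the concepts, not the authors' code.
from dataclasses import dataclass

@dataclass
class TriggerCondition:
    """Generic equipment-item trigger condition."""
    item_id: str

@dataclass
class TimeTriggerCondition(TriggerCondition):
    """Fires on a schedule (equipment-item time-trigger condition)."""
    period_s: float

@dataclass
class EventTriggerCondition(TriggerCondition):
    """Fires when a named event occurs (equipment-item event-trigger condition)."""
    event_name: str

# Train items specialize the generic equipment-item conditions,
# mirroring the specialization relationships in the conceptual model.
class TrainTimeTriggerCondition(TimeTriggerCondition):
    pass

class TrainEventTriggerCondition(EventTriggerCondition):
    pass

t = TrainTimeTriggerCondition(item_id="axle-bearing-3", period_s=60.0)
assert isinstance(t, TriggerCondition)  # a train condition is-an equipment condition
print(t)
```

The point of the generic model is visible here: code written against `TriggerCondition` handles train-specific conditions without change, which is what lets the railway DAS reuse the generic DAS design.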
We used this model to organize the data that the system components operate on, and the conceptual model also made it easier to design certain components of the system. As a result, a large number of design classes in our implementation directly represent concepts specified in our generic DAS conceptual model. The development of this particular DAS through an industrial example demonstrates the usefulness of a generic conceptual model for developing a specific system.

Chinese translation: Data Acquisition Systems. Txomin Nieva. A data acquisition system, as the name suggests, is a product or process used to collect information in order to document or analyze some phenomenon.
Foreign Literature Translation (Chinese–English): Online Learning — English original

Online learning: Adoption, continuance, and learning outcome—A review of literature

Ritanjali Panigrahi, Praveen Srivastava, Dheeraj Sharma

Abstract

The use of technology to facilitate better learning and training is gaining momentum worldwide, reducing the temporal and spatial problems associated with traditional learning. Despite its several benefits, retaining students on online platforms is challenging. Through a literature review of the factors affecting adoption, continuation of technology use, and learning outcomes, this paper discusses the integration of online learning with virtual communities to foster student engagement and obtain better learning outcomes. Future directions are also discussed: the feedback mechanism, which is an antecedent of students' continuation intention, offers considerable scope for study in the virtual-community context, and the use of apps in m-learning and of cloud services can improve the ease and accessibility of online learning for users and organizations.

Keywords: Online learning, Virtual community, Technology adoption, Technology continuation, Learning outcome

Introduction

Online learning and training are gaining popularity worldwide, reducing the temporal and spatial problems associated with the traditional form of education. The primary reasons for using online learning are not only to improve access to education and training and the quality of learning, but also to reduce cost and improve the cost-effectiveness of education (Bates, 1997). Online learning is mainly provided in two ways: in synchronous and asynchronous environments (Jolliffe, Ritter, & Stevens, 2012). Unlike synchronous learning, the time-lag attribute of asynchronous learning on online platforms offers the advantages of accessing materials anytime and anywhere, the ability to reach a greater mass at the same time, and uniformity of content.
Online learning, along with face-to-face learning, is successfully used in industry as well as academia, with positive outcomes (Chang, 2016). The geographically distributed teams of an organization can receive skill training through online platforms at the same time, gaining a greater level of competitiveness. Online learning also benefits students, who can learn at their own pace given the availability of online materials. The e-learning market is becoming popular and is widely adopted by the education sector and by industry. Its growth is illustrated by the fact that the global e-learning market is expected to reach 65.41 billion dollars by 2023, growing at a cumulative average growth rate of 7.07% (Research and Markets, 2018a). In addition, the global learning management system (LMS) market is expected to increase from 5.05 billion USD in 2016 to 18.44 billion USD by 2025, growing at a rate of 15.52% (Research and Markets, 2018b).

Despite the several advantages of online learning, such as improving access to education and training, improving the quality of learning, reducing cost and improving the cost-effectiveness of education, retaining students on such platforms is a key challenge, with a high attrition rate (Perna et al., 2014). Several strategies, such as briefing, buddying, and providing feedback on the platform, have been proposed to retain and engage students (Nazir, Davis, & Harris, 2015). It has also been noted that online education requires more self-discipline from students than traditional classroom education (Allen & Seaman, 2007). Keeping users enrolled and engaged is a challenging job, as the personal touch of the instructor is missing or limited. Learning engagement, an important antecedent of learning outcome, is lower for technology-mediated learning than for face-to-face learning (Hu & Hui, 2012).
Because large amounts of money are spent on infrastructure, staff training, and so on, organizations seek to take maximum benefit from online learning, which requires an understanding of the factors that drive the adoption, continuation intention, and learning outcomes of users on online learning platforms. The primary focus of research therefore remains on how to retain online learning users and increase the efficiency of online learning.

Users may learn inside and outside the classroom. Inside-classroom learning happens through instructors, whether face-to-face, purely online, or blended (a combination of face-to-face and purely online learning), whereas outside-classroom learning is conducted by users anytime and anywhere after class. The exponential growth of the Internet has enabled individuals to share information, participate, and collaborate to learn from virtual communities (VC) anytime and anywhere (Rennie & Morrison, 2013). In a virtual community, people do everything that they do in real life, but leaving their bodies behind (Rheingold, 2000). Virtual communities keep their users engaged through familiarity, perceived similarity, and trust, by creating a sense of belongingness (Zhao, Lu, Wang, Chau, & Zhang, 2012). It is essential to assess the role of less constrained, informal modes of learning (Davis & Fullerton, 2016), such as virtual communities, in formal learning to engage and retain students.

Discussion

Getting a new idea adopted, even when it has obvious advantages, is often very difficult (Rogers, 2003). Consistent with this, despite the advantages of online learning such as improving accessibility and quality and reducing cost, it has a long way to go to be adopted and used by organizations because of resistance at different levels (Hanley, 2018).
The reasons for the resistance offered by employees in organizations include a perceived poor focus of the e-learning initiative, lack of time to learn a new way of working, too much effort to change, lack of awareness, and resistance to change (Ali et al., 2016; Hanley, 2018). From an institutional point of view it is crucial to overcome this resistance in order to adopt and implement online learning systems successfully.

Understanding the factors behind online learning adoption, continuation intention, and learning outcomes is vital for an organization providing an e-learning platform, because the success of the platform depends on successful adoption, continued use, and finally the achievement of the desired outcomes. The literature shows that national culture affects adoption and moderates the relationships between the variables of adoption and use; the results for adoption and use of technology may therefore differ across countries with different cultural dimensions. At a broader level, the perceived characteristics of the innovation (here, online learning), such as relative advantage, compatibility, complexity, trialability, and observability, play a significant role in adoption. At an individual level, the primary factors of adoption are individual expectancies, such as perceived usefulness, perceived ease of use, perceived enjoyment, performance expectancy, and effort expectancy, and external influences, such as subjective norm, social norms, surrounding conditions, national culture, and social network characteristics. The primary factors in the continuation of technology use, by contrast, are the individual's experiences with the technology, such as satisfaction, confirmation, self-efficacy, flow, trust, we-intention, sense of belongingness, immersion, and IS qualities. Perceived usefulness and perceived ease of use are found to be vital for both technology adoption and continued use.
This implies that the usefulness of the technology and how easy it is to use determine both adoption and continuation. Apart from these technology enablers, platform providers should consider the technology inhibitors that negatively affect acceptance. Factors in learning outcomes such as self-efficacy, virtual competence, engagement, and design interventions should be considered before designing and delivering content on an online learning platform in order to achieve optimal learning outcomes. Learners' intention to use full e-learning in developing countries depends on the learners' characteristics and on their adoption of blended learning (Al-Busaidi, 2013). Studies, for example by Verbert et al. (2014), have shown that blended learning yields the best outcome in terms of grades when students engage in online collaborative learning with the teacher's initiation and feedback. On the contrary, some studies have shown that content such as business games does not need interaction with the instructor; in fact, such interaction is negatively related to perceived learning (Proserpio & Magni, 2012). MOOC (Massive Open Online Course) users have organized face-to-face meetings to fulfill their belongingness or social connectedness as part of their learning activity (Bulger, Bright, & Cobo, 2015). This indicates that not everyone does well with a purely digitized form of learning, and hence face-to-face and online components should be integrated for better outcomes.

Lack of human connection is one of the limitations of online learning (Graham, 2006) and may reduce satisfaction. To address this limitation, personalization functions were introduced into e-learning systems. Satisfaction, perceived and actual performance, and self-efficacy scores all increase in personalized online learning, where learning materials are provided according to the cognitive capability and style of each individual (Xu, Huang, Wang, & Heales, 2014).
Although the personalization of e-learning systems is beneficial, it can be socially and ethically harmful, and special attention should be given to issues such as privacy compromise, lack of control, the commodification of education, and reduced individual capability (Ashman et al., 2014). Personal e-learning systems collect user information to understand users' interests and learning requirements, which intrudes on individual privacy. The system uses this information to show personalized content over which individuals have no control; they are thereby limited to certain personal content, which reduces their individual capabilities.

Studies such as Zhao et al. (2012) have shown that VCs create a sense of belongingness and keep members engaged, which improves learning outcomes, and users in the same age group are less likely to drop out (Freitas et al., 2015). Engagement is promoted when criteria such as problem-centric learning with clear expositions, peer interaction, active learning, instructor accessibility and passion, and helpful course resources are met (Hew, 2015). Social interaction through social networking produces an intangible asset known as social capital (Coleman, 1988), in terms of trust, collective action, and communication. Social capital is positively related to online learning satisfaction in group interactions, class interactions, and learner-instructor interactions, and it increases students' e-learning performance in groups (Lu, Yang, & Yu, 2013).

The continuous development of mobile technology has expanded the opportunity to learn from mobile devices anywhere, anytime. M-learning is especially beneficial for accessing education in remote areas and developing countries. The success of m-learning in organizations depends on organizational, people, and pedagogical factors apart from technological ones (Krotov, 2015).
A range of mobile technologies, such as laptops, smartphones, and tablets, is embraced by students to support informal learning (Murphy, Farley, Lane, Hafeez-Baig, & Carter, 2014). Learning through mobile devices poses both opportunities and challenges: it provides flexibility in learning, but it is a limitation for those who lack connectivity and access to such devices. In student-centered learning, especially collaborative and project-based learning, the use of mobile devices can be promoted through mobile apps (Leinonen, Keune, Veermans, & Toikkanen, 2014). The use of mobile apps, together with guidance from teachers, integrates reflection into classroom learning (Leinonen et al., 2014).

Cloud computing provides organizations with a way to enhance their IT capabilities without a huge investment in infrastructure or software. Its benefits are low cost, scalability, centralized data storage, no maintenance on the user side (no software needed), easy monitoring, and availability and recovery; its challenges are that it requires fast and reliable Internet access and raises privacy and security issues (El Mhouti, Erradi, & Nasseh, 2018). The primary factors in the adoption of cloud computing for e-learning are ease of use, usefulness, and security (Kayali, Safie, & Mukhtar, 2016). A private cloud inside an educational institute can provide the additional benefit of not compromising on the security and data-privacy concerns associated with cloud computing (Mousannif, Khalil, & Kotsis, 2013). Cloud computing supports online learning platforms in storing and processing the enormous amounts of data they generate.
The problem of managing the growing numbers of online users, contents, and resources can be resolved by using cloud computing services (Fernandez, Peralta, Herrera, & Benitez, 2012).

Future directions

Future directions for research in online learning are as follows. First, the feedback mechanism used in institutional online learning has not been used to measure continuation intention in VCs. Feedback enables learners to define goals and track their progress through dashboard applications that promote awareness, reflection, and sense-making (Verbert et al., 2014). Students who received teachers' feedback along with online learning achieved better grades than those who did not (Tsai, 2013), and students perceive feedback systems more positively than educators do (Debuse & Lawley, 2016). Although immediate feedback is one of the dimensions of flow (Csikszentmihalyi, 2014), this factor has not been studied in a VC context. It is vital for managers to check whether feedback on a community post fosters members' continuation intention, and they should design user interfaces that encourage the provision of feedback. Second, it is high time to develop an integrated model of formal learning (online and blended) with VCs for student engagement. Informal learning is not, in itself, limited to a body of knowledge; rather, it results from the interaction of people via communities of practice, networks, and other forms (Rennie & Mason, 2004). Networked communities build intimacy and support, which helps self-directed learning (Rennie & Mason, 2004), an important parameter for online learning. Community commitment (Bateman et al., 2011), immersion (Shin et al., 2013), we-intentions (Tsai & Bagozzi, 2014), a sense of belongingness (Zhao et al., 2012), and similar factors from the VC would help students continue their engagement for a better learning outcome.
Moreover, collaborative chat participation in MOOCs has been found to slow the rate of attrition over time (Ferschke, Yang, Tomar, & Rosé, 2015). It is of great importance to check whether learning outcomes improve when a virtual community is integrated or embedded in the learning environment (online and blended). Educators and managers should encourage their students and employees to participate in VCs. Third, the growing adoption of mobile devices has expanded the arena of e-learning platforms. Integrating virtual communities with online learning via a mobile platform can foster student engagement, resulting in a higher learning outcome. Fourth, cloud computing has great potential for dealing with the scalability issues arising from the rise in the numbers of users, contents, and resources in online learning; furthermore, it can provide tremendous benefits to organizations as well as users in terms of ease of access, flexibility, and lower cost. Although a few studies cover cloud computing infrastructure in education and pedagogic processes, empirical research on cloud computing for education is very shallow (Baldassarre, Caivano, Dimauro, Gentile, & Visaggio, 2018). As mobile devices are often limited by storage space, future researchers are invited to carry out effective research on the integration of cloud computing and mobile learning to understand the factors affecting learning outcomes.

Conclusion

Understanding the antecedents of e-learning adoption, continuance, and learning outcomes on online platforms is essential to ensuring the successful implementation of technology in learning and achieving the maximum benefits. This study shows that factors such as PU, PEoU, PE, culture, attitude, subjective norms, and system and information inhibitors contribute to the adoption of technology.
Factors such as satisfaction, confirmation, user involvement, system quality, information quality, feedback, self-efficacy, social identity, and perceived benefits determine the continuation of technology use. This implies that the factors for adoption and for continuation intention differ: the attitude toward and usefulness of a system are essential for adoption, while experience and satisfaction in the environment lead to continuation intention. The literature also shows that learning outcomes depend on self-efficacy, collaborative learning, team cohesion, technology fit, learning engagement, self-regulation, interest, and related factors.

The contribution of the paper can be summarized as: understanding the factors studied for adoption, continuance, and learning outcomes in an online environment, and providing future research directions for educators and managers for the successful implementation of technology on online platforms to achieve maximum benefits.

Chinese translation: Online learning: Adoption, continuance, and learning outcome—A review of literature. Abstract: Worldwide, the use of technology to facilitate better learning and training is gaining momentum, reducing the temporal and spatial problems associated with traditional learning.
Foreign Literature Translation (including the English original and the Chinese translation)

Source: Jens Clausen, Birgit Blättel-Mink, Lorenz Erdmann, Christine Henseling. Contribution of Online Trading of Used Goods to Resource Efficiency: An Empirical Study of eBay Users [J]. Sustainability, 2010, 2: 10-30.

English original

Contribution of Online Trading of Used Goods to Resource Efficiency: An Empirical Study of eBay Users

Jens Clausen, Birgit Blättel-Mink, Lorenz Erdmann and Christine Henseling

Abstract

This paper discusses the sustainability impact (contribution to sustainability, reduction of adverse environmental impacts) of online second-hand trading. A survey of eBay users shows that a relationship between the trading of used goods and the protection of natural resources is hardly recognized. Moreover, environmental motivation and the willingness to act in a sustainable manner differ widely between groups of consumers. Given these results from a user perspective, the paper looks for objective indications of online second-hand trading's environmental impact. The greenhouse gas emissions resulting from the energy used for the trading transactions appear to be considerably lower than the emissions due to the (avoided) production of new goods. The paper concludes with a set of recommendations for second-hand trade and consumer policy. Information about the sustainability benefits of purchasing second-hand goods should be included in general consumer information, and arguments for changes in behavior should be targeted at different groups of consumers.

Keywords: online marketplaces; online auctions; consumer; electronic commerce; used products; second-hand market; sustainable consumption

1. Introduction

Online auction and trading platforms are increasing the opportunities for sustainable consumption. The potential of online second-hand trading lies largely in the opportunity to extend the life span of products, thereby avoiding the additional environmental stresses of purchasing new goods.
To date, private households have often failed to exploit the potential for reusing products because of high transaction costs. Trade in second-hand goods remained limited to regional markets, and these barriers frequently prevented local and regional used-goods markets from attaining critical mass and becoming attractive for both buyers and sellers. In recent years, however, the rapidly increasing use of the Internet and of trading platforms such as eBay has fundamentally transformed the underlying conditions of such markets.

Online markets have not only significantly increased the number of market participants; they have also changed the roles traditionally assigned to consumers and producers. Exchange sites, auction platforms and other Internet-based trading models, where users are not merely buyers but at the same time active sellers of products or services, have shifted the role of consumers.

Against this background, this article examines consumption processes using the example of eBay, the world's largest online trading platform for used goods, by focusing on the following question: which sustainability potentials are connected with the electronic trading of used goods, and how can these potentials be exploited?
This question lies at the center of the research project "From Consumer to Prosumer—Development of new trading mechanisms and auction cultures to promote sustainable consumption." The project deliberately links various streams of research and insight, especially concerning intensification of use, lifestyle research, and life-cycle assessment in the fields of information technology and telecommunications, and integrates them from the perspective of the guiding research question.

After an overview of the scientific work on environmental attitudes and behavior in the context of Internet-based used-goods trading, and an empirical look at Internet usage, in Chapter 2, the empirical results of an online survey on online trading and sustainability are presented in Chapter 3. Chapter 4 draws conclusions from the empirical study, and Chapter 5 focuses on the ecological assessment of used-goods trading. The paper concludes with some remarks on the consequences for second-hand trade, online platforms, and consumer policy.

2. Internet-Based Used Goods Trading from a Subjective Perspective

Sustainability researchers in the social sciences assume that environmentally oriented behavior is supported to a non-negligible degree by positive attitudes toward the environment and by knowledge about the environment [1-7]. Time and again, however, representative surveys of the population provide evidence of a discrepancy between, on the one hand, concern about increasing environmental devastation and its consequences, together with knowledge about the environment, and, on the other, environmental behavior that is in line with such knowledge.
It is possible to identify groups of individuals who display environmentally friendly behavior but not the corresponding attitudes toward the environment (e.g., older single women), just as there are groups of individuals who display a high degree of ecological awareness but whose behavior is nonetheless not consistently environmentally oriented (e.g., families whose environmentally friendly behavior is organized to the hilt, but who still drive a family car). Three bundles of characteristics that influence the sustainability of styles of consumption have emerged in the research [8]: the household's social situation (socio-demographic characteristics and time resources), consumer preferences (subjective preferences relating to the selection of products and behaviors), and actual consumption behavior. Socio-demographic characteristics that substantially influence differences in sustainable consumption patterns include age, educational level, gender, marital status, and income, with women, well-educated people, and parents striving for consistency between attitudes and behavior.

Grunenberg and Kuckartz [1] were able to identify a group they called the "environmentally committed" in their study, which was representative for Germany: "[A group] that takes environmental problems more seriously and is actively committed to solving them. Entirely consistent pro-environment behavior is not demanded of this group; that would require, for example, that these individuals would not just eat exclusively organically-grown food, but would also sell their cars and take bicycling vacations." (Grunenberg/Kuckartz, p. 204 [1]).
The following indicators were used to define the group of environmentally committed individuals: membership in an organization promoting conservation or environmental protection; a donation to such an organization in the previous year; familiarity with the term "sustainable development"; a high willingness to pay for improved environmental protection; frequent recourse to information about environmental problems in specialist periodicals; environmental mentality type 1 (motto: "Be a role model when it comes to environmental protection!"); and declared shared responsibility for environmental protection (statement: "It isn't difficult for an individual person to do something for the environment!") (Grunenberg/Kuckartz, p. 204 [1]). Members of this group are often in the familial phase of life, have a relatively high level of education, often live in major cities or small communities and seldom in medium-sized towns or villages, tend to come from West Germany, as a rule hold a higher professional position (senior staff, upper-middle-level or upper-level civil servants, the professions), have a medium to high, but not very high, income, and tend to live in quiet neighborhoods in single- or two-family houses. Regarding their political preferences, the authors ascertained a more pronounced interest in politics in general and a clear focus on post-materialist values.

As to trading in used goods as a specific area of consumption, a study of West Berlin showed that buying and selling used goods is linked fairly rarely to ecological motives [9]. Pragmatic reasons for selling used goods are mentioned more frequently, for instance "making room" or "getting rid of items we no longer need." In contrast, when purchasing used goods, financial motives are more important. The proportion of men who buy and sell used goods is somewhat higher than that of women. The average age is 36. More than two-thirds of those offering goods on the second-hand market have a job.
Housewives and students each comprise 10% of sellers, the unemployed and pensioners about 5% each. The sellers often live in multi-person households and less commonly live alone. Among the buyers of second-hand goods, 29.4% are in the 19- to 25-year age group and 35.7% in the 26- to 35-year group. Most have a job (61%), and students form 15% of the buyers, a substantially higher share than among the sellers. Educational levels are above average among buyers, too, as is the proportion of individuals living in multi-person households.

3. Online Trading and Sustainability—Empirical Results

Taking the example of eBay, the relationships mentioned above were examined more closely in an online survey carried out by the authors in November 2008. The survey was intended to gain insight into eBay users' consumption patterns, their attitudes, and their ways of dealing with used products on eBay. It was directed at private eBay users who use the site both for buying and for selling and who carried out at least one transaction during the preceding 12 months. In total, 2,511 valid questionnaires were analyzed. In contrast to Germany's total population, more men (57.1%) than women responded, more persons who live with their partners (73.4%), and more people living in households of three or more (52.4%). The sample also displays relatively high educational (49.4% level A) and employment status (49.2% working full time), and the respondents tend to live in or near urban areas. The age distribution of the respondents (biggest cluster 40–49 years old; 29.8%) and their income distribution (40% medium to low income) are comparable with the overall population. More women (45.1%) than men (34.0%) in the sample live in households with children; the proportion of men increases with age. The women who buy or sell on eBay have lower incomes.
48.7% of all female eBay buyers earn less than 2,000 Euros per month, compared to 39.8% of all male eBay buyers. Compared with the group of Internet users mentioned above, the sample analyzed here differs only in relation to income, with Internet users displaying higher incomes. The following subjects are addressed: attitudes toward the environment and motives for trading on eBay, and eBay users' attitudes regarding used products and their handling of them. A typology of eBay users' consumption patterns derived from the data is then presented.

4. Conclusions from the Empirical Study

The results of the survey show that environmental aspects play only a minor role for the majority of the surveyed eBay users when trading used products. Regarding their motivations in particular, other aspects have been more important to date: practical and financial considerations, as well as having fun trading on eBay. Opportunities to make trading on eBay more environmentally friendly lie in providing information about the environmental relevance of used-goods trading, e.g., directly on the eBay platform. In addition, the broad range of motivations that eBay speaks to offers good starting points for creating alliances of motivations that connect ecological aspects with other aspects of use. A concrete strategic point of intervention is the option of climate-neutral shipping on the eBay platform, which eBay users have indicated a high willingness to use. The survey also made it possible to identify various starting points for intensifying used-goods trading. When developing communications strategies in this regard, the value of used goods for others should be emphasized more strongly, for example by pointing more clearly to the quality, as well as the monetary value, of used products in such communications.
Interesting approaches in this direction include quality tests of used products, as well as tools with which users can learn about the prices they can get for used products. The test lab introduced by eBay in mid-2008 is an interesting approach: certain used products were tested to show their value in relation to the value of new products.

The results highlight the significance of situations of change in life for trading used products. Such phases, for example the birth of one's first child or retirement, can (under certain circumstances) function as times when people start trading used goods, or they can be situations in which the willingness to buy and to sell second-hand products is especially high.

An important aspect of used-goods trading is that the effort invested in selling a product must be financially worthwhile. The responses showed, however, that this is not always the case. This aspect must be taken into account when developing measures to intensify used-goods trading. Another finding: a central problem of used-goods trading lies in the fact that many buyers are unsure of the quality of the products for sale (lack of warranties, doubts about whether the products are in fact in proper working order). To address this concern, it is important to develop mechanisms that increase trust in second-hand products and reflect their quality. Initial starting points include initiatives to refurbish used products; one example is the initiative that purchases, refurbishes and then resells used cell phones and provides a warranty. The identification of five consumption patterns in online used-goods trading helped to structure the various behavior patterns of private eBay users. Above all, it is remarkable how small the socio-demographic differences between the respondents are. The five types do differ significantly, however, in their attitudes and their behavior on eBay.
They also differ with respect to their concern for the sustainability-related contexts of eBay trading. The environmentally oriented buyers of used goods and the prosumers, as different as they may be, are those upon whom we pin our hopes for sustainability. Although the former group displays a certain consistency in terms of attitudes and behavior, which is also characterized by increasing awareness of sustainability, it is the prosumers who treat new and used products with care in order to resell them, thereby contributing to lengthening the life spans of products, even if they are not aware of this effect.

Chinese translation: The Contribution of Online Used Goods Trading to Resource Efficiency: An Empirical Study of eBay Users. Jens Clausen, Birgit Blättel-Mink, Lorenz Erdmann, Christine Henseling. Abstract: This paper examines the sustainability effects of online second-hand trading (its contribution to sustainable development and the reduction of adverse environmental impacts).
Chinese-English Foreign Literature Translation (document contains the English original and a Chinese translation)

Data Acquisition Systems

Data acquisition systems are used to acquire process operating data and store it on secondary storage devices for later analysis. Many of the data acquisition systems acquire this data at very high speeds, and very little computer time is left to carry out any necessary, or desirable, data manipulations or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables of interest. It is very often necessary to design special-purpose data acquisition systems and interfaces to acquire the high-speed process data. This special-purpose design can be an expensive proposition.

Powerful mini- and mainframe computers are used to combine the data acquisition with other functions, such as comparisons between the actual output and the desirable output values, and to then decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three-term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high-speed adaptive control systems, which require the use of suitable process models and considerable online manipulation of data.

Microcomputer-based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulations of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow-speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis.
Such hardwired data loggers are being replaced by the new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting the process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop mathematical models necessary for adaptive and optimal process control.

The data acquisition function carried out by data loggers varies from one system to another. Simple data logging systems acquire data from a few channels, while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. The rudimentary data loggers scan the selected number of channels, connected to sensors or transducers, in a sequential manner, and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a non-dedicated data logger, since any transducer or sensor can then be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal conditioning modules.

Microcomputer-controlled data acquisition facilitates the scanning of a large number of sensors. The scanning rate depends upon the signal dynamics, which means that some channels must be scanned at very high speeds in order to avoid aliasing errors, while there is very little loss of information by scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second, while slow channels can be sampled once every five minutes.
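The multi-rate scanning described above can be sketched in a few lines. This is a minimal illustration, not the text's own algorithm; the channel names and periods are assumptions chosen for the example.

```python
# Minimal sketch of multi-rate channel scanning: fast channels are sampled
# on every tick, slow channels only on every Nth tick.

def scan_schedule(channels, ticks):
    """Return the list of (tick, channel) samples taken over `ticks` ticks.

    `channels` maps a channel name to its sampling period in ticks
    (1 = sample on every tick).
    """
    samples = []
    for t in range(ticks):
        for name, period in channels.items():
            if t % period == 0:
                samples.append((t, name))
    return samples

# Illustrative channels: a "fast" one sampled every tick, a "slow" one
# sampled every 5 ticks (names and rates are assumptions).
channels = {"pressure": 1, "ambient_temp": 5}
samples = scan_schedule(channels, 10)
fast = [s for s in samples if s[1] == "pressure"]
slow = [s for s in samples if s[1] == "ambient_temp"]
```

A programmable logger does essentially this bookkeeping in firmware, which is what lets it avoid aliasing on fast channels without drowning in redundant samples from slow ones.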
The conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner, and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer-based data loggers can be used to scan some fast channels at a higher frequency than other slow-speed channels.

The vast majority of the user-programmable data loggers can be used to scan up to 1000 analog and 1000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15,000 analog and digital channels. The data from digital channels can be in the form of Transistor-Transistor Logic (TTL) or contact closure signals. Analog data must be converted into digital format before it is recorded, and this requires the use of suitable analog-to-digital converters (ADCs). The characteristics of the ADC will define the resolution that can be achieved and the rate at which the various channels can be sampled. An increase in the number of bits used in the ADC improves the resolution capability. Successive-approximation ADCs are faster than integrating ADCs. Many microcomputer-controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10,000 channels per second.

Most data loggers have a resolution capability of ±0.01% or better; it is also possible to achieve a resolution of 1 microvolt. The resolution capability, in absolute terms, also depends upon the range of input signals. Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies from 1 microvolt to 50 microvolts. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges.
An alternative is the auto-ranging facility available on some data loggers.

The accuracy with which the data are acquired and logged on the appropriate storage device is extremely important. It is therefore necessary that the data acquisition module should be able to reject common-mode noise and common-mode voltage. Typical common-mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a term which defines the ratio of the power levels of two signals. Thus, if the reference and actual signals have power levels of Nr and Na respectively, they will have a ratio of n decibels, where

n = 10 log10(Na / Nr)

Protection against maximum common-mode voltages of 200 to 500 volts is available on typical microcomputer-based data loggers. The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out.

In many situations, it becomes necessary to alter the frequency at which particular channels are sampled, depending upon the values of data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes. If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster, so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer-controlled intelligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.

The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular sampling interval. This raw data often means very little to the typical user.
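The decibel relation n = 10 log10(Na/Nr) given above can be checked numerically. The power levels in this sketch are illustrative values, not figures from the original text.

```python
import math

def power_ratio_db(p_actual, p_reference):
    """Power ratio of two signals in decibels: n = 10 * log10(Na / Nr)."""
    return 10 * math.log10(p_actual / p_reference)

# A common-mode rejection of 120 dB corresponds to interfering power
# being attenuated by a factor of 10**12 relative to the signal.
ratio = power_ratio_db(1e12, 1.0)
```

This is why the 110-150 dB rejection figures quoted above represent enormous attenuation factors, on the order of 10^11 to 10^15 in power.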
To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on paper tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to the engineering units. Paper tape punches are slow-speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values, which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded, even though it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the pre-set low and high limits.

If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference from the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present, or the need for extremely low common-mode leakage current exists (as in many medical-electronics applications), an isolation amplifier is required.
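The linearise-and-scale step described above amounts to mapping raw converter counts through a calibration curve into engineering units. A minimal sketch follows; the calibration points and the temperature interpretation are illustrative assumptions, not values from the original text.

```python
import bisect

# Illustrative calibration curve: raw ADC counts -> engineering units
# (here imagined as degrees Celsius). The points are assumptions and
# deliberately nonlinear, as a real transducer curve would be.
CAL_COUNTS = [0, 1024, 2048, 3072, 4095]
CAL_UNITS = [0.0, 25.0, 52.0, 81.0, 110.0]

def to_engineering_units(raw):
    """Linearise and scale a raw reading by interpolating the calibration curve."""
    if raw <= CAL_COUNTS[0]:
        return CAL_UNITS[0]
    if raw >= CAL_COUNTS[-1]:
        return CAL_UNITS[-1]
    i = bisect.bisect_right(CAL_COUNTS, raw)
    x0, x1 = CAL_COUNTS[i - 1], CAL_COUNTS[i]
    y0, y1 = CAL_UNITS[i - 1], CAL_UNITS[i]
    # Linear interpolation between the two nearest calibration points.
    return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)

value = to_engineering_units(512)  # halfway between the first two points
```

A programmable logger performs exactly this kind of table lookup on-line, which is what freed users from the paper-tape-and-mainframe workflow described above.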
Isolation amplifiers may use optical or transformer isolation.

Analog function circuits are special-purpose circuits that are used for a variety of signal conditioning operations on signals which are in analog form. When their accuracy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplication, division, powers, roots, nonlinear functions such as for linearizing transducers, rms measurements, computing vector sums, integration and differentiation, and current-to-voltage or voltage-to-current conversion. Many of these operations can be purchased in available devices such as multiplier/dividers, log/antilog amplifiers, and others.

When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter. Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.

In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors, since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process.
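The error caused by a signal that keeps moving during conversion can be estimated from its maximum slew rate: for a sine input A·sin(2πft), the worst-case change over a conversion time t_conv is roughly 2πfA·t_conv. The amplitude, frequency and conversion time below are illustrative assumptions.

```python
import math

def max_change_during_conversion(amplitude, freq_hz, t_conv_s):
    """Worst-case change of a sine input A*sin(2*pi*f*t) during one
    conversion, approximated by max slew rate (2*pi*f*A) times t_conv."""
    return 2 * math.pi * freq_hz * amplitude * t_conv_s

# A 10 V, 50 Hz signal digitized by a converter taking 100 microseconds:
err = max_change_during_conversion(10.0, 50.0, 100e-6)
# The input can move by roughly 0.3 V during the conversion window;
# a sample-hold freezes the level so the converter sees a constant input.
```

This back-of-the-envelope figure is exactly why sample-hold circuits are inserted ahead of the converter for all but the slowest signals.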
Sample-hold circuits are common in multichannel distribution systems, where they allow each channel to receive and hold the signal level.

In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog-to-digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range from IC chips to rack-mounted instruments, convert analog input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonicity, resolution, conversion speed, and stability. A choice of input ranges, output codes, and other features is available. The successive-approximation technique is popular for a large number of applications, with the most popular alternatives being the counter-comparator types and dual-ramp approaches. The dual-ramp has been widely used in digital voltmeters.

D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converters exists which have the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary from full scale to zero, and in some cases, to negative values.

Component Selection Criteria

In the past decade, data-acquisition hardware has changed radically due to advances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer.
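The successive-approximation technique mentioned above can be sketched in a few lines: each bit is tried from the MSB down and kept if the internal DAC's trial level does not exceed the input. This is a behavioral sketch only; the 0-10 V range and 8-bit width are illustrative assumptions.

```python
def sar_adc(v_in, v_ref=10.0, bits=8):
    """Successive-approximation conversion sketch: try each bit from the
    MSB down, keeping it if the resulting DAC level stays at or below v_in."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)
        # Internal DAC output for the trial code: trial * v_ref / 2**bits.
        if trial * v_ref / (1 << bits) <= v_in:
            code = trial
    return code

# 2.5 V on a 0-10 V range with 8 bits: the ideal code is 2.5/10 * 256 = 64.
code = sar_adc(2.5)
```

Because the loop runs once per bit rather than once per code, an n-bit conversion needs only n comparisons, which is why successive approximation is faster than the counter-comparator and integrating approaches mentioned above.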
Signals may be obscured by noise, RFI, ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.

Data-acquisition systems may be separated into two basic categories: (1) those suited to favorable environments like laboratories, and (2) those required for hostile environments such as factories, vehicles, and military installations. The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, vats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.

At the other end of the spectrum are laboratory applications, such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophisticated instruments. Here the designer's problems are concerned with the performing of sensitive measurements under favorable conditions, rather than with the problem of protecting the integrity of collected data under hostile conditions.

Systems in hostile environments might require components for wide temperatures, shielding, common-mode noise reduction, conversion at an early stage, redundant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary for the required signal-to-noise ratios.

The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:
1. Resolution and accuracy required in final format.
2. Number of analog sensors to be monitored.
3. Sampling rate desired.
4. Signal-conditioning requirement due to environment and accuracy.
5. Cost trade-offs.

Some of the choices for a basic data-acquisition configuration include:
1. Single-channel techniques.
A. Direct conversion.
B. Preamplification and direct conversion.
C. Sample-hold and conversion.
D. Preamplification, sample-hold, and conversion.
E. Preamplification, signal-conditioning, and direct conversion.
F. Preamplification, signal-conditioning, sample-hold, and conversion.
2. Multichannel techniques.
A. Multiplexing the outputs of single-channel converters.
B. Multiplexing the outputs of sample-holds.
C. Multiplexing the inputs of sample-holds.
D. Multiplexing low-level data.
E. More than one tier of multiplexers.

Signal-conditioning may include:
A. Ratiometric conversion techniques.
B. Range biasing.
C. Logarithmic compression.
D. Analog filtering.
E. Integrating converters.
F. Digital data processing.

We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.

Multiplexers

When more than one channel requires analog-to-digital conversion, it is necessary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.

Analog Multiplexers

Analog multiplexer circuits allow the time-sharing of analog-to-digital converters between a number of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common (as shown in Fig. 1). The switches may be addressed by a digital input code.

Many alternative analog switches are available in electromechanical and solid-state forms.
Electromechanical switch types include relays, stepper switches, crossbar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays (about 1 ms). The mechanical switches provide high dc isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 kV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high-resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed in number of operations. A reed relay might have a life of 10⁹ operations, which would allow a 3-year life at 10 operations/second.

Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors (FETs) are used in most multiplexers. They have superseded bipolar transistors, which can introduce large voltage offsets when used as switches. FET devices have a leakage from drain to source in the off state, and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source. Enhancement-mode MOSFETs have the advantage that the switch turns off when power is removed from the MUX. Junction-FET multiplexers always turn on with the power off. A more recent development, the CMOS (complementary MOS) switch, has the advantage of being able to multiplex voltages up to and including the supply voltages. A ±10-V signal can be handled with a ±10-V supply.

Trade-off Considerations for the Designer

Analog multiplexing has been the favored technique for achieving lowest system cost.
The decreasing cost of A/D converters and the availability of low-cost digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a given system will hinge on trade-offs between the following factors:
1. Resolution. The cost of A/D converters rises steeply as the resolution increases, due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.
2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited for 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors become difficult to minimize. Analog and digital multiplexing are often combined in very large systems.
3. Speed of measurement, or throughput. High-speed A/D converters can add a considerable cost to the system. If analog multiplexing demands a high-speed converter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.
4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult with analog multiplexing. Signals less than 1 V generally require differential low-level analog multiplexing, which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of fixed-gain converters on each channel, with signal conditioning designed for the channel requirement and digital multiplexing, may be more efficient.
5. Physical location of measurement points.
Analog multiplexing is suited for making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environments. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics that are required for long-distance transmission.

Digital Multiplexing

For systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families. The 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D converters of 8-bit resolution onto a common data bus. This digital multiplexing example offers little advantage in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, and not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections. This alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open-collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built from 1-of-16 decoders to the required size. This technique also allows additional reliability, in that a failure of one A/D does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed.
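One common way to judge whether an analog multiplexer is fast enough is the first-order RC settling estimate: the switch-plus-source resistance charging the bus capacitance must settle to within half an LSB before conversion starts. The component values below are illustrative assumptions, not figures from the original text.

```python
import math

def settling_time(r_ohms, c_farads, bits):
    """Time for a first-order RC circuit to settle to within 1/2 LSB of an
    n-bit converter: exp(-t/RC) < 1 / 2**(n+1)  =>  t > (n+1) * ln(2) * R*C."""
    return (bits + 1) * math.log(2) * r_ohms * c_farads

# Illustrative: 1 kOhm of switch + source resistance into 100 pF of bus
# capacitance, feeding a 12-bit converter. Roughly 0.9 microseconds must
# elapse after channel switching before an accurate sample can be taken.
t = settling_time(1e3, 100e-12, 12)
```

The same estimate shows why higher resolution tightens the speed budget: each extra bit adds another ln(2)·RC of mandatory settling time.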
For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.

Analog multiplexers can be more difficult to characterize. Their speed is a function not only of internal parameters but also of external parameters such as channel source impedance, stray capacitance, the number of channels, and the circuit layout. The user must be aware of the limiting parameters in the system to judge their effect on performance. The non-ideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and interactions with sources and following amplifiers. Moreover, the circuit layout can compound these effects.

Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together. It may be necessary to avoid shorted channels when power is removed, so a channels-off-with-power-down characteristic is desirable. In addition to the channel-addressing lines, which are normally binary-coded, it is useful to have inhibit or enable lines to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channel addressing. Another requirement for both analog and digital multiplexers is the tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.

Chinese translation: Data Acquisition Systems. Data acquisition systems are used to acquire process data and store it on secondary storage devices for later analysis.
Foreign Literature Translation (original and translated text)
Title: Site Management
Source: Kazuhisa Kuba, Christopher Monz, Bård-Jørgen Bårdsen, Vera Helene Hausner [J]. Journal of Outdoor Recreation and Tourism, Volume 22, June 2018, Pages 1-8. Translation length: about 8,700 characters.

English

Role of site management in influencing visitor use along trails in multiple alpine protected areas in Norway
Kazuhisa Kuba, Christopher Monz, et al.

Abstract

Site-level management, such as the construction of parking lots, formal trails and other visitor facilities, is a common means of confining visitor use in protected areas. The effectiveness of site management, in terms of providing visitor access while limiting impacts, has not been broadly evaluated in parks in the Nordic regions, given the tradition of open access and the prevalence of multiple entry points. Moreover, in Norwegian parks, visitor facilities are often provided by the Norwegian Trekking Association, which is the country's largest outdoor organization, and have originally been designed for public access to recreation rather than to meet management objectives in protected areas. In this study, we explored whether formal trails tend to focus visitor use by assessing observable indicators of visitor use along 64 formal trails (417.3 km) and 234 informal trails (66.9 km). The assessment was conducted across a vast geographic area consisting of 11 protected areas in the Alpine North Environmental Zone in Norway. We found that Norwegian protected areas generally have a low level of observable visitor use along informal trails. In comparisons, the importance of site management was statistically evident only for the presence of trash, as facilities such as marked trails and parking lots were negatively related to the amount of trash along associated informal trails. There was also a trend of less use along informal trails depending on the degree of site management present in the parks.

Keywords: Protected areas, Visitor monitoring, Study design, Line transect sampling, Scandinavia

1.
Introduction

Recreation and tourism use in protected areas continues to increase in many locations worldwide (Balmford et al., 2015, Cordell, 2008). Although the degree of ecological impact caused by visitor use is dependent on many factors such as ecosystem type, use intensity, timing of use, and type of visitor activity (Hammitt et al., 2018, Pickering, 2010), understanding the spatial distribution of visitor use has been receiving increased attention in the literature recently (e.g., Monz, Cole, Leung, & Marion, 2010; D'Antonio and Monz, 2016). This increased focus on the spatial aspects of use is due to the importance of understanding use characteristics at a range of spatial scales and of managing the overall area affected by recreation in many protected areas (Cole, 2004, Monz and Leung, 2006).

The assessment of informal trails created by visitors is often used as an indicator of off-trail hiking behavior. Understanding the physical characteristics of informal trails, which can often form extensive networks, provides information on the spatial extent of human activities in parks and other protected areas (Hagen et al., 2012, Leung et al., 2011, Marion et al., 2006, Monz et al., 2010, Monz et al., 2012, Walden-Schreiner and Leung, 2013, Wimpey and Marion, 2011). However, as trails can also be created by livestock and wildlife, knowledge about the visitor use of informal trails is crucial for managing protected areas. This is particularly the case in Norway, where sheep and reindeer grazing are prevalent (Austrheim, Solberg, & Mysterud, 2011) and the public right of access, or Allemannsretten, grants locals and tourists alike the right to access and move freely on all open lands, either public or private (Anonymous, 1957).
In addition to hiking and camping, Allemannsretten also allows for traditional activities such as harvesting berries, mushrooms, herbs or other plants, and the preferred locations of these activities may not always be served by the formal trail system (Hammitt, Kaltenborn, Emmelin, & Teigland, 2013; Kaltenborn, Haaland, & Sandell, 2001; Gundersen, Mehmetoglu, Vistad, & Andersen, 2015). Increased visitor use of informal trails in remote areas could increase access to vulnerable sites, disturb wildlife and their habitats, spread weeds and non-native species, and fragment landscapes (Bradford and McIntyre, 2007, Hagen et al., 2012, Leung and Marion, 2000, Manning et al., 2006, Pickering, 2010, Wimpey and Marion, 2011). For example, the protected area networks in southern Norway support the last remaining populations of wild reindeer (Rangifer tarandus) in Europe, and increased visitor use has been associated with changes in their condition and spatial distribution (Gundersen et al., 2015, Gundersen et al., 2013, Strand et al., 2014). Resource subsidies (i.e., food) brought by visitor use may further favor generalist predators or scavengers at the expense of native wildlife species such as ground-nesting birds and the Arctic fox (Vulpes lagopus; e.g., Hagen et al., 2012; Newsome et al., 2015). Therefore the dispersion of visitor use along the trails and around visitor facilities needs to be monitored and managed.

Limiting access by area-wide or individual track restrictions is a management strategy rarely applied in Norwegian protected areas (Engen et al., 2018, Gundersen et al., 2015). Visitors often enter from multiple locations without registering or paying fees, regardless of whether facilities are available (Hausner, Engen, Bludd, & Yoccoz, 2017). Site management is therefore a preferred management strategy, with the use of physical infrastructure and visitor facilities becoming more common (Gundersen et al., 2015).
Site-level management has been widely used to manage human impacts in sensitive ecosystems by limiting the spatial extent of visitor use (Yale, 1991, Hall and McArthur, 1993, Curthoys, 1998, Kuo, 2002, Hammitt et al., 2018). Park facilities such as formal trails, boardwalks, and designated sites often serve the dual objective of providing visitor amenities and access while protecting park resources. The utility and appropriateness of designating trails and providing facilities is dependent on the context, with factors such as desired objectives, visitation levels, and activity types influencing specific designs and applications (Marion & Reid, 2007). Regardless, appropriate management of gateways and trails concentrates visitor use and influences the formation of destinations and travel routes in parks, often referred to as node and linkage patterns (Manning, 1979, Monz et al., 2010). In particular, formal trails can protect resources by concentrating visitor use impacts, and are regarded as one of the most important park facilities in protected areas (Hill and Pickering, 2009, Marion and Leung, 2001, Marion and Leung, 2004). While the absence of visitor facilities may remain the best strategy from a visitor experience perspective in some protected areas, controlling the spatial distribution of visitor use by site-level management (e.g., concentrating visitor use impacts on maintained formal trails) is generally considered desirable for protected areas with increasing visitation levels (Leung and Marion, 1999, Marion and Leung, 2004, Park et al., 2008, Wimpey and Marion, 2011).

Managers and scientists need to monitor and assess visitor use impacts on ecosystems at larger spatial scales (e.g., potentially across several protected areas) to understand the dynamics of visitor use impacts at the ecosystem level (Fancy et al., 2009, Monz et al., 2010).
Also, many protected area agencies are currently not able to conduct sufficient visitor monitoring due to limited financial and human resources (Buckley, 2003, Hadwen et al., 2007). In Norway, visitor monitoring has been limited, but a new national park branding and visitor management strategy is currently being implemented that expects all national parks to have a plan for visitor monitoring in place by 2020 (Higham et al., 2016). The new visitor management strategy strives to increase local value creation by attracting visitors to Norwegian national parks through good visitor experiences, while strengthening the protection of park qualities through improved visitor management (Norwegian Environmental Agency, 2015). A broad-scale approach using a sampling of observable indicators can provide a cost-effective method to document visitor use distribution at the park level and could be useful for implementing this new visitor management strategy.

Given this background, we designed this study to better understand visitor use and associated impacts along formal and informal trails in eleven Norwegian protected areas. In these locations, actual counts of visitors are difficult to obtain due to many of the aforementioned issues. We therefore relied on assessing the occurrence and location of physical evidence of visitors as indicators of use. This objective was particularly challenging given the lack of formal access points in many locations, so our study relied on a sampling strategy of likely access locations and indirect, observable measures of visitor use. Specifically, the study addressed the following research questions:

• What are the primary observable indicators of visitor use in Norwegian protected areas?
• Using observable indicators of visitor use, how does use vary on formal trails and informal trails?
• Does the level of park infrastructure affect the frequency of occurrence and distribution of observable indicators of use?

2. Methods

2.1.
Study area and park selection

Approximately 17% of mainland Norway is protected by 37 national parks (10%), 202 protected landscapes (4.5%), and other types of protected areas (2.5%) under the Nature Diversity Act (Statistics Norway, 2014). Many of the protected areas are located in the Alpine North Environmental Zone, a region derived from a stratification of Europe based on climate, geomorphology, geology and soil characteristics. The Alpine North covers mountains and uplands in Scandinavia, and is dominated by arctic, arctic-alpine and dwarf shrub tundra, and various forest types (Jongman et al., 2006, Metzger et al., 2005).

We included a large number of parks in an effort to contrast levels of park management; that is, some parks are highly managed with formal trails and other infrastructure, while others have little management presence and infrastructure. The selection criteria were that the park: (1) should be alpine and situated within the Alpine North Environmental Zone; (2) must have been established earlier than 2003; and (3) needs to have a management plan, so that information about the management is available for analysis. We also included two protected areas that did not have management plans (Stabbursdalen NP and Øvre Dividal NP) in Northern Norway to ensure a wide geographic gradient, which covers a distance of over 1200 km from south to north. One park (Stabbursdalen NP) was later excluded due to logistical difficulties. We sampled five national parks (NP) and six protected landscapes (PL) during the summer visitor season, from late June to early September.

2.2. Selection of sampling locations

We first identified entry points on topographical maps (1:50,000 scale) for trails leading into the parks. A circular arc was drawn from the road access point to a 5, 7.5 or 10 km radius, depending on the size and shape of the protected area.
A sufficient number of circular arcs were randomly selected until the arcs represented approximately 10% of the total area in each protected area. All formal and informal trails located in these circular arcs were sampled.

2.3. Line transect sampling

In order to reasonably sample such a large geographic area, we employed a rapid assessment approach by sampling observable indicators of visitor use along formal and informal trails. We defined informal trails as those trails that branched out from formal trails, which had enough width for one person to walk, had some exposed mineral soil, and did not appear on the topographical map. Livestock, visitors, or traditional local use could have created some of these informal trails, and many existed before the protected areas were designated. We expected visitors to use maps to select trails for entering the park, and we accordingly defined these as formal trails. We defined off-trail hiking as visitors departing from these formal trails to explore the existing network of informal trails or potentially create new ones. Our method could not distinguish tourists from local visitors, who are more familiar with the sites and may tend to hike off trails more often than park visitors (Gundersen et al., 2015), but we could assess whether site management would channel the majority of people to the formal trails to reduce ecological impacts.

Line transect sampling (e.g., Bårdsen & Fox, 2006) was performed along formal trails by recording observable indicators of visitor use. Since our main interest was to analyze off-trail hiking (i.e., dispersion of visitor use), we sampled both the formal and the informal trails. Higher visitor use on informal trails relative to formal trails indicated a higher degree of dispersion in our study.
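The dispersion measure just described can be illustrated with a minimal sketch. All function names, counts and trail lengths below are invented for illustration; this is not the study's data or code.

```python
def occurrence_rate(n_traces, trail_km):
    """Observed visitor traces per kilometre of sampled trail."""
    if trail_km <= 0:
        raise ValueError("trail length must be positive")
    return n_traces / trail_km

# Illustrative counts for one hypothetical park
formal = occurrence_rate(n_traces=120, trail_km=40.0)   # 3.0 traces/km
informal = occurrence_rate(n_traces=45, trail_km=9.0)   # 5.0 traces/km

# A ratio above 1 suggests relatively more use dispersing onto informal trails
dispersion_index = informal / formal
print(round(dispersion_index, 2))  # 1.67
```

Normalizing by trail length is what makes the index comparable across parks with very different amounts of sampled trail.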
Using each trail as a long line transect, we walked along the center of the trail at a slow pace, looking to both sides, and recorded all observable indicators of visitor use: firepits, trash, damaged trees, visitor sites, trampled vegetation, and the presence of ongoing recreation activity (tent, boat, etc.). The transect distance from the starting point was recorded every 100 m using a recreation-grade Garmin Colorado 400 GPS device (Garmin Corporation, Olathe, KS, USA). We recorded the distance from the trail to all observable visitor use within each of these 100 m segments until we reached 5, 7.5 or 10 km. In addition, at 500 m intervals we used perpendicular transects 20 m long to assess the effective width of the formal trails for the detection and distribution of observable indicators of visitor use. To increase sampling efficiency over a broad scale, we included a stop rule at 500 m for informal trails. As much as 70% of the informal trails ended before 500 m, which indicates that our sampling length was reasonable for our purposes. Six field staff conducted the field survey between late June and early September.

2.4. Site level management

Cabins operated by the Norwegian Trekking Association ("Den Norske Turistforening", DNT), formal parking lots and formal trails were identified using available topographical and park maps. We hypothesized that visitors who use facilities such as tourist cabins would largely confine their use to formal trails, while others, not channelized by such facilities, would disperse more onto informal trails. Accordingly, visitor impacts would be higher on informal trails in areas without facilities. Sufficient data on visitor numbers at each entry point were lacking in most parks; visitor numbers at DNT and other tourist cabins were typically available, but represented only a proportion of total use.
Additionally, the proportion of impacts on informal trails relative to formal trails reflected the dispersal in the park relative to visitor facilities (e.g., if there were no net dispersal in the park depending on visitor facilities, then there would be a constant relationship between observable impacts on informal and formal trails).

2.5. Statistical analyses

2.5.1. Trail level

All statistical analyses were performed in R (ver. 2.15.1; R Development Core Team, 2012). A synthetic variable called "occurrence rate" was calculated by dividing the number of observed visitor traces (i.e., the occurrences of indicators of visitor use) by the total length of the trails sampled; it represented an index of observed visitor traces per unit of trail length that was directly comparable across parks. We examined whether site management influenced the occurrence rate of firepits, trash and the presence of recreation activity on informal trails using Generalized Linear Mixed Models (GLMM: Bolker et al., 2009; Zuur, Hilbe, & Ieno, 2013). The counts of observed visitor traces were modeled with a Poisson distribution using a log-link function. For the presence of recreation activity and firepits, the numbers of occurrences were very low, and we therefore used GLMMs on presence/absence data assuming a binary distribution with a logit-link function. GLMMs were fitted using the glmer function in the lme4 package (Bates, Maechler, & Bolker, 2012). We examined the effects of formal trails on the number of occurrences of trash, firepits and presence of recreation activity along informal trails. The occurrence rate on formal trails was included in the model to check whether dispersion along informal trails was higher than expected from visitor use of formal trails. Visitor use of formal trails was the best proxy we could use in the absence of actual visitor numbers per trail, but we also included the number of visitors in cabins as a covariate.
Parks were treated as a random factor, and all mixed effects models in the present study were fitted with random intercepts only. We checked for multicollinearity among explanatory variables before analyzing the relationship between site management and visitor use/impact, as this would jeopardize the assumptions of the GLMM. We selected models for inference based on the second-order Akaike Information Criterion (AICc) from a set of a priori defined candidate models (e.g., Burnham & Anderson, 2002). We fitted models with maximum likelihood (ML) when comparing models with different fixed effects, and we used restricted maximum likelihood (REML) when extracting estimates and their precision from the selected model (Zuur et al., 2013). Models were checked for constant variance of the residuals and the presence of outliers.

2.5.2. Park level

We hypothesized that parks with a higher level of visitor facilities are more likely to concentrate visitor use along formal trails. We collected statistics for each park on the number of tourist cabins, the number of parking lots, and the length of formal trails divided by park size. All predictors were log10(x + 1) transformed, where x denotes each predictor of visitor impacts, namely the density of trails, parking lots and cabins. We used partial least squares (PLS) regression to analyze the relationship between park infrastructure and visitor impacts (firepits, trash and presence of recreation activity). PLS regression has been found to be less sensitive to sample size issues than multiple regression, and can be used when there are few observations and many correlated predictors (Carrascal, Galván, & Gordo, 2009). It extends multiple regression by analyzing the effects of a linear combination of predictors on the responses. Uncorrelated latent factors (PLS components) are extracted that maximize the explained variance of the dependent variables.
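The study's trail-level models were GLMMs fitted with glmer in R. As a hedged, language-neutral illustration of two ingredients of that workflow, the Python sketch below computes the log-likelihood of an intercept-only Poisson rate model (with trail length as exposure, mirroring the "occurrence rate" idea) and the AICc used for model selection. All counts and lengths are invented; this is not the study's model.

```python
import math

def aicc(log_lik, k, n):
    """Second-order Akaike Information Criterion (Burnham & Anderson)."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def poisson_loglik(counts, lengths_km, rate_per_km):
    """Log-likelihood of trace counts under a constant rate per km;
    trail length enters as a Poisson exposure (log-link offset)."""
    ll = 0.0
    for y, L in zip(counts, lengths_km):
        mu = rate_per_km * L
        ll += y * math.log(mu) - mu - math.lgamma(y + 1)
    return ll

# Invented trail-level data: trace counts and sampled lengths (km)
counts = [3, 0, 7, 2, 5]
lengths = [1.2, 0.4, 2.5, 0.9, 1.8]

# MLE for the intercept-only model: total traces / total length
rate_hat = sum(counts) / sum(lengths)   # 2.5 traces per km
ll = poisson_loglik(counts, lengths, rate_hat)
print(round(aicc(ll, k=1, n=len(counts)), 2))  # 18.4
```

With small samples the AICc penalty term 2k(k+1)/(n-k-1) matters: adding parameters is punished more heavily than under plain AIC, which is why it is preferred for candidate-model comparison here.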
We selected the number of PLS components by the increase in the root mean squared error of prediction (RMSEP). The significance of the predictors was assessed using the Variable Importance for the Projection (VIP), which measures the importance of an explanatory variable for the building of the PLS components, where 0.8 is regarded as significant for the model (Eriksson, Johansson, Kettaneh-Wold, & Wold, 2001). The goodness of fit was evaluated using the adjusted R2, which takes into account the number of predictors used in the model. We used the pls package version 2.3-0 for PLS regression, which includes standardization of variables prior to analysis (Mevik & Wehrens, 2007).

3. Results

In total, 64 formal trails (417.3 km) and 234 informal trails (66.9 km) were sampled in 11 parks. A typical entrance study site took a field crew of two three days to complete. A total of 2044 visitor traces were recorded and analyzed. More than half of the visitor use traces observed were occurrences of trash (1356 in total). Despite an intensive sampling effort, observed visitor trace occurrences along the trails were relatively infrequent, and 80.7% of occurrences were found within 3 m of the trail.

3.1. Visitor use at the trail level

The sample sizes of most observable indicators were too limited to model the relationship with site management. Trash was the only variable with a sufficient sample size to model the relationships at the trail level, where the amount of trash along informal trails was a function of formal trails, visitor numbers, and parking areas. The presence of formal trails and parking lots predicted less trash along informal trails, and higher visitor numbers predicted more visitor use-related impacts on the informal trails. The among-park (random) effect was small relative to the intercept, so we calculated the Nagelkerke R2 without including the random effect (R2 = 0.44).

3.2.
Visitor use at the park level

All the visitor impacts in the PLS models were only marginally improved by using two instead of one PLS component (the increase in variance explained from one to two components was 0.8-2.7% for firepits, trash and recreation traces). We therefore retained only one component for all three impacts. The density of trails was the only significant variable explaining the distribution of firepits, trash and occurrences of recreation activity along informal trails at the park level (VIP > 0.8). Parks with a denser network of formal trails had relatively fewer impacts on the informal trails. The park-level model explained 67.3% of the variance in recreation traces, 50% of the firepits and 32.8% of the trash occurrences on informal trails. The estimate for trash at the park level was also highly uncertain, and the 95% CI showed that the effect was not significantly different from zero. There were too few parks to explain the high variance of observable visitor impacts on informal trails.

4. Discussion

Our analysis of the relative difference between visitor trace occurrences on formal versus informal trails provided some support for the hypothesis that site level management tends to concentrate visitor use and impacts on formal trails. Overall, there were surprisingly few observable indicators of visitor use on both formal and informal trails in Norwegian parks, with trash representing more than half of all the traces recorded, followed by firepits. We only had sufficient data to statistically examine the occurrence of trash on informal trails, and we found that the presence of parking lots and marked trails resulted in a lower amount of trash on informal trails relative to main paths. Our results were consistent with a few recent case studies indicating that the node-linkage pattern of visitor use (i.e., visitors confining their use to established routes) may be relevant also in Norway (Gundersen et al., 2013, Strand et al., 2014, Vorkinn, 2003).
The effect of site management was evident for trash, as marked trails and parking lots significantly reduced the amount of trash along informal trails, whereas increased visitor use (i.e., cabin visitor numbers and visitor traces along formal trails) tended to increase trash levels along informal trails in the parks. The most comprehensive visitor monitoring program has been conducted in the Dovrefjell-Sunndalsfjella National Park using automatic counters, questionnaires, GPS tracking and field observation. The results show that 80-90% of the visitors in the park use formal trails and visitor infrastructure, which is promising with respect to the management objective of channeling visitor use away from sensitive areas and species such as the wild reindeer (Gundersen et al., 2013). Our study added to these findings by showing that these patterns were consistent also over broad scales and multiple protected areas.

Due to the low number of park staff, Norwegian protected areas are unlikely to have the capacity to clean up trash along informal trails. There have been clean-ups on formal trails in some parks, but no quantitative data are available (Sletten, Norwegian Environmental Agency, pers. comm.). On informal trails, the occurrence of trash most likely reflected visitor use patterns along informal trails in the parks. In this regard, the results of our study indicated that visitor use along the sampled trails in the alpine protected areas is currently relatively low. In a study of resource conditions on backcountry campsites by Monz and Twardock (2010), the amount of trash present was a means of identifying the most impacted sites. The amount of trash co-varied with other visitor-related impacts such as vegetation loss.
Examining the occurrence of trash may be useful for understanding visitor use and other visitor-related impacts in the parks, but it is also important for visitor experience and for understanding the ecological effects of anthropogenic food resources on generalist predators (e.g., Hamel, Killengreen, Henden, Yoccoz, & Ims, 2013; Newsome et al., 2015). Our focus in this study was on the trail level, where we controlled for park-level variance, but the distribution of trash at a broader scale within parks is particularly important for capturing landscape effects on wildlife and for managing the parks.

Several authors refer to the effectiveness of site management strategies for managing visitor impacts on sensitive ecosystems (Yale, 1991, Hall and McArthur, 1993, Curthoys, 1998, Kuo, 2002). In protected area management, site management such as formal trails and other park facilities can minimize visitor impacts on ecosystems by concentrating visitor use (Kuo, 2002, Leung and Marion, 2000, Marion and Leung, 2001). In particular, Leung and Marion (2000) point out that formal trails can reduce negative effects of visitors on ecosystems, such as trampling and soil erosion, by preventing the potential proliferation of informal trails. The absence of such facilities may in some cases be most desirable for "pristine areas" (Gundersen et al., 2015), but additional site level management might be needed to avoid dispersion to vulnerable sites in parks with increasing numbers of visitors (Hagen et al., 2012).

One of the challenges in protected area management is to understand the relationship between local and visitor use. Farrell and Marion (2002) refer to the confounding problem of natural resources shared between local residents and visitors, and point out that some impacts in protected areas are attributable both to visitors and to local residents.
Also, Muhar, Arnberger, and Brandenburg (2002) mention that visitor traces such as the amount of trash left in the landscape are correlated not only with visitor numbers, but also with individual behavior and local traditions. For example, Gundersen et al. (2013) found that tourists, local cabin owners and local users distribute differently in the landscape, with locals moving off trail for harvest activities to a much larger extent. In alpine areas of Europe, traditional land use is usually an integral part of nature conservation policies, and informal trails, local cabins, and fences are part of the cultural heritage in the parks (Plieninger et al., 2006, Speed et al., 2012). Grazing activities, particularly by sheep and reindeer, have been important in shaping the alpine landscape in Scandinavia, and many of these informal trails were created in the past (Austrheim et al., 2011, Speed et al., 2012). Studies of dispersion by visitors in the parks could therefore not be limited to measures of informal trail proliferation, but also had to record actual use along informal trails through physical evidence of use. Regarding the number of informal trails, there was a possibility that the parks with higher site management by coincidence had more livestock. It was also very challenging to distinguish traces created by park visitors from other physical traces that might have been created by local people. Especially in Norway, the many possible entry points into the parks and the lack of entrance registration make it hard to use standard methods or to estimate the relative importance of visitor and local use.
Road and Bridge Engineering: English-Chinese Translated Literature (document contains the English original and a Chinese translation)

Original: Asphalt Mixtures - Applications, Theory and Principles

1. Applications

XXX is the most common of its applications, however, and the one that will be XXX. XXX "flexible" is used to distinguish these pavements from those made with Portland cement, which are classified as rigid pavements; that is, XXX it provides the key to the design approach which must be used XXX. XXX XXX down into high and low types, the type usually XXX product is used. The low types of pavement are made with the cutback, or emulsion, XXX type may have several names. However, XXX is similar for most low-type pavements and XXX mix, forming the pavement. The high type of asphalt XXX grade.

Fig. 1 A modern XXX.
Fig. 2 Asphalt concrete at the San Francisco XXX.

They are used when high wheel loads and high volumes of traffic occur and are
Figure 1. Sample images of the Market-1501 dataset. All images are normalized to 128x64. (Upper:) sample images of three pedestrians with distinctive appearance. (Middle:) we show three pedestrians who have very similar appearance. (Lower:) some sample distractor images (left) as well as junk images (right) are provided.

As a minor contribution, inspired by state-of-the-art image search systems, we also propose an unsupervised Bag-of-Words (BoW) representation.
After a codebook is generated on the training data, each pedestrian image is represented as a histogram of visual words. In this step, a number of techniques are integrated, e.g., the root descriptor [2], negative evidence [14], burstiness weighting [16], avgIDF [41], etc. Moreover, several further improvements are adopted, such as weak geometric constraints, a Gaussian mask, multiple queries, and re-ranking. Using a simple dot product as the similarity measure, we show that the proposed BoW representation yields competitive recognition accuracy while achieving a fast response time.
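A minimal sketch of the BoW matching idea described above: build a visual-word histogram per image and compare histograms with a dot product. It omits the root descriptor, burstiness weighting, avgIDF and the other refinements, and the word indices below are toy values, not real quantized descriptors.

```python
from collections import Counter
import math

def bow_histogram(visual_word_ids, vocab_size):
    """l2-normalized histogram of visual-word indices for one image."""
    counts = Counter(visual_word_ids)
    hist = [counts.get(w, 0) for w in range(vocab_size)]
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

def similarity(h1, h2):
    """Dot-product similarity between two normalized histograms."""
    return sum(a * b for a, b in zip(h1, h2))

# Toy quantized local descriptors (visual-word indices) for three images
query    = bow_histogram([0, 0, 3, 5, 5, 5], vocab_size=8)
match    = bow_histogram([0, 3, 5, 5], vocab_size=8)
nonmatch = bow_histogram([1, 2, 4, 6], vocab_size=8)

print(similarity(query, match) > similarity(query, nonmatch))  # True
```

Because the histograms are l2-normalized, the dot product is the cosine similarity, which is what makes a plain inner product a usable ranking score here.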
2. Related Work

Table 1. Comparison of Market-1501 with existing datasets [20, 10, 44, 22, 19, 4].

First, while most existing datasets use hand-cropped bounding boxes, the Market-1501 dataset employs a state-of-the-art detector, the Deformable Part Model (DPM) [9]. Built on "perfect" hand-drawn bounding boxes, current methods do not fully consider the misalignment of pedestrian images, a problem that is always present in DPM-based bounding boxes. As shown in Figure 1, misalignment and partially missing parts are common in the detected images.
Figure 2. Sample images of the distractor dataset.

Distractor dataset. We stress that scale is an important issue in person re-identification research. We therefore further augment the Market-1501 dataset with an additional distractor set. This set contains over 500,000 bounding boxes, consisting of background false alarms as well as pedestrians who do not belong to the 1,501 annotated identities. Sample images are shown in Figure 2.
[Figure: a simple example of the difference between evaluation metrics on the Market-1501 dataset; true matches and false matches are shown in green and red, respectively, for three rank lists.]

In the Market-1501 dataset, the query image is a hand-drawn bounding box, and each query has on average 14.8 cross-camera ground-truth matches. We therefore use mean average precision (mAP) to evaluate overall performance; for each query, we calculate the average precision (AP).
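The AP/mAP protocol can be sketched as follows, using the standard definition that averages precision over the ranks of the true matches. Gallery indices and ground-truth sets are invented for illustration.

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean precision at the ranks of true matches."""
    relevant = set(relevant_ids)
    hits = 0
    precisions = []
    for rank, gid in enumerate(ranked_ids, start=1):
        if gid in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """mAP: mean of per-query APs over (ranked_list, relevant_set) pairs."""
    return sum(average_precision(r, g) for r, g in queries) / len(queries)

# Two toy queries: ranked gallery indices and their true-match sets
q1 = ([3, 7, 1, 5], {3, 1})   # true matches at ranks 1 and 3 -> AP = 5/6
q2 = ([7, 2, 5, 9], {2, 9})   # true matches at ranks 2 and 4 -> AP = 1/2

print(round(mean_average_precision([q1, q2]), 3))  # 0.667
```

Unlike rank-1 accuracy, AP rewards ranking all of a query's cross-camera matches highly, which is why it suits a dataset with many ground truths per query.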
Bridge Engineering: English-Chinese Translated Literature (document contains the English original and a Chinese translation)

BRIDGE ENGINEERING AND AESTHETICS

Evolution of bridge engineering: a brief review

Among the early documented reviews of construction materials and structure types are the books of Marcus Vitruvius Pollio in the first century B.C. The basic principles of statics were developed by the Greeks, and were exemplified in works and applications by Leonardo da Vinci, Cardano, and Galileo. In the fifteenth and sixteenth centuries, engineers seemed to be unaware of this record, and relied solely on experience and tradition for building bridges and aqueducts. The state of the art changed rapidly toward the end of the seventeenth century when Leibnitz, Newton, and Bernoulli introduced mathematical formulations. Published works by Lahire (1695) and Belidor (1792) about the theoretical analysis of structures provided the basis for the field of mechanics of materials.

Kuzmanovic (1977) focuses on stone and wood as the first bridge-building materials. Iron was introduced during the transitional period from wood to steel. According to recent records, concrete was used in France as early as 1840 for a bridge 39 feet (12 m) long to span the Garonne Canal at Grisolles, but reinforced concrete was not introduced in bridge construction until the beginning of this century. Prestressed concrete was first used in 1927.

Stone bridges of the arch type (integrated superstructure and substructure) were constructed in Rome and other European cities in the Middle Ages. These arches were half-circular, with flat arches beginning to dominate bridge work during the Renaissance period. This concept was markedly improved at the end of the eighteenth century and found structurally adequate to accommodate future railroad loads. In terms of analysis and use of materials, stone bridges have not changed much, but the theoretical treatment was improved by introducing the pressure-line concept in the early 1670s (Lahire, 1695).
The arch theory was documented in model tests where typical failure modes were considered (Frezier, 1739). Culmann (1851) introduced the elastic center method for fixed-end arches, and showed that three redundant parameters can be found by the use of three equations of compatibility.

Wooden trusses were used in bridges during the sixteenth century, when Palladio built triangular frames for bridge spans 10 feet long. This effort also focused on the three basic principles of bridge design: convenience (serviceability), appearance, and endurance (strength). Several timber truss bridges were constructed in western Europe beginning in the 1750s, with spans up to 200 feet (61 m) supported on stone substructures. Significant progress was possible in the United States and Russia during the nineteenth century, prompted by the need to cross major rivers and by an abundance of suitable timber. Favorable economic considerations included initial low cost and fast construction.

The transition from wooden bridges to steel types probably did not begin until about 1840, although the first documented use of iron in bridges was the chain bridge built in 1734 across the Oder River in Prussia. The first truss completely made of iron was built in 1840 in the United States, followed by England in 1845, Germany in 1853, and Russia in 1857. In 1840, the first iron arch truss bridge was built across the Erie Canal at Utica.

The Impetus of Analysis

The theory of structures, developed mainly in the nineteenth century, focused on truss analysis, with the first book on bridges written in 1811.
The Warren triangular truss was introduced in 1846, supplemented by a method for calculating the correct forces. I-beams fabricated from plates became popular in England and were used in short-span bridges.

In 1866, Culmann explained the principles of cantilever truss bridges, and one year later the first cantilever bridge was built across the Main River in Hassfurt, Germany, with a center span of 425 feet (130 m). The first cantilever bridge in the United States was built in 1875 across the Kentucky River. A most impressive railway cantilever bridge of the nineteenth century was the Firth of Forth bridge, built between 1883 and 1893, with span magnitudes of 1711 feet (521.5 m).

At about the same time, structural steel was introduced as a prime material in bridge work, although its quality was often poor. Several early examples are the Eads bridge in St. Louis, the Brooklyn bridge in New York, and the Glasgow bridge in Missouri, all completed between 1874 and 1883.

Among the analytical and design progress to be mentioned are the contributions of Maxwell, particularly for certain statically indeterminate trusses; the books by Cremona (1872) on graphical statics; the force method redefined by Mohr; and the works by Clapeyron, who introduced the three-moment equations.

The Impetus of New Materials

Since the beginning of the twentieth century, concrete has taken its place as one of the most useful and important structural materials. Because of the comparative ease with which it can be molded into any desired shape, its structural uses are almost unlimited. Wherever Portland cement and suitable aggregates are available, it can replace other materials for certain types of structures, such as bridge substructure and foundation elements.

In addition, the introduction of reinforced concrete in multispan frames at the beginning of this century imposed new analytical requirements.
Structures of a high order of redundancy could not be analyzed with the classical methods of the nineteenth century. The importance of joint rotation was already demonstrated by Manderla (1880) and Bendixen (1914), who developed relationships between joint moments and angular rotations from which the unknown moments can be obtained, the so-called slope-deflection method. More simplifications in frame analysis were made possible by the work of Calisev (1923), who used successive approximations to reduce the system of equations to one simple expression for each iteration step. This approach was further refined and integrated by Cross (1930) in what is known as the method of moment distribution.

One of the most important recent developments in the area of analytical procedures is the extension of design to cover the elastic-plastic range, also known as load factor or ultimate design. Plastic analysis was introduced with some practical observations by Tresca (1846), and was formulated by Saint-Venant (1870). The concept of plasticity attracted researchers and engineers after World War I, mainly in Germany, with the center of activity shifting to England and the United States after World War II. The probabilistic approach is a new design concept that is expected to replace the classical deterministic methodology.

A main step forward was the 1969 addition of the Federal Highway Administration (FHWA) "Criteria for Reinforced Concrete Bridge Members" that covers strength and serviceability at ultimate design. This was prepared for use in conjunction with the 1969 American Association of State Highway Officials (AASHO) Standard Specification, and was presented in a format that is readily adaptable to the development of ultimate design specifications. According to this document, the proportioning of reinforced concrete members (including columns) may be limited by various stages of behavior: elastic, cracked, and ultimate.
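The moment-distribution method mentioned above (Cross, 1930) lends itself to a compact illustration. The sketch below is a hedged teaching example, not a design tool: it applies the classical cycle (balance each joint according to its distribution factors, carry over half the correction to the far end) to a two-span continuous beam with a uniform load on both spans, for which the textbook support moment is wL^2/8.

```python
def moment_distribution(fem, dfs, carry, sweeps=50):
    """fem: {member end: fixed-end moment}; dfs: {joint: {end: distribution
    factor}}; carry: {end: far end receiving half the distributed moment}."""
    m = dict(fem)
    for _ in range(sweeps):
        for joint, factors in dfs.items():
            unbalanced = sum(m[end] for end in factors)
            for end, df in factors.items():
                corr = -df * unbalanced      # balance the joint
                m[end] += corr
                m[carry[end]] += 0.5 * corr  # carry-over factor 1/2
    return m

# Two equal spans AB and BC (L = 1), UDL w = 1 on both; ends A and C pinned.
w, L = 1.0, 1.0
fem = {"AB": -w * L**2 / 12, "BA": +w * L**2 / 12,
       "BC": -w * L**2 / 12, "CB": +w * L**2 / 12}
dfs = {"A": {"AB": 1.0},                # pinned end releases fully
       "B": {"BA": 0.5, "BC": 0.5},     # equal 4EI/L stiffness on each side
       "C": {"CB": 1.0}}
carry = {"AB": "BA", "BA": "AB", "BC": "CB", "CB": "BC"}

m = moment_distribution(fem, dfs, carry)
print(round(abs(m["BA"]), 4))  # 0.125, i.e. w*L**2/8 at the middle support
```

Each sweep shrinks the unbalanced moments geometrically, which is exactly the "successive approximation" character that made the method practical for hand computation before matrix analysis.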
Design axial loads, or design shears. Structural capacity is the reaction phase, and all calculated modified strength values derived from theoretical strengths are the capacity values, such as moment capacity, axial load capacity, or shear capacity. At serviceability states, investigations may also be necessary for deflections, maximum crack width, and fatigue.

Bridge Types

A notable bridge type is the suspension bridge, with the first example built in the United States in 1796. Problems of dynamic stability were investigated after the Tacoma bridge collapse, and this work led to significant theoretical contributions. Steinman (1929) summarizes about 250 suspension bridges built throughout the world between 1741 and 1928.

With the introduction of the interstate system and the need to provide structures at grade separations, certain bridge types have taken a strong place in bridge practice. These include concrete superstructures (slab, T-beams, concrete box girders), steel beam and plate girders, steel box girders, composite construction, orthotropic plates, segmental construction, curved girders, and cable-stayed bridges. Prefabricated members are given serious consideration, while interest in box sections remains strong.

Bridge Appearance and Aesthetics

Grimm (1975) documents the first recorded legislative effort to control the appearance of the built environment. This occurred in 1647 when the Council of New Amsterdam appointed three officials. In 1954, the Supreme Court of the United States held that it is within the power of the legislature to determine that communities should be attractive as well as healthy, spacious as well as clean, and balanced as well as patrolled.
The Environmental Policy Act of 1969 directs all agencies of the federal government to identify and develop methods and procedures to ensure that presently unquantified environmental amenities and values are given appropriate consideration in decision making along with economic and technical aspects.

Although in many civil engineering works aesthetics has been practiced almost intuitively, particularly in the past, bridge engineers have not ignored or neglected the aesthetic disciplines. Recent research on the subject appears to lead to a rationalized aesthetic design methodology (Grimm and Preiser, 1976). Work has been done on the aesthetics of color, light, texture, shape, and proportions, as well as other perceptual modalities, and this direction is both theoretically and empirically oriented.

Aesthetic control mechanisms are commonly integrated into land-use regulations and design standards. In addition to concern for aesthetics at the state level, federal concern focuses also on the effects of the man-constructed environment on human life, with guidelines and criteria directed toward improving quality and appearance in the design process. Good potential for the upgrading of aesthetic quality in bridge superstructures and substructures can be seen in the evaluation of structure types aimed at improving overall appearance.

Loads and Loading Groups

The loads to be considered in the design of substructures and bridge foundations include loads and forces transmitted from the superstructure, and those acting directly on the substructure and foundation.

AASHTO loads. Section 3 of the AASHTO specifications summarizes the loads and forces to be considered in the design of bridges (superstructure and substructure).
Briefly, these are dead load, live load, impact or dynamic effect of live load, wind load, and other forces such as longitudinal forces, centrifugal force, thermal forces, earth pressure, buoyancy, shrinkage and long-term creep, rib shortening, erection stresses, ice and current pressure, collision force, and earthquake stresses. Besides these conventional loads that are generally quantified, AASHTO also recognizes indirect load effects such as friction at expansion bearings and stresses associated with differential settlement of bridge components. The LRFD specifications divide loads into two distinct categories: permanent and transient.

Permanent Loads

Dead Load: this includes the weight DC of all bridge components, appurtenances and utilities, the wearing surface DW and future overlays, and earth fill EV. Both the AASHTO and LRFD specifications give tables summarizing the unit weights of materials commonly used in bridge work.

Transient Loads

Vehicular Live Load (LL). Vehicle loading for short-span bridges: considerable effort has been made in the United States and Canada to develop a live load model that can represent highway loading more realistically than the H or HS AASHTO models. The current AASHTO model is still the applicable loading.

Bridge Engineering and Bridge Aesthetics

Overview of the development of bridge engineering: as early as the first century B.C., the writings of Marcus Vitrucios Pollio contain records and commentary on construction materials and structural types.
Valuation of the Warranty Provisions on New Mexico Highway 44 Using a Binomial Lattice Model
Student:
Advisor:
College of Science and Technology, China Three Gorges University
Abstract:
In 1998, the New Mexico State Highway and Transportation Department (NMSHTD) agreed to pay $60 million for a 20-year pavement warranty on the Highway 44 project (New Mexico Highway 44, now US 550). The warranty included a clause capping total expenditures at $110 million. As the first long-term warranty on a U.S. highway, the transaction set a controversial precedent. Both parties, along with the departments of transportation (DOTs) of other states, took an interest in this innovation in highway contracting, and the USDOT, sureties, and contractors likewise regarded the deal as a test case for evaluating its pricing and cost-effectiveness. An interim audit report on the state highway and transportation department was released to the Legislative Finance Committee in Santa Fe, New Mexico [Abbey (2004)]. The report provides financial projections valuable for judging the cost-effectiveness of the $60 million warranty. This paper presents an independent analysis of the value of the warranty provisions. Based on NMSHTD data, the analysis finds that, if the expenditure ceiling is left out of account, $60 million was a fair price for the 20-year pavement warranty. The paper further examines whether the warranty expenditure was worth its cost. Using a real options approach, the paper values the ceiling provisions of the New Mexico Highway 44 warranty and discusses several policy recommendations.
Introduction:

With the implementation of the Federal Highway Administration's Special Experimental Projects (SEPs), state departments of transportation (DOTs) have become increasingly active in awarding highway pavement contracts under innovative terms. The innovative contracting methods introduced under SEP-14, begun in 1990, include lane rental, cost-plus-time (A+B) bidding, design-build, and warranty provisions. A warranty provision makes the contractor responsible for remedying performance failures during a warranty period, typically 5 to 7 years. Although New Mexico was not one of the original eight states that piloted warranty use under SEP-14, it evaluated the warranty contracting option and applied it successfully to New Mexico Highway 44 (now US 550), 118 miles (190 km) of roadway running from I-25 near San Ysidro northwest to Bloomfield, near the Four Corners area. In weighing the prospects for this corridor in New Mexico's northwest corner, the New Mexico State Highway and Transportation Department (NMSHTD) [renamed the New Mexico Department of Transportation (NMDOT) in 2003] determined that future maintenance and reconstruction of the 118 miles of roadway would cost approximately $16,000 per lane-mile per year, or just over $151 million over a 20-year service life (May et al. 2003). Furthermore, it determined that upgrading the roadbed and surface would take almost 27 years to complete under conventional contracting methods. To keep the highway in good condition over the long term, NMSHTD purchased from Mesa PDC a 20-year warranty agreement guaranteeing pavement performance during the warranty period. The New Mexico Highway 44 warranty broke new ground in both duration and cost: it was the first long-term highway warranty in the United States and, at $62 million, the most expensive. The economics and applicability of such warranties have been a focus of debate. Moreover, the warranty's ceiling provisions, which cap cumulative traffic loading and repair expenditures, have been neither examined nor valued. In this paper, we address the cost-effectiveness of the warranty provisions on the New Mexico Highway 44 project.
1 The New Mexico 44 Warranty Provisions

Two principal participants cooperated on the New Mexico 44 project: NMSHTD and the project development contractor Mesa PDC, a division of a performance roads technology company based in Wichita, Kansas. NMSHTD developed the design criteria, performance requirements, and monitoring procedures, estimated life-cycle costs, and established the overall present value of the maintenance expected during the 20-year warranty period. Through partnering and open communication, NMSHTD was able to monitor performance without assuming responsibility for it, while Mesa PDC gained insight into the constraints that would have to be resolved as development and decisions proceeded. To execute a long-term warranty agreement, the contract brought in specialized professional services. The basic items included dividing responsibilities and setting appropriate agreements for carrying out repairs, costs, and reimbursement. The final guaranteed price comprised $60 million for the 20-year pavement warranty and $2 million for a 10-year warranty covering bridges, drainage, erosion control, and other structures, for a total warranty obligation of $62 million. Mesa also agreed to bear the inflation risk on future repair costs at 3.5%. Based on these figures, the warranty was priced at $6,400 per lane-mile per year, a 60% reduction from the preliminary estimate (May et al. 2003). In addition, the warranty carries three ceiling provisions: (1) the 20-year service life; (2) 400 million equivalent single-axle loads (ESALs); and (3) $114 million in total expenditures, of which $110 million is the ceiling for the pavement warranty and $4 million the ceiling for the structures warranty. Mesa PDC thus agreed to provide up to $114 million of coverage over the 20-year warranty term, or up to 400 million ESALs, in exchange for $62 million. To ensure financial responsibility, the warranty is also backed by a performance bond. The two parties established objective criteria for what constitutes a pavement defect, such as smoothness, rutting, transverse crack spacing, crack width, potholes, roughness, pumping, raveling, and delamination. Because the warranty provider's risk is tied to performance, it has every incentive to ensure high-quality pavement design, materials, and construction. The parties also developed plans to monitor performance and to carry out preventive and routine maintenance to keep the highway healthy.
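The pricing figures above can be cross-checked with simple arithmetic. The sketch below assumes, as a hypothetical not stated in the text, that the 118-mile roadway has four lanes (472 lane-miles); under that assumption the quoted rates reproduce both the $151 million preliminary estimate and the roughly $60 million pavement warranty price.

```python
# Cross-check of the NM 44 warranty pricing quoted in the text.
# ASSUMPTION: a four-lane roadway (472 lane-miles) -- not stated in the source.
miles = 118
lanes = 4                         # hypothetical
lane_miles = miles * lanes        # 472 lane-miles

prelim_rate = 16_000              # $/lane-mile/year, preliminary estimate
warranty_rate = prelim_rate * 40 // 100   # 60% reduction -> $6,400/lane-mile/year

years = 20
prelim_total = prelim_rate * lane_miles * years      # ~$151 million
warranty_total = warranty_rate * lane_miles * years  # ~$60 million

print(f"preliminary 20-year estimate: ${prelim_total:,}")
print(f"warranty price at $6,400/lane-mile/year: ${warranty_total:,}")
```

With these assumed inputs, the totals come to $151,040,000 and $60,416,000, consistent with the "just over $151 million" and $60 million figures in the text.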
2 Discounted Cash Flow Analysis

In an interim report on the recent New Mexico 44 warranty audit, the State of New Mexico challenged the cost-effectiveness of the $62 million warranty (Abbey 2004). That interim report and its predecessor (Abbey 1999) provide valuable cost projections for the New Mexico 44 project and the associated warranty. Using these data, which reflect the information available to NMSHTD when it made its decision in 1998, we performed our own cost analysis. In this analysis, all warranty benefits and costs arising during construction and the warranty period are discounted back to the 1998 decision date, and all costs and payments are assumed to fall at year-end. Furthermore, we consider only the pavement warranty, worth $60 million, which accounts for 96.8% of the total warranty cost. Ignoring the structures warranty, worth only $2 million, simplifies the analysis without any significant effect on the results. Moreover, Abbey (2004) estimated that the structures warranty would expire at the end of its 10-year term, whereas the pavement warranty would likely remain in effect for the full 20-year term.
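The year-end discounting convention described above can be sketched in a few lines. The cash flows and the 6% discount rate below are illustrative placeholders, not the audited Abbey (1999, 2004) projections.

```python
# Minimal discounted-cash-flow sketch of the warranty analysis:
# benefits (avoided maintenance) and costs are discounted to the
# 1998 decision date, with every cash flow assumed to fall at year-end.
# All figures here are ILLUSTRATIVE, not the actual audit projections.

def present_value(cash_flows, rate, base_year):
    """Discount a {year: amount} mapping to base_year (year-end convention)."""
    return sum(cf / (1 + rate) ** (year - base_year)
               for year, cf in cash_flows.items())

rate = 0.06          # assumed discount rate
base = 1998

# Hypothetical avoided-maintenance benefits over the 20-year warranty term.
benefits = {year: 5_000_000 for year in range(1999, 2019)}
# Warranty payment treated as a single cost at the 1998 year-end.
costs = {1998: 60_000_000}

npv = present_value(benefits, rate, base) - present_value(costs, rate, base)
print(f"NPV of the (hypothetical) warranty deal: ${npv:,.0f}")
```

The sign of the result depends entirely on the assumed benefit stream and discount rate, which is precisely why the choice of discount rate figures in the cost-effectiveness debate discussed below.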
3 Discussion and Policy Recommendations

According to NMSHTD, because of the ceiling provisions, the price of the New Mexico 44 warranty provisions does not reflect their actual cost. The ceiling provisions, valued in 1998 dollars, mean that beyond the $62 million warranty payment from NMSHTD, Mesa received additional value worth $4.8 million per year. By introducing the ceiling provisions, Mesa effectively eliminated its downside risk while keeping the upside of the profit uncertainty. Mesa's gain is NMSHTD's loss, since this is a zero-sum game.
When the actual cost of the warranty provisions is considered, taking into account that NMSHTD borrowed at a relatively high interest rate to finance the Highway 44 project, the value in 1998 dollars rises to $573,000 per year, and the rate at which the warranty provisions remain viable falls by 4% (Fig. 6).
The New Mexico warranty provisions were not justified, because the ceiling provisions made them costly. It is recommended that state highway agencies evaluate ceiling provisions carefully when warranty services are procured for infrastructure projects. A twofold analysis is needed so that better decisions can be made. First, one must recognize that ceiling provisions can be expensive; this paper develops a convenient model for valuing them. Second, when including ceiling provisions in a warranty, state agencies should take care to set a sound ceiling level. Sensitivity analysis shows that the value of a ceiling provision is highly sensitive to the level at which the ceiling is fixed: for the New Mexico 44 warranty, raising the ceiling by 10% would lower the option price of the ceiling provision by 30%. The ceiling level should therefore be chosen carefully, since it markedly affects the cost of the provision.

A ceiling provision is, in essence, an option: it provides the flexibility to defer decisions until more information becomes available, locking out downside risk while leaving favorable uncertainty open. Real options concepts offer powerful tools, not only for contractors but also for state agencies, in valuing ceilings and other provisions whose triggers depend on uncertain cash flows. One potential application of real options concepts in warranty contracting is to delay the warranty decision until the end of construction, when more performance information is available (Cui et al. 2004). More accurate estimates of the future maintenance savings can then be made, and a better decision reached on whether to purchase the warranty.
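The real-options view of the ceiling provision described in this section can be illustrated with a simple binomial lattice. In this sketch, cumulative repair cost is modeled as a multiplicative binomial process and the expenditure cap is valued like a call option: its worth to the contractor is the discounted expectation of costs above the cap, which the provision shifts back to the agency. All parameters here (the $80M expected cost, 25% volatility, 6% discount rate, 3.5% cost growth) are hypothetical assumptions, not the calibration used in the NM 44 study.

```python
from math import exp, sqrt

def ceiling_clause_value(cost0, cap, sigma, r, growth, years, steps_per_year=1):
    """Value an expenditure cap as a call-style option on a CRR binomial
    lattice: terminal payoff max(cumulative_cost - cap, 0), discounted by
    backward induction. All parameters are illustrative assumptions."""
    n = years * steps_per_year
    dt = years / n
    u = exp(sigma * sqrt(dt))             # up factor (Cox-Ross-Rubinstein)
    d = 1 / u                             # down factor
    p = (exp(growth * dt) - d) / (u - d)  # risk-neutral up probability
    disc = exp(-r * dt)                   # one-step discount factor

    # Terminal payoffs at each lattice node: cost0 * u^j * d^(n-j).
    values = [max(cost0 * u**j * d**(n - j) - cap, 0.0) for j in range(n + 1)]
    # Backward induction toward the 1998 decision date.
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

# Hypothetical inputs (NOT the NM 44 calibration):
base = ceiling_clause_value(cost0=80e6, cap=110e6, sigma=0.25,
                            r=0.06, growth=0.035, years=20)
# Raising the cap lowers the option value of the provision, echoing the
# 10%-ceiling / 30%-price sensitivity noted in the text.
raised = ceiling_clause_value(cost0=80e6, cap=121e6, sigma=0.25,
                              r=0.06, growth=0.035, years=20)
print(f"clause value at $110M cap: ${base:,.0f}")
print(f"clause value at $121M cap: ${raised:,.0f}")
```

The lattice makes the zero-sum nature of the provision explicit: whatever the cap is worth to the contractor under the assumed dynamics is exactly what the agency gives up, and the value falls monotonically as the ceiling rises.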
Acknowledgements:

The authors gratefully acknowledge the support of this research by the University Transportation Center for Alabama (UTCA) and the Alabama contractors' fee fund.
References:

1 Cui, Qingbin; Johnson, Philip; Quilk, Aaron; Hastak, Makarand. "Valuation of the Warranty Provisions on New Mexico Highway 44 Using a Binomial Lattice Model." Journal of Construction Engineering and Management, January 2008, Vol. 134, No. 1, 10-17.