Foreign-Language Literature: Original Text and Translation
International Service Trade: Foreign Literature Translation (English original with Chinese translation)
Source: World Development, 2015, 12(1): 35-44.

English original

The Research of International Service Trade and Economic Growth Theory
Chakraborty Kavin

1 Introduction

The study of the relation between international trade and economic growth is one of the most active issues in international economics. Since the 1980s, the world has been in transition from national economies oriented toward natural resources and manufacturing to global and regional economies oriented toward information resources and the service industry. After the signature of GATS in 1994, institutional arrangements for liberalizing service trade produced a worldwide division of labor and exchange in services, and the positive interaction between service trade and investment undoubtedly leads to economic growth. Yet theoretical research on service trade lags behind practice. Is this a statistical coincidence or a universal rule of economic growth? Approaching these two questions from theoretical and empirical perspectives is of great value to policy-making.

Consider the proposition that "international service trade will drive economic growth." Theoretical analysis shows that although service trade is not a direct explanatory variable for economic growth, it can affect growth indirectly through other growth factors and technology upgrading, and the ways and mechanisms differ across stages of development. At a given stage of economic development, service trade (including investment) has static and dynamic effects on factor supply and technology upgrading in a country, which alter the domestic structure of resource endowments. Enterprises then select their industrial, technological, and trade structures according to the dynamic evolution of comparative advantage in technology and trade, which ultimately and gradually promotes economic growth.
As far as the operational mechanism of service trade and investment is concerned, service trade affects factor supply in a country through physical capital accumulation effects, human capital effects, technology upgrading effects, institutional transition effects, employment effects, and technology externalities; it then influences the upgrading of industrial structure, the upgrading of technological structure, and the transition of the mode of economic growth. The dynamic effect is evidently greater than the static effect; the external effect plays a more important role than the internal effect; and the technology spillover effect of foreign direct investment in the service industry is greater than that of service trade in the narrow sense (cross-border supply, consumption abroad, and movement of natural persons).

Turning to the mechanism by which service trade drives economic growth: first, the paper tests the causality between service trade and economic growth for different economies and representative countries. The results show causal links between international service trade and economic growth for the world as a whole, for the developed countries, for the US, and for China. In the developing countries, service trade Granger-causes economic growth; for the world as a whole and for the developing countries, economic growth Granger-causes service trade; in the US, service exports Granger-cause economic growth, and economic growth Granger-causes service imports. On this basis, it is concluded that opening the service industry benefits a country's economic growth. Second, to explore how service trade and investment act on economic growth, empirical studies are conducted for the cases of the US and China.
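The Granger-causality results reported here come from the standard two-regression F-test: lagged values of one series are added to an autoregression of the other, and the restricted and unrestricted residual sums of squares are compared. A self-contained sketch with simulated data (the series, lag length, and coefficient below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def granger_f_test(y, x, lag=1):
    """F-statistic: do lagged values of x help predict y beyond y's own lags?"""
    n = len(y)
    target = y[lag:]
    # Restricted model: intercept plus y's own lags.
    restricted = np.column_stack(
        [np.ones(n - lag)] + [y[lag - k - 1 : n - k - 1] for k in range(lag)]
    )
    # Unrestricted model: additionally include x's lags.
    unrestricted = np.column_stack(
        [restricted] + [x[lag - k - 1 : n - k - 1] for k in range(lag)]
    )
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        return float(resid @ resid)
    rss_r, rss_u = rss(restricted), rss(unrestricted)
    df_den = len(target) - unrestricted.shape[1]
    return ((rss_r - rss_u) / lag) / (rss_u / df_den)

rng = np.random.default_rng(0)
n = 61
gdp = rng.normal(3.0, 1.0, n)            # simulated GDP growth series
trade = np.empty(n)
trade[0] = gdp[0]
# Build trade growth to follow lagged GDP growth, so causality runs GDP -> trade.
trade[1:] = 0.8 * gdp[:-1] + rng.normal(0.0, 0.5, n - 1)

f_fwd = granger_f_test(trade, gdp)       # large: GDP "Granger-causes" trade
f_rev = granger_f_test(gdp, trade)       # small: no reverse causation was built in
print(f"F(gdp -> trade) = {f_fwd:.1f}, F(trade -> gdp) = {f_rev:.1f}")
```

In practice one would use annual trade and GDP growth series and check several lag lengths; the point of the toy data is only that the F-statistic is large in the direction where predictive content exists.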
The results show that the channels through which service trade affects economic growth in the US can be ranked from most to least significant as follows: employment effect, human capital effect, physical capital effect, technology effect, institution effect. The empirical analysis of China can be summarized as follows: the channels through which service exports affect economic growth rank as employment effect, physical capital effect, institution effect, human capital effect, technology effect; the channels through which service imports affect economic growth rank as technology effect, institution effect, employment effect, human capital effect, physical capital effect; and the channels through which FDI in services affects economic growth rank as technology effect, human capital effect, institution effect, employment effect, physical capital effect. Moreover, the effect of FDI in services is stronger than that of service imports, and the effect of service imports is stronger than that of service exports.

From the empirical tests in this paper, the following conclusions can be drawn: service trade in the narrow sense has static and dynamic effects on a country's factor supply through the import and export of services; FDI in the service industry is one of the most important cross-border transactions and is another important channel affecting the transition of a country's factor-supply advantages. It should be emphasized that these channels have different effects on countries at different stages of economic development, and whether their roles can be brought into play depends on the given constraints. The inputs and outputs of the factors themselves do not form a simple function; rather, they interact and act on economic growth together through numerous feedback chains. The Chinese economy is now undergoing a transformation from the early to the middle stage of industrialization.
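Channel orderings like those above are typically obtained by comparing standardized effect sizes across regressors. A toy version with simulated data, using the US ordering from the text as the assumed ground truth (all numbers are fabricated for illustration; only the channel names come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
channels = ["employment", "human capital", "physical capital", "technology", "institution"]

X = rng.normal(size=(n, 5))                       # simulated channel proxies
true_beta = np.array([1.2, 0.9, 0.7, 0.4, 0.2])   # assumed effect sizes for the demo
growth = X @ true_beta + rng.normal(0.0, 0.3, n)  # simulated growth outcome

# Standardize so coefficient magnitudes are comparable across channels.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (growth - growth.mean()) / growth.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

ranking = [channels[i] for i in np.argsort(-np.abs(beta))]
print(ranking)
```

With real data the proxies would be correlated and the ranking less clean; standardized coefficients are one simple basis for such orderings, not the only one.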
Service trade and investment in the current period have both advantages and disadvantages. Based on these judgments, we propose that China pursue a moderately protective policy in managing service trade and adopt the following countermeasures. A scientific view of development should be formed, harmonizing the development of the three industries so as to lay a solid industrial foundation for service trade; strategic programs should be stipulated and the service trade market opened gradually; and the rules governing the international transfer of service trade should be mastered while the environment for utilizing foreign investment in the service industry is improved.

As the service-oriented character of the world economy has gradually emerged, service trade originating from the upgrading of industrial structure has developed rapidly, and its scale is expanding quickly. The statistics show that total world exports of services rose rapidly from 365 billion US dollars in 1980 to about 3,777.79 billion US dollars in 2008, an increase of 9.35 times. Compared with the long-established trade in goods, service trade is a new form of trade. With its absolute size continuously increasing from a relatively low base, service trade has become a focus of attention in modern society.

2 The Impact of Overall Service Trade on Economic Growth

According to the WTO General Agreement on Trade in Services (GATS), signed in 1994, trade in services comprises four modes: cross-border supply, consumption abroad, commercial presence, and movement of natural persons. The four modes have completely different properties and characteristics, so it is difficult to establish a unified theoretical framework for how service trade affects economic growth, and the corresponding literature is scarce. The main foreign contribution is Robinson et al.
(2002), who treat service trade simply as commodity trade, without taking the differences among the four modes into account, and study the economic growth effects of service trade liberalization using a Computable General Equilibrium (CGE) model. Empirical studies of the impact of overall service trade on economic growth are more numerous, but most are by domestic scholars. This research shows that the average contribution of China's overall service trade to economic growth is 18.9%.

3 The Effect of Service Trade in Different Industries on Economic Growth

At present, the literature on the growth impact of individual service industries is concentrated in sectors such as finance, telecommunications, and health care. These studies reach a broadly unanimous conclusion: opening a service sector, or raising its productivity, significantly promotes economic growth. For example, studies by Beck et al. (1998), Murinde & Ryan (2003), and Eschenbach (2004) suggest that opening the financial sector has, to a certain extent, broken the monopoly of domestic financial markets, set orderly competition in those markets on a normal development track, improved productivity, and ultimately produced economic growth. Kim (2000) studied the relationship between the development of service trade in the distribution sector and the growth of total factor productivity (TFP) using Korea's input-output data. The results show that the liberalization of service trade not only significantly raised the sector's own TFP but also promoted total factor productivity in the related manufacturing sectors.
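TFP estimates of the kind Kim reports rest on standard growth accounting, where TFP growth is the Solow residual: output growth minus factor-share-weighted input growth. A minimal sketch with purely illustrative numbers (the capital share and growth rates below are assumptions, not Kim's Korean data):

```python
# Growth accounting: g_TFP = g_Y - alpha * g_K - (1 - alpha) * g_L,
# where alpha is the capital share of income. All numbers are illustrative.
alpha = 0.35       # assumed capital share
g_output = 0.062   # output growth
g_capital = 0.080  # capital stock growth
g_labor = 0.015    # labor input growth

# Residual: 0.062 - 0.35*0.080 - 0.65*0.015 = 0.02425
g_tfp = g_output - alpha * g_capital - (1 - alpha) * g_labor
print(f"TFP growth: {g_tfp:.4f}")
```

Input-output data extend this sector by sector, which is how spillovers from a liberalized service sector into manufacturing TFP can be traced.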
The total factor productivity growth brought about by service trade extended to almost the entire economy.

4 The Effect of Service Trade on Economic Growth by Trading Mode

Theoretical studies of how specific modes of service trade affect economic growth are few. Carr et al. (2001) and Markusen et al. (2005) theoretically examined the impact of the commercial-presence mode on economic growth by means of CGE models, showing that the opening of trade in services is an important source of increases in a country's economic welfare; from the welfare perspective, opening up trade in services is the general trend. Subsequently, the use of CGE models to examine the growth impact of service trade theoretically became common. For example, Rutherford et al. (2005) used a CGE model to evaluate the effects of Russia's WTO accession, and Konan & Maskus (2006) used a CGE model to study the potential effects of Tunisia eliminating its barriers to trade in services. Their conclusions indicate that a country's economic welfare can benefit from opening the service market, and that eliminating FDI market-access barriers in the service sector, the most important liberalization measure among the four modes, is the main source of the welfare gains. Empirical studies of the relationship between specific modes of service trade and economic growth are plentiful. Among the four modes, commercial presence is the most important; and from the standpoint of data availability (although the statistics are still not very accurate), commercial presence in service trade is carried by FDI in the service industry.
Researchers can therefore use service-industry FDI data to proxy the scale of service trade in this mode, and this mode has accordingly received the most attention. Markusen (1989) argues that commercial presence in service trade has both a positive and a negative effect. The positive effect, a market-size effect, is that competition in the service sector raises domestic demand for the sector's production factors, which is conducive to output growth; the negative effect is that intensified competition in the domestic service market drives some domestic service enterprises out of the market. Markusen's (1989) analysis shows that the market-size effect of opening the service market far exceeds this crowding-out effect: after offsetting it, liberalization still raises productivity in the non-service sector and further changes the structure of domestic trade in goods, as sectors that were previously low in productivity and dependent on imports evolve into high-productivity export sectors. This is quite similar to the latest research findings on the interactive development of producer services and manufacturing. Hoekman (2006), taking India as an example, examined the impact of commercial presence in the finance, telecommunications, and transportation sectors on the competitiveness of the goods-export sector, and argues that liberalizing these sectors raised the level of soft infrastructure, which in turn greatly reduced the operating costs of the downstream manufacturing sector and increased export competitiveness. For a country whose domestic service industry is inefficient, reversing this unfavorable pattern with the help of commercial presence in service trade is a feasible choice. Guerrieri et al.
(2005) took the EU as the research object and analyzed the role of commercial presence in service trade in knowledge accumulation and economic growth. The study concluded that opening the service market, or relaxing domestic service regulation, positively promotes economic growth, and found that imported services may promote economic growth more than the same services produced domestically because of their higher technological content.

5 Possible Future Research Directions

It is not difficult to see from the literature above that, because the development of service trade started late, research on service trade and growth began to rise only in the 1980s, and after more than twenty years this area of research is still in the ascendant. As the status of trade in services rises further, future research will likely include the following directions.

From the point of view of methodology, service trade can be studied by classification. While the theory of goods trade has gradually matured, the practice of service trade still calls for a theory of its own. The international economists Helpman and Markusen have both observed, on different occasions, that the difficulty in establishing a theoretical system for service trade lies in the large differences among the various types of service trade, which researchers find hard to bridge. Classifying service trade according to certain standards and exploring the impact of each type on economic growth is one possible direction for future research.

From the perspective of research subjects, China's service trade and economic growth deserve study. China's GDP already ranks second in the world, yet the service industry's value added accounts for only about 40% of GDP, which is obviously not commensurate with the status of an economic power.
In addition, trade in services is still relatively small compared with trade in goods. Against this background: what is the relationship between China's service trade and economic growth? How will service trade contribute to China's economic growth? What impact will service outsourcing have on China's economy? Over the next decade, how can China make service trade an engine of economic growth? Economists everywhere are paying attention to China's economic development, and China's service trade will likewise become a research hotspot.

From the perspective of research topics, the impact of service outsourcing on economic growth can be studied. In 2008 the global service outsourcing market reached 1.5 trillion US dollars, and UNCTAD projects that it will grow by 30%-40% over the next 5-10 years. Surging service outsourcing is a new form of service trade. How does service outsourcing drive economic growth through employment, industrial structure upgrading, and technology spillovers? How do contracting out and taking on outsourced service work differ in their impact on economic growth? Research on these questions will provide important theoretical guidance for the development of service outsourcing.
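The UNCTAD projection quoted above ("30%-40% in the next 5-10 years") is ambiguous between total and annual growth, and the two readings differ by a wide margin. A quick back-of-envelope comparison (the 1.5 trillion base figure is from the text; everything else is arithmetic):

```python
# Two readings of "the market will grow 30%-40% in the next 5-10 years",
# starting from the 1.5 trillion USD base cited for 2008.
base = 1.5  # trillion USD

total_low, total_high = base * 1.30, base * 1.40   # reading 1: one-off 30%-40% rise
annual_5yr = base * 1.30 ** 5                       # reading 2: 30% per year for 5 years

print(f"one-off growth: {total_low:.2f}-{total_high:.2f} trillion USD")
print(f"30% annually for 5 years: {annual_5yr:.2f} trillion USD")
```

Under the one-off reading the market reaches roughly 2 trillion USD; under the compound reading it would exceed 5 trillion, which is why pinning down the intended meaning matters when citing such projections.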
Contents

1 Introduction
2 Installation
   Installing an All-In-One NS2 Suite on Unix-Based Systems
   Installing an All-In-One NS2 Suite on Windows-Based Systems
3 Directories and Convention
4 Running NS2 Simulation
   NS2 Program Invocation
   Main NS2 Simulation Steps
5 A Simulation Example
6 Summary

1 Introduction

This chapter provides an introduction to NS2. In particular, installation information for NS2 is given in Chapter 2. Chapter 3 introduces the directories and conventions of NS2. Chapter 4 describes the main steps in an NS2 simulation. A simple simulation example is given in Chapter 5. Finally, Chapter 6 concludes the chapter.

2 Installation

A component-wise approach is a wise way to install NS2: obtain the pieces listed above and install them individually. This option saves downloading time and a considerable amount of memory space. However, it may be troublesome for beginners and is therefore recommended only for experienced users.

The Network Simulator (generally called NS2) is an event-driven simulation tool that has proved useful in studying the dynamic nature of communication networks.
Using NS2, simulation of wired and wireless network functions and protocols (e.g., routing algorithms, TCP, and UDP) can be carried out. In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behaviors.
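The event-driven principle behind NS2 can be illustrated with a minimal discrete-event scheduler sketch in Python. This is an illustration only: NS2 itself is written in C++ and driven by OTcl scripts, and none of these class or method names belong to the NS2 API.

```python
import heapq

class Scheduler:
    """Minimal discrete-event scheduler, mimicking the idea behind
    NS2's event-driven core (illustration only, not the NS2 API)."""
    def __init__(self):
        self._queue = []   # heap of (time, seq, callback)
        self._seq = 0      # tie-breaker for events scheduled at the same time
        self.now = 0.0

    def at(self, time, callback):
        """Schedule callback to fire at the given simulated time."""
        heapq.heappush(self._queue, (time, self._seq, callback))
        self._seq += 1

    def run(self):
        """Pop events in time order and execute them until the queue is empty."""
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

log = []
sim = Scheduler()
# A "packet" sent at t = 1.0 arrives after a 0.5 s link delay.
sim.at(1.0, lambda: (log.append(("send", sim.now)),
                     sim.at(sim.now + 0.5, lambda: log.append(("recv", sim.now)))))
sim.run()
print(log)  # [('send', 1.0), ('recv', 1.5)]
```

Simulated time jumps directly from one scheduled event to the next, which is why event-driven simulators handle long idle periods in a network cheaply.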
Translated foreign literature (English original and Chinese translation)

English original

German Bauhaus Design and Future Design Trends

Abstract

The German Bauhaus had a significant influence on modern design education; meanwhile, it established the foundation of German industrial design's leading position in the world. Through an analysis of current industrial design conditions in different countries, art design is considered the main part of industrial design. This paper reviews the last ten years' development of the industrial design program at Zhejiang University of Science and Technology. The program has made considerable achievements in many fields, such as the practice of the German model, discipline construction, teaching reform, manufacturer and college cooperation, project teaching, and design competitions. Finally, ten industrial design trends that cannot be ignored are presented.

Keywords: Bauhaus, industrial design, project teaching, practice, ten industrial design trends

1. German Bauhaus and industrial design

In 1919, the Bauhaus school was founded in Weimar, Germany.
The Bauhaus is known as the "cradle of world industrial design" and is a milestone in the history of art and design education. The Bauhaus believed that the most important thing is to allow students to explore their own ways of designing rather than to impose the teachers' approaches on them, and to cultivate students' ability to think independently and critically instead of pressing particular design styles upon them. Compared with other schools offering similar design education, the Bauhaus had a unique philosophy of education. It carried out a thorough reform of the traditional art and design education system and established art design as a new professional discipline. At the same time, Walter Gropius, the founder of the Bauhaus, put forward the "unity of art and technology" as the leading philosophy of design education.

Part of the Bauhaus's work was the active purification of the forms of everyday industrial products. The Bauhaus stressed that product forms should be based on basic geometric elements such as cubes, squares, and circles, and that forms and outlines should be simple, varied in different ways, and should follow abstract principles of form and aesthetics. Because of its bold and active exploration and reform, the Bauhaus had a major influence on the formation of the modernist artistic style and earned a world-class reputation in design. The Bauhaus therefore became a milestone in the history of modern design art.

The American artist Joseph Sinel first used the term "industrial design" in 1919. In China, however, it was not until 1983 that the Ministry of Education listed industrial design as a discipline and piloted the major in ordinary universities. The original name of the major was "product styling design", and it enrolled students in the arts. In 1998, the national catalogue of majors was adjusted to align with international conventions. The former product styling major had long focused on the human-product-environment relationship in the field of product morphology and related research.
Its name was then replaced by "industrial design", and the major is now offered in both engineering and art education.

During this period the Bauhaus had many of Europe's top artists on its staff, such as Kandinsky and Klee, both famous abstract painters. Their teaching trained students and shaped the Bauhaus's contribution to twentieth-century art and design. Famous industrial designers such as Philippe Starck, Yves Béhar, Marc Newson, and Maca graduated from schools of art and design. Their success shows that industrial design education within art design education is effective. Product form design remains an important aspect, and undergraduate industrial design is currently an important part of famous art and design schools.

The Academy of Art and Design of the Norwegian Royal Academy of Arts, Politecnico di Milano, the University of Duisburg-Essen, and the University of Applied Sciences and Arts Hannover (Hanover industrial design department) all belong to schools of art and design.

According to a survey organized by the Industrial Designers Society of America (IDSA) in 1998, there were 49 colleges with industrial design undergraduate or graduate programs registered on the IDSA-sponsored list. Industrial design programs are typically set up in art schools and offer bachelor's degrees or above in fine arts or related majors. Most of these schools are accredited by NASAD (the National Association of Schools of Art and Design); only 15 schools were not certified. Five years after IDSA's announcement, only registered industrial design programs could be recognized. Among them were 37 industrial design majors in universities, 6 in design colleges, and 4 in art schools. This situation has not changed in the current year.

Asia: In Japan, industrial design majors are also set up in art schools or independent industrial design faculties, such as Tokyo Zokei University, Musashino Art University, Tama Art University, and the University of Tsukuba.
In Hong Kong, The Hong Kong Polytechnic University has a famous industrial design program. In Taiwan, Shih Chien University, National Cheng Kung University, and National Yunlin University of Science & Technology have famous industrial design programs. In mainland China, Jiangnan University, Tsinghua University, Hunan University, Tongji University, and the Guangzhou Academy of Fine Arts all have industrial design departments within their art and design schools or departments.

2. Ten Industrial Design Trends You Can't Ignore

2.1 Design For A Cause

Companies including Herman Miller and American Apparel are promoting their ideals through design. Yves Béhar's Leaf lamp for Herman Miller (shown) uses a biomorphic grid of LEDs, which consume 40% less energy than fluorescent lights and last for 100,000 hours. And Nike plans to make its entire footwear line out of sustainable materials by 2010.

2.2 Simplexity

Steve McCallion, executive creative director of Portland, Ore.-based industrial design firm Ziba Design, says there's a trend toward "simplexity": products that have many functions but are approachable, ergonomically correct and easy to use--like Apple's iPhone. The baby boomers have also propelled simplexity; as the generation ages, the need for easy-to-use, at-home medical equipment becomes greater. Ami Verhalen, director of industrial design at Madison, Wis.-based Design Concepts, says that in-home health care will be a huge driver of product innovation in the upcoming decade.

2.3 Personalization

From Nike ID shoes to Build-a-Bear teddies, retailers are adding a "build your own" element to brands. Do it yourself--or DIY--serves as an important element of this trend. Publications like ReadyMade magazine and books like designer Wendy Mullin's Sew U encourage consumers to put their own spin on things.

2.4 Globalization

As in other industries, outsourcing has affected international design. Today a designer in Delhi might be working with a manufacturer in Columbus.
Steve McCallion says that the globalization of product design has created Internet communities that enable more people to participate in the design process. Companies like Kid Robot can employ toy designers from Tokyo to Tucson with greater ease than ever.

2.5 Ornamentation

In fashion design we're seeing a return to minimalism, but in home decor, ornate details are in fashion. For the first time in decades wallpaper is in fashion, and the details are rich--brocades, velvets and jewel-tone colors. Long-forgotten textile designers like Florence Broadhurst and Vera Neumann are receiving attention from a new generation of design-savvy consumers.

2.6 Polarization Of Design

Big-box or luxury retailer? Many experts say that design has become polarized, with innovative products available at both the very high end (Neiman Marcus, Moss) and the very low end (Target, Ikea). Meanwhile, midrange retailers like Macy's suffer from a lack of fresh, on-trend ideas. That isolates the huge chunk of the population that can afford something higher-end than the $200 Malm bed at Ikea but scoffs at the price of a $16,000 Hastens mattress.

2.7 Pink Design

Gadgets are a guy's game, right? Not if you consider the latest products with feminine mystique. Motorola released a lipstick-pink Razr cell phone, and more recently, LG released a Prada phone. More and more manufacturers are creating sleeker, feminized versions of their clunky, chunky products, and both men and women are biting. Want proof of the feminization of product design? Just check out , which rates several items a day as "Geek chic" or "Just Plain Geeky."

2.8 Mass Imperfection

Some designers are creating intentionally flawed pieces, like designer Jason Miller's duct tape chair or Bodum's Pavina glassware collection, which uses mouth-blown double-walled glass, giving each piece a slight variation in height, thickness and weight.
Whiskered and weathered textiles--on denim as well as furniture and tapestries--are more recognizable examples of intentional imperfection in production.

2.9 Craft

As mass retailers like Target become more design-focused, there's a countertrend of independent manufacturers and designers creating one-off, heirloom pieces. Where to find these limited-edition treasures? Artisan e-commerce sites like , classical craft companies like Heath Ceramics and modernist design houses such as Design Within Reach.

2.10 Focus On The Other 90%

Anthony Pannozzo, vice president of design strategy at Waltham, Mass.-based firm Herbst LaZar Bell, says that well-designed products are available to only 10% of the world's population. However, more and more designers are starting to cater to consumers in Africa, Asia and Latin America.

Chinese translation: German Bauhaus Design and Future Design Trends. Abstract: The German Bauhaus had a significant influence; at the same time, it laid the foundation for modern design education and for the world-leading position of German industrial design.
Translated foreign literature (English original and Chinese translation)

English original

POSSIBILITIES AND LIMITATIONS OF ACCIDENT ANALYSIS

S. Oppe

Abstract

Accident statistics, especially those collected at a national level, are particularly useful for the description, monitoring and prognosis of accident developments, the detection of positive and negative safety developments, the definition of safety targets and the (product) evaluation of long-term and large-scale safety measures. The application of accident analysis is strongly limited for problem analysis, for prospective and retrospective safety analysis of newly developed traffic systems or safety measures, as well as for the (process) evaluation of special short-term and small-scale safety measures. There is an urgent need for the analysis of accidents in real time, in combination with background behavioural research. Automatic incident detection, combined with video recording of accidents, may soon make such research financially acceptable. This type of research may eventually lead to a better understanding of the concept of risk in traffic and to well-established theories.

Keywords: consequences; purposes; description; limitations; accident analysis; possibilities

1. Introduction

This paper is primarily based on personal experience concerning traffic safety, safety research and the role of accident analysis in this research. These experiences have resulted in rather philosophical opinions as well as more practical viewpoints on research methodology and statistical analysis. A number of these findings have already been published elsewhere.

From this lack of direct observation of accidents, a number of methodological problems arise, leading to continuous discussions about the interpretation of findings that cannot be tested directly. For a fruitful discussion of these methodological problems it is very informative to look at a real accident on video. It then turns out that most of the relevant information used to explain the accident is missing from the accident record.
In-depth studies also cannot recollect all the data necessary to test hypotheses about the occurrence of the accident. For one particular car-car accident recorded on video at an urban intersection in the Netherlands, in which a car coming from a minor road collided with a car on the major road, the following questions could be asked: Why did the driver of the car coming from the minor road suddenly accelerate, after coming almost to a stop, and hit the side of the car coming from the left on the main road? Why was the approaching car not noticed? Was it because the driver was preoccupied with the two cars coming from the right and the gap before them that offered him the possibility to cross? Did he look left before, but was his view possibly blocked by the green van parked at the corner? Certainly the traffic situation was not complicated. At the moment of the accident there were no bicyclists or pedestrians present to distract his attention at the regularly overcrowded intersection. The parked green van disappeared within five minutes, and the two other cars that may have been important left without a trace. It is hardly possible to observe traffic behaviour under the most relevant condition, that of an accident occurring, because accidents are very rare events given the large number of trips. Given the new video equipment and the recent developments in automatic incident and accident detection, it becomes more and more realistic to collect such data at not too high a cost. In addition to this type of data, which is most essential for a good understanding of the risk-increasing factors in traffic, it is also important to look at normal traffic behaviour as a reference base. The question about the possibilities and limitations of accident analysis is not lightly answered. We cannot speak unambiguously about accident analysis.
Accident analysis covers a whole range of activities, each originating from a different background and based on different sources of information: national data banks, additional information from other sources, specially collected accident data, behavioural background data, etc. To answer the question about the possibilities and limitations, we first have to look at the cycle of activities in the area of traffic safety. Some of these activities are mainly concerned with the safety management of the traffic system; others are primarily research activities.

The following steps should be distinguished:
- detection of new or remaining safety problems;
- description of the problem and its main characteristics;
- analysis of the problem, its causes and suggestions for improvement;
- selection and implementation of safety measures;
- evaluation of the measures taken.

Although this cycle can be carried out by the same person or group of persons, the problem has a different (political/managerial or scientific) background at each stage. We will describe the phases in which accident analysis is used. It is important to make this distinction; many fruitless discussions about the method of analysis result from ignoring it. Politicians or road managers are not primarily interested in individual accidents. From their perspective accidents are often treated equally, because the total outcome is much more important than the chain of events leading to each individual accident. Therefore each accident counts as one, and together they add up to a final safety result.

Researchers are much more interested in the chain of events leading to an individual accident. They want detailed information about each accident, to detect its causes and the relevant conditions. The politician wants only those details that direct his actions. At the highest level this is the decrease in the total number of accidents.
The main source of information is the national database and its statistical treatment. For him, accident analysis means looking at (subgroups of) accident numbers and their statistical fluctuations. This is the main stream of accident analysis as applied in the area of traffic safety. Therefore, we will first describe these aspects of accidents.

2. The nature of accidents and their statistical characteristics

The basic notion is that accidents, whatever their cause, appear according to a chance process. Two simple assumptions are usually made to describe this process for (traffic) accidents:
- the probability of an accident occurring is independent of the occurrence of previous accidents;
- the occurrence of accidents is homogeneous in time.

If these two assumptions hold, then accidents are Poisson distributed. The first assumption does not meet much criticism. Accidents are rare events and are therefore not easily influenced by previous accidents. In some cases where there is a direct causal chain (e.g., when a number of cars run into each other) the series of accidents may be regarded as one complicated accident with many cars involved. The assumption does not apply to casualties: casualties are often related to the same accident, and therefore the independence assumption does not hold for them. The second assumption seems less obvious at first sight. The occurrence of accidents through time or at different locations is not equally likely. However, the assumption need not hold over long time periods; it is rather theoretical in nature. If it holds for short periods of time, then it also holds for long periods, because the sum of Poisson distributed variables, even if their Poisson rates are different, is also Poisson distributed.
The Poisson rate for the sum of these periods is then equal to the sum of the Poisson rates for the parts.

The assumption that really counts for a comparison of (composite) situations is whether two outcomes from an aggregation of situations in time and/or space have a comparable mix of basic situations, e.g., a comparison of the number of accidents on one particular day of the year with that of another day (the next day, or the same day of the next week, etc.). If the conditions are assumed to be the same (same duration, same mix of traffic and situations, same weather conditions, etc.), then the resulting numbers of accidents are outcomes of the same Poisson process. This assumption can be tested by estimating the rate parameter on the basis of the two observed values (the estimate being the average of the two values). Probability theory can then be used to compute the likelihood of the equality assumption, given the two observations and their mean.

This statistical procedure is rather powerful. The Poisson assumption has been investigated many times and turns out to be supported by a vast body of empirical evidence. It has been applied in numerous situations to find out whether differences in observed numbers of accidents suggest real differences in safety. The main purpose of this procedure is to detect differences in safety: over time, between different places, or between different conditions. Such differences may guide the process of improvement. Because the main concern is to reduce the number of accidents, such an analysis may point to the most promising areas for treatment. A necessary condition for the application of such a test is that the numbers of accidents to be compared are large enough to show existing differences. In many local cases an application is not possible.
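The equality test described above can be sketched in its standard conditional form: under equal Poisson rates, given the total n = x1 + x2, the first count follows a Binomial(n, 1/2) distribution. The following is a minimal sketch of that conditional test with made-up accident counts, not the paper's own procedure.

```python
from math import comb

def poisson_equality_pvalue(x1, x2):
    """Two-sided exact test that two observed accident counts come from
    the same Poisson process: conditional on n = x1 + x2, under the null
    hypothesis x1 ~ Binomial(n, 0.5)."""
    n = x1 + x2
    k = min(x1, x2)
    # Probability of a count at least as extreme as the smaller one,
    # doubled for the symmetric two-sided test.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(round(poisson_equality_pvalue(10, 10), 3))  # 1.0 (identical counts)
print(round(poisson_equality_pvalue(4, 16), 3))   # 0.012 (a real difference is likely)
```

With 4 versus 16 accidents the equality assumption is rejected at the usual 5% level; with small counts such as 2 versus 5 it would not be, which is exactly the limitation for local black-spot analysis mentioned below.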
Accident black-spot analysis is often hindered by this limitation, e.g., if such a test is applied to find out whether the number of accidents at a particular location is higher than average. The procedure described can also be used if the accidents are classified according to a number of characteristics, in order to find promising safety targets. The Poisson assumption holds not only under aggregation but also under disaggregation, and the accident numbers can be tested against each other on the basis of the Poisson assumptions. Such a test is rather cumbersome, because for each particular case, i.e. for each different Poisson parameter, the probabilities of all possible outcomes must be computed to apply the test. In practice this is not necessary when the numbers are large: the Poisson distribution can then be approximated by a Normal distribution, with mean and variance equal to the Poisson parameter. Once the mean value and the variance of a Normal distribution are given, all tests can be rephrased in terms of the standard Normal distribution with zero mean and unit variance. No computations are necessary any more; test statistics can be drawn from tables.

3. The use of accident statistics for traffic safety policy

The testing procedure described has its merits for those types of analysis that are based on the assumptions mentioned. The best example of such an application is the monitoring of safety for a country or region over a year, using the total number of accidents (possibly of a particular type, such as fatal accidents), in order to compare this number with the outcome of the year before. If sequences of accidents are given over several years, then trends in the developments can be detected and accident numbers predicted for the following years. Once such a trend is established, the value for the next year or years can be predicted, together with its error bounds. Deviations from a given trend can also be tested afterwards, and new actions planned.
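The normal approximation just described reduces the comparison of two large accident counts to a standard z statistic. A sketch follows; the example counts are invented for illustration, and this is the usual two-sample approximation rather than a procedure quoted from the paper.

```python
from math import erf, sqrt

def z_test_poisson(x1, x2):
    """Approximate two-sided test of equal Poisson rates for large counts:
    under the null hypothesis, x1 - x2 is approximately Normal(0, x1 + x2)."""
    z = (x1 - x2) / sqrt(x1 + x2)
    p = 1 - erf(abs(z) / sqrt(2))  # two-sided tail probability of N(0, 1)
    return z, p

# E.g. yearly accident totals before and after a policy change (made-up numbers).
z, p = z_test_poisson(520, 440)
print(round(z, 2), round(p, 4))
```

Because the statistic is standard Normal, the critical values really can be "drawn from tables" as the text says: |z| > 1.96 corresponds to the 5% level.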
The most famous such analysis was carried out by Smeed (1949). We will discuss this type of accident analysis in more detail later.

(1) The application of the Chi-square test for interaction has been generalised to higher-order classifications. Foldvary and Lane (1974), in measuring the effect of compulsory wearing of seat belts, were among the first to apply the partitioning of the total Chi-square into values for the higher-order interactions of four-way tables.

(2) Tests are not restricted to overall effects; Chi-square values can be decomposed with respect to sub-hypotheses within the model. Also in the two-way table, the total Chi-square can be decomposed into interaction effects of part tables. The advantage of (1) and (2) over previous situations is that large numbers of Chi-square tests on many interrelated (sub)tables, with their corresponding Chi-squares, were replaced by one analysis with an exact partitioning of one Chi-square.

(3) More attention is given to parameter estimation. E.g., the partitioning of the Chi-square made it possible to test for linear or quadratic restraints on the row parameters or for discontinuities in trends.

(4) The unit of analysis is generalised from counts to weighted counts. This is especially advantageous for road safety analyses, where corrections for period of time, number of road users, number of locations or number of vehicle kilometres are often necessary. The last option is not found in many statistical packages. Andersen (1977) gives an example for road safety analysis in a two-way table. A computer programme, WPM, developed for this type of analysis of multi-way tables, is available at SWOV (see De Leeuw and Oppe, 1976). Accident analysis at this level is not explanatory; it tries to detect safety problems that need special attention. The basic information needed consists of accident numbers, to describe the total amount of unsafety, and exposure data to calculate risks and to find situations or (groups of) road users with a high level of risk.
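For the simplest two-way case, the Chi-square machinery referred to above reduces to the familiar 2x2 interaction test; with one degree of freedom the p-value needs no tables, since P(X² > x) = 1 − erf(√(x/2)). The sketch below uses made-up before/after counts and is not the WPM programme mentioned in the text.

```python
from math import erf, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson Chi-square for a 2x2 table [[a, b], [c, d]], 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = 1 - erf(sqrt(chi2 / 2))  # survival function of Chi-square with 1 df
    return chi2, p

# Accidents before/after a measure at treated vs. control sites (invented numbers).
chi2, p = chi2_2x2(30, 70, 55, 45)
print(round(chi2, 2), round(p, 4))
```

A significant interaction here would mean the before/after change differs between treated and control sites, which is the kind of sub-hypothesis the partitioned Chi-square isolates in larger tables.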
4. Accident analysis for research purposes

Traffic safety research is concerned with the occurrence of accidents and their consequences. Therefore one might say that the object of research is the accident. The researcher's interest, however, is focused less on this final outcome itself and much more on the process that results (or does not result) in accidents. It is therefore better to regard the critical event in traffic as the object of study. One of the major problems in studying the traffic process that results in accidents is that the actual occurrence is hardly ever observed by the researcher. Investigating a traffic accident, he will try to reconstruct the event from indirect sources, such as the information given by the road users involved or by eye-witnesses, about the circumstances, the characteristics of the vehicles, the road and the drivers. As such, this is not unique in science; there are more examples of indirect study of the object of research. A second difficulty, however, is that the object of research cannot be evoked. Systematic research by means of controlled experiments is only possible for aspects of the problem, not for the problem itself. The combination of indirect observation and lack of systematic control makes it very difficult for the investigator to detect which factors, under what circumstances, cause an accident. Although the researcher is primarily interested in the process leading to accidents, he has almost exclusively information about its consequences, its product: the accident. Furthermore, the context of accidents is complicated.
Generally speaking, the following aspects can be distinguished:
- Given the state of the traffic system, the traffic volume and composition, the manoeuvres of the road users, their speeds, the weather conditions, the condition of the road, the vehicles, the road users and their interactions, accidents can or cannot be prevented.
- Given an accident, and depending on a large number of factors such as the speed and mass of the vehicles, the collision angle, the protection of road users and their vulnerability, the location of impact, etc., the injuries are more or less severe or the material damage is more or less substantial.

Although these aspects cannot be studied independently, from a theoretical point of view it is advantageous to distinguish the number of situations in traffic that are potentially dangerous from the probability of having an accident given such a potentially dangerous situation, and also from the resulting outcome given a particular accident.

This conceptual framework is the general basis for the formulation of risk regarding the decisions of individual road users as well as the decisions of controllers at higher levels. In the mathematical formulation of risk we need an explicit description of our probability space, consisting of the elementary events (the situations) that may result in accidents, the probability for each type of event of ending up in an accident, and finally the particular outcome, the loss, given that type of accident.

A different approach is to look at combinations of accident characteristics in order to find critical factors. This type of analysis may be carried out on the total group of accidents or on subgroups. The accident itself may be the unit of research, but so may a road, a road location, a road design (e.g. a roundabout), etc.

Chinese translation: Possibilities and Limitations of Accident Analysis, S. Oppe. Abstract: Accident statistics, especially national-level data, are particularly useful for monitoring and predicting accident developments, for detecting positive or negative developments in safety, and for defining safety targets and evaluating safety.
Undergraduate graduation design (thesis): translation and original of foreign references
School: School of Automation
Major: Electrical Engineering and Automation (Power System Automation)
Grade and class: Class 3, Grade 2011
Student ID:
Student name:
Supervisor:
March 10, 2015

Improving the design of an extra-high-voltage substation expansion connector through analysis of the magnetic field

Joan Hernández-Guiteras (a), Jordi-Roger Riba (a,*), Luís Romeral (b)
(a) Universitat Politècnica de Catalunya, Electrical Engineering Department, 08222 Terrassa, Spain
(b) Universitat Politècnica de Catalunya, Electronic Engineering Department, 08222 Terrassa, Spain

Abstract: In many countries of the world, the demand for electric power is growing faster than transmission capacity.
Building new transmission lines is a severe challenge because of environmental constraints, social concerns and the economic investment required. In addition, transmission networks often have to carry loads close to their rated capacity. Consequently, improving the efficiency and reliability of transmission systems has attracted attention. This study focuses on a 400 kV, 3000 A, 50 Hz extra-high-voltage substation expansion connector used to connect two substation busbars, each 150 mm in diameter. The substation connector is a four-conductor aluminium connector that provides a path for power transfer between the busbars. Preliminary tests showed an unbalanced distribution of current among the conductors, mainly influenced by their mutual proximity. A three-dimensional finite element method was applied to improve the design and to evaluate and compare the electromagnetic and thermal performance of the original and the improved versions of the connector. As reported in this paper, tests under laboratory conditions have verified the accuracy of the simulation method, which may become a valuable tool for advancing the design process of substation connectors, improving not only their thermal performance but also their reliability.

Keywords: substation connector, extra-high voltage, power transmission system, finite element method, numerical simulation, proximity effect, thermal analysis

1. Introduction

The continual growth of global energy demand, together with the growing share of distributed and renewable energy, promotes the construction and study of extra-high-voltage and ultra-high-voltage power transmission systems [1].
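The unbalanced current sharing among parallel conductors mentioned above can be reproduced qualitatively with a toy circuit model. All numbers here are invented for illustration and are not from the paper: four identical conductors in a flat arrangement, each with self-reactance 1.0 p.u. and mutual reactances 0.6, 0.4 and 0.3 p.u. to their first, second and third neighbours. Since the paths are in parallel, every conductor sees the same voltage drop, and the currents follow from solving Z·I = V·1.

```python
def solve(a, b):
    """Solve a·x = b by Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (b[r] - s) / a[r][r]
    return x

# Reactance matrix (p.u., invented): self = 1.0, mutuals fall off with spacing.
Z = [[1.0, 0.6, 0.4, 0.3],
     [0.6, 1.0, 0.6, 0.4],
     [0.4, 0.6, 1.0, 0.6],
     [0.3, 0.4, 0.6, 1.0]]
# Same unit voltage drop across all four parallel paths.
I = solve(Z, [1.0, 1.0, 1.0, 1.0])
shares = [round(i / sum(I), 3) for i in I]
print(shares)  # [0.333, 0.167, 0.167, 0.333]
```

Even this crude model shows the outer conductors carrying twice the current of the inner ones, because the inner conductors are more strongly coupled to their neighbours; quantifying and correcting that unbalance in a real connector geometry is what the 3-D finite element analysis in the paper is for.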
(Chinese translation: about 4,500 characters)

Undergraduate graduation design (thesis): foreign original and translation
Department: Department of Management
Student name: Guo Miao
Major: Accounting
Class:
Student ID:
Supervisor:
June 2013

Foreign literature: original and translation

Internal Control

The emergence and development of the theory of internal control

Internal control in Western countries has a long history of development. According to its characteristics at different stages, that development can be divided into four stages: the internal check stage, the internal control system stage, the internal control structure stage, and the overall internal control framework stage.

The internal check stage: the infancy of internal control

Before the 1940s, the concept in use was internal check; this was the embryonic stage of internal control. The Keshi Accounting Dictionary defines internal check as providing an effective organization and mode of operation designed to prevent errors and illegal activities in business processes. Its main characteristic is that no individual or department acting alone can control any part of a business from beginning to end: responsibilities are divided across the organization, and each transaction is cross-checked through the normal functioning of other individuals or departments. In designing effective internal checks to ensure that all business is completed correctly through the prescribed procedures, the internal check function is always an integral part. By the late 1940s, internal check theory had become an important body of management methods and concepts. Internal check takes the prevention and detection of errors as its purpose, segregation of duties and account reconciliation as its means, and money, accounting matters and accounts as its main objects of control. Its characteristic is account reconciliation and segregation of duties as the main content, achieved through cross-examination or cross-control.
Xi'an Jiaotong University City College, undergraduate graduation design (thesis)

In general, the implementation of the internal check function can be roughly divided into four categories: physical checks, mechanical checks, institutional checks, and bookkeeping checks. The basic idea of internal check is that "security is the result of checks and balances", which rests on two assumptions: first, that the chance of two or more persons or departments unconsciously making the same mistake is very small; and second, that the possibility of two or more persons or departments consciously colluding in fraud is much lower than that of a single person or department committing fraud. Practice has proved these assumptions reasonable. The internal check mechanism, organised around segregation-of-duties control, is the foundation of the modern theory of internal control.

The internal control system stage: the emergence of internal control

From the late 1940s to the early 1970s, the concept of the internal control system developed out of the idea of internal check; this is the stage at which internal control in the modern sense was generated. The Industrial Revolution greatly promoted major changes in the relations of production, and the joint-stock company gradually became the main form of business organization in Western countries. To meet the requirements of the prevailing socio-economic relations and to protect the economic interests of investors and creditors, Western countries used legal requirements to strengthen corporate financial and accounting information and the internal management of economic activity.

In 1934, the U.S. Securities Exchange Act put forward for the first time the concept of "internal accounting control", covering general and special authorization, bookkeeping records, transaction records, and the comparison of records with actual assets as remedial measures.
In 1949, the Committee on Auditing Procedure of the American Institute of Certified Public Accountants (AICPA), in "Internal Control: Elements of a Coordinated System and Its Importance to Management and the Independent Public Accountant," put forward the first official definition of internal control: "Internal control comprises the plan of organization and all of the coordinate methods and measures adopted within a business to safeguard its assets, check the accuracy of its accounting data, promote operational efficiency, and encourage adherence to prescribed managerial policies." This definition covered the organization plan and the methods, measures, rules, and regulations adopted to implement internal control, breaking through the limitation of control tied only to the finance and accounting departments, and stated four objectives of internal control: to protect assets in business activities, to check the accuracy and reliability of financial data, to improve operating efficiency, and to promote adherence to management policies. The definition's positive significance is that it helps management strengthen its control, but its scope is too broad. In 1958, the committee issued Statement on Auditing Procedure No. 29, "Scope of the Independent Auditor's Review of Internal Control," which, according to the requirements of audit responsibility, divided internal control into two aspects: internal accounting control and internal administrative control. The former relates mainly to the first two objectives of internal control, the latter to the last two. This is the origin of the "dichotomy" of the internal control system.
Because the concept of administrative control was vague, in actual business the line between administrative control and internal accounting control was difficult to draw. To clarify the relation between the two, in 1972 the AICPA, in Statement on Auditing Standards No. 1, restated the definitions of administrative control and accounting control: "Administrative control includes, but is not limited to, the plan of organization and the procedures and records that are concerned with the decision processes leading to management's authorization of transactions. Such authorization is a management function directly associated with the responsibility for achieving the objectives of the organization, and is the starting point for establishing accounting control of transactions." At the same time, accounting control comprises the plan of organization and the procedures and records that are concerned with the safeguarding of assets and the reliability of financial records. After this series of changes and redefinitions, the meaning of internal control became clearer and more standardized and its scope increasingly broad; the concept of internal audit was introduced, the definitions received worldwide recognition and reference, and the internal control system took shape.

The internal control structure phase: the development of internal control

The theory of the internal control structure formed from the 1980s to the 1990s. In this phase, the focus of Western accounting and auditing research on internal control gradually deepened from general meaning to specific content.
During this period, systems management theory became the new management idea. It holds that every object in the world is a system composed of elements; because of the complicated nonlinear relationships among those elements, the system has new properties that the elements individually do not have; management should therefore proceed from the whole and from the relationships among the elements. Systems management theory treats the enterprise as an organic system composed of subsystems, paying attention to coordination among subsystems and to interaction with the environment. Under the modern company system and systems management theory, the earlier concept of the internal control system could no longer satisfy practical needs. In 1988, the AICPA issued Statement on Auditing Standards No. 55, which for the first time replaced the term "internal control system" with "internal control structure" and pointed out: "An entity's internal control structure consists of the policies and procedures established to provide reasonable assurance that specific entity objectives will be achieved." The statement held that the internal control structure consists of three components, the control environment, the accounting system, and control procedures; it regarded internal control as an organic whole composed of these three elements and raised attention to the control environment.

The control environment reflects the attitude and behavior of the board of directors, managers, owners, and other personnel toward control.
It specifically includes: management philosophy and operating style; organizational structure; the functioning of the board of directors and the audit committee; personnel policies and procedures; the methods of assigning authority and responsibility; and the control methods management uses to monitor and inspect work, including business planning, budgeting, forecasting, profit plans, responsibility accounting, and internal audit.

The accounting system prescribes the methods for identifying, collecting, classifying, analyzing, recording, and reporting an entity's transactions. An effective accounting system includes the following: identifying and recording all valid transactions; classifying transactions appropriately, as the basis for preparing statements; measuring the value of transactions so that their monetary value can be recorded in the financial statements; determining the period in which transactions occurred, to ensure they are recorded in the proper accounting period; and presenting transactions and the related disclosures properly in the financial statements.

Control procedures are the policies and procedures management establishes to ensure that specified objectives are achieved. They include proper authorization of transactions and activities; clear division of each employee's responsibilities; adequate documents, vouchers, and records; controls linking access to assets with the related records; and independent checks on performance.
The internal control structure takes systems management theory as its main control thought. It attaches great importance to environmental factors, bringing the control environment, the accounting system, and control procedures into the category of internal control as its three elements. It no longer distinguishes between accounting control and administrative control, describing internal control uniformly in terms of elements, on the view that the two are inseparable and interrelated.

The integrated internal control framework phase: the maturation of internal control

After the 1990s, the study of internal control entered a new stage. With the improvement of corporate governance institutions and the development of electronic information technology, and in order to adapt to new economic and organizational forms using new management thinking, the "internal control structure" developed into the "integrated internal control framework." In 1992, the Committee of Sponsoring Organizations of the Treadway Commission (COSO), a well-known research body on internal control, issued a landmark report, "Internal Control — Integrated Framework," also known as the COSO report, which unified the framework of the internal control system. The report was supplemented in 1994 and has been widely acknowledged by the international community and by professional bodies, with broad applicability. The COSO report is a historic breakthrough in the research of internal control theory: it first developed the internal control system from the original planar structure into a spatial framework model, and it represents the highest level of research on internal control in the world.

The COSO report defines internal control as "a process, effected by an entity's board of directors, management, and other personnel, designed to provide reasonable assurance regarding the achievement of objectives relating to the effectiveness and efficiency of operations, the reliability of financial reporting, and compliance with applicable laws and regulations."
From this definition it can be seen that the COSO report regards internal control as a process, effected by different personnel, and at the same time as a set of designed and implemented procedures providing reasonable assurance that the entity's objectives will be achieved. The COSO report puts forward three objectives and five elements of internal control. The three objectives are operations, reporting, and compliance. The operations objective means that internal control should ensure the efficiency and effectiveness of business activities; the reporting objective means that internal control should ensure the reliability of the enterprise's financial reports; the compliance objective means that internal control should ensure adherence to applicable laws and regulations and to the enterprise's own rules.

The COSO report holds that internal control consists of five interrelated elements that form an integral system: the control environment, risk assessment, control activities, information and communication, and monitoring.

Control environment: the atmosphere in which staff carry out business activities and fulfill their control obligations. It includes the integrity and ethical values of staff, their competence, the board of directors or audit committee, management's philosophy and operating style, the organizational structure, the way authority and responsibility are assigned, and human resources policies and their implementation.

Risk assessment: management's identification of, and appropriate action to manage, the internal or external risks affecting the operations, financial reporting, and compliance objectives, including risk identification and risk analysis. Risk identification examines external factors (such as technological development, competition, and changes in the economy) and internal factors (such as the quality of staff, the nature of the company's activities, and the characteristics of information systems processing).
Risk analysis involves estimating the significance of a risk, assessing the likelihood of its occurring, and considering how to manage it.

Control activities: the policies and procedures the company develops and implements, and the necessary measures taken against identified risks, to help ensure that the entity's objectives are achieved. In practice, control activities take various forms, usually the following categories: performance reviews, information processing, physical controls, and segregation of duties.

Information and communication: to enable staff to perform their duties, they must be provided with the information needed in executing, managing, and controlling operations, and with means of exchanging and disseminating it; the company must identify, capture, and exchange external and internal information. External information includes market share, regulatory requirements, customer complaints, and the like. Internal information includes the accounting system, that is, the records and reporting methods established for the entity's transactions and economic events and for maintaining and recording assets, liabilities, and owners' equity. Communication ensures that employees understand their responsibilities in maintaining control over financial reporting. Means of communication include policy manuals, financial reporting manuals, reference books, and verbal communication or management example.

Monitoring: the process of evaluating the quality of internal control's operation over time, that is, evaluating the operation and improvement of internal control activities, including internal and external audits and external exchanges.

The five elements of internal control are in fact wide-ranging, interrelated, and mutually influencing.
The control environment is the basis for implementing the other control elements; control activities must rest on a detailed understanding and assessment of the risks the company may face; risk assessment and control activities, in turn, require effective communication of information within the enterprise; finally, effective monitoring is the means of safeguarding the quality of internal control implementation. The three objectives and five elements laid the foundation for the formation and development of internal control system theory, fully reflecting the guiding idea of modern enterprise management that security results from systematic management. The COSO report emphasizes the integrated framework and the internal control system composed of five elements; the framework is the foundation for establishing, operating, and maintaining an internal control system.

In summary, because of changes in the social, economic, and management environment, the functions of internal control change accordingly, guiding the evolution of internal control theory.
As can be seen from the history of internal control theory, internal control often derives from the management requirements of organizational change. From the agricultural economy to the industrial economy, innovation in management methods and tools has been the driving power of internal control's development: from centering on the internal check, to controlling the mutual relations among the organization's internal subsystems, to the modern theory of internal control represented by COSO, which aims at preventing management loopholes and achieves overall system optimization through organizational control and information systems. These broad phases correspond to the two economic revolutions. Therefore, analyzing foreign internal control theory and its evolution requires combining the prevailing socio-economic environment with the requirements of business organization and management, so as to understand more deeply the nature of the development of internal control theory.
Foreign Literature Translation (English original and Chinese translation)

Source: Y Hassan. Geometric Design of Highways [J]. Advances in Transportation Studies, 2016, 6(1): 31-41.

Geometric Design of Highways

Y Hassan

A. Alignment Design

The alignment of a road, shown in the plan view, is a series of straight lines called tangents connected by circular curves. In modern practice it is common to interpose transition, or spiral, curves between tangents and circular curves.

The alignment should be continuous. Sudden changes from flat tangents to sharp curves, or long tangents ending in sharp curves, should be avoided; otherwise traffic accidents may occur. Similarly, placing circular arcs of different radii end to end (compound curves), or inserting a short tangent between two arcs of different radii, makes for poor alignment unless an easement curve is placed between the arcs. A long, gentle curve is always good alignment: it is pleasing in appearance and will not have to be rebuilt later. However, an alignment composed entirely of curves is not ideal for a two-lane road, because some drivers hesitate to overtake on curved sections. Long, flat curves should be used at small deflection angles; a short curve there appears as a "kink." In addition, the horizontal and vertical alignment should be designed together, not each in isolation. For example, serious accidents can occur when the beginning of a horizontal curve lies near the crest of a vertical curve.

A vehicle traveling on a curve is subjected to centrifugal force, which must be balanced by an equal and opposite force supplied by superelevation and side friction. From the standpoint of highway design, neither the superelevation nor the side friction can exceed certain maxima, and for a given design speed these controlling values limit the curvature. In general, the curvature of a circular curve is expressed by its radius.
For alignment design, curvature is often expressed instead by the degree of curve, that is, the central angle subtended by a 100-ft arc of the curve, which is inversely proportional to the radius.

A normal crown is used on tangent sections and the curved sections are superelevated, so a transition section must be provided between the normal section and the fully superelevated section. The usual practice is to keep the design elevation of the centerline unchanged, raising the outer edge and lowering the inner edge to form the superelevation. Where a tangent connects directly with a circular curve, the superelevation should begin on the tangent before the curve is reached and attain its full value a certain distance beyond the beginning of the curve.

If a vehicle travels at high speed on a constrained alignment, such as a tangent connected directly to a small-radius circular curve, the ride is extremely uncomfortable. As the car approaches the curve, the superelevation begins and the vehicle tilts inward, but the passengers must hold themselves in position because no centrifugal force acts yet. When the car reaches the curve, centrifugal force appears suddenly, forcing the passengers to adjust their posture again. When the car leaves the curve, the process is reversed. With a relaxation (transition) curve inserted, the radius changes gradually from infinity to the fixed radius of the circular curve, the centrifugal force builds up gradually, and the superelevation is developed carefully along the transition, so that these jolts are avoided.

Easement curves have been used on railways for many years but have only more recently been applied to highways. This is understandable: a train must follow a precise track, and the discomfort described above can be eliminated only by an easement curve.
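The quantities discussed above are conventionally related by a few standard U.S.-customary formulas that the text describes only in words: the degree of curve D = 5729.58/R (arc definition, R in feet), the point-mass superelevation equation e + f = V²/(15R) (V in mi/h, R in ft), and an AASHTO-style spiral length Ls = 3.15·V³/(R·C) that limits the rate of change of lateral acceleration to C ft/s³. A sketch under those conventional formulas; the specific speed, friction factor, and comfort rate below are illustrative assumptions, not values from the text:

```python
# Standard horizontal-alignment relations (U.S. customary units).

def degree_of_curve(radius_ft: float) -> float:
    """Central angle (deg) subtended by a 100-ft arc: D = 5729.58 / R."""
    return 5729.58 / radius_ft

def min_radius(speed_mph: float, e_max: float, f_max: float) -> float:
    """Minimum curve radius (ft) from the point-mass equation e + f = V^2 / (15 R)."""
    return speed_mph ** 2 / (15.0 * (e_max + f_max))

def spiral_length(speed_mph: float, radius_ft: float, c_fts3: float = 2.0) -> float:
    """Transition (spiral) length (ft) limiting lateral-acceleration rate to c:
    L_s = 3.15 V^3 / (R c)."""
    return 3.15 * speed_mph ** 3 / (radius_ft * c_fts3)

# Degree of curve is inversely proportional to radius:
print(degree_of_curve(5729.58))           # 1.0 (a 1-degree curve)
print(round(degree_of_curve(1000.0), 2))  # 5.73

# Illustrative check at 60 mi/h with assumed e = 0.08, f = 0.12:
r = min_radius(60.0, 0.08, 0.12)
print(r)                                  # 1200.0 ft
print(spiral_length(60.0, r))             # 283.5 ft of spiral to reach that curve
```

The gradual build-up of centrifugal force described in the text corresponds directly to the spiral-length formula: a higher speed or a sharper curve demands a longer transition for the same passenger comfort rate C.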
However, the driver of an automobile can change lateral position on the road at will and can provide a transition for himself by steering a gradually tightening path. But doing this within one lane (and sometimes beyond it) is dangerous, and a well-designed transition curve makes such wandering unnecessary. Largely as a safety measure, the spiral is now widely used on highways as a transition curve.

For a circular curve of a given radius, adding transition curves at the ends changes the position of the curve relative to its tangents, so whether transitions will be used should be decided before the final location survey. The beginning of an ordinary circular curve is usually labeled the PC or BC and its end the PT or EC. For curves that include transitions, the usual point designations are TC, SC, CS, and ST.

On two-lane roads, the pavement should be widened on sharp curves. This is done mainly for the following reasons: (1) drivers fear getting too close to the edge of the pavement; (2) because the rear wheels do not track the front wheels on a curve, the effective width occupied by the vehicle increases; and (3) the front of the vehicle, angled relative to the road centerline, projects a greater width. For pavements 24 ft wide, the added width is negligible except at low design speeds, around 30 mi/h, on sharp curvature. For narrower pavements, however, widening is important even on gentler curves. Recommended widening values and widening design are given in "Geometric Design of Highways."

B. Grade Line Design

The vertical alignment of the highway, and its effect on the safety and economy of vehicle operation, is one of the most important elements of highway design. The vertical alignment consists of straight grade lines connected by vertical parabolic or circular curves. A grade line that rises in the direction of travel is an upgrade; one that falls is a downgrade.
In the analysis of grades and grade control, the designer usually studies the effect of grade changes on the centerline profile. In establishing grades, the ideal situation is to balance excavation and fill without large amounts of borrow or waste, with all earth moved downhill where possible and haul distances kept short. Grades should follow the terrain and be consistent with the rise and fall of the existing drainage. In mountainous country, grades should balance so as to minimize total cost. In flat or prairie country, grades run approximately parallel to the ground surface but high enough above it to allow the surface to drain and, where necessary, to let wind clear snow from the road. Where a road approaches or follows a stream, the grade elevation is set by the expected flood level. In any case, gentle grade changes from downgrade to upgrade should be placed in excavation, and changes from upgrade to downgrade in fill, rather than joining abrupt breaks with short vertical curves. Good alignment of this kind usually avoids creating humps or hollows that contradict the surrounding landscape. In setting the grade line, other considerations can be far more important than balancing cut and fill, and these deserve more detailed study and adjustment. In general, a design whose grades conform to existing conditions is preferable, since it avoids unnecessary cost.

In grade analysis and control, the effect of grades on motor-vehicle operating costs is one of the most important considerations. As grades increase, fuel consumption rises markedly and speeds fall.
A more economical solution balances the added annual cost of reducing a grade against the added annual vehicle operating cost of leaving it unchanged. The exact solution depends on knowledge of traffic volume and traffic composition, which can be obtained only through traffic surveys.

Maximum grades differ greatly among the states. AASHTO recommends that the maximum grade be selected according to design speed and terrain. Current designs use a maximum grade of 5% at a design speed of 70 mi/h; at a design speed of 30 mi/h, the maximum grade is generally 7% to 12%, depending on topography.

On longer sustained climbs, where no lane is provided for slow-moving vehicles, the grade length should not exceed the critical length of grade, which may range from about 1700 ft on a 3% grade down to 500 ft on an 8% grade. A sustained grade should be flatter than the maximum grade used at either end of the highway section. It is often better to break a long, uniform sustained grade by placing the steepest grades at the bottom and flattening the grade near the top of the ascent. Sight restrictions caused by breaks in the profile must also be avoided.

The maximum grade used on highways is 9%. Minimum-gradient criteria matter only where drainage is a problem, where water must be carried along the roadway in a side ditch or gutter; in such cases AASHTO recommends a minimum gradient of 0.35%.

C. Sight Distance

To ensure driving safety, the road must be designed so that the driver can see far enough ahead to avoid an unexpected obstacle or to pass safely. Sight distance is the length of roadway visible to the driver of the vehicle.
Two sight distances are considered in design: "stopping sight distance" (also called "non-passing sight distance") and "passing sight distance."

Whatever the situation, sound design requires that the driver be able to see a hazard at a sufficient distance ahead to brake to a stop before striking it. Further, it is not safe to assume that the vehicle can avoid the hazard by leaving its lane, since that can cause loss of control or a collision with another vehicle.

Stopping sight distance has two components. The first is the distance traveled between the driver's detection of an obstacle and the application of the brakes; during this perception-and-reaction interval the vehicle continues at its initial speed. The second is the distance traveled while braking to a stop. The first part depends on the vehicle's speed and on the driver's perception and brake-reaction time; the second depends on the speed, the brakes, the tires, the condition of the road surface, and the alignment and grade of the road.

On two-lane highways, drivers must also have adequate sight distance to pass slower vehicles safely; otherwise highway capacity falls and accidents increase, because impatient drivers will risk head-on collisions by passing when it is unsafe. The minimum distance ahead that must be visible for safe passing is called the passing sight distance.

In deciding whether to pass, the driver must weigh the clear distance ahead against the distance required to complete the passing maneuver. The factors influencing the decision are the driver's caution and the accelerating ability of the vehicle. Because people differ greatly, passing practice, which depends mainly on human judgment and behavior rather than on the laws of mechanics, varies widely among drivers. To establish design values for passing sight distance, engineers observed the passing behavior of many drivers; the basic field studies on which the standards rest were made between 1938 and 1941.
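The two components of stopping sight distance described above are conventionally combined in the U.S.-customary formula SSD = 1.47·V·t + V²/(30(f ± G)), where V is speed in mi/h, t the perception-reaction time in seconds, f the friction coefficient, and G the grade as a decimal. The formula and the input values below (t = 2.5 s, f = 0.35) are standard textbook assumptions, not figures taken from this text:

```python
# Stopping sight distance = perception-reaction distance + braking distance.
# U.S. customary form: SSD = 1.47*V*t + V^2 / (30*(f + G))
#   V: speed (mi/h); t: perception-reaction time (s);
#   f: friction coefficient; G: grade (decimal, + uphill, - downhill)

def stopping_sight_distance(v_mph, t_s=2.5, f=0.35, grade=0.0):
    reaction = 1.47 * v_mph * t_s                 # ft traveled before braking
    braking = v_mph ** 2 / (30.0 * (f + grade))   # ft traveled while braking
    return reaction + braking

# Illustrative (assumed) inputs:
print(round(stopping_sight_distance(60.0), 1))               # level road: 563.4 ft
print(round(stopping_sight_distance(60.0, grade=-0.03), 1))  # 3% downgrade: 595.5 ft
```

Note how the grade term captures the text's point that braking distance depends on the grade of the road: a downgrade (negative G) shrinks the denominator and lengthens the required sight distance.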
The assumed operating conditions were as follows:

1. The overtaken vehicle travels at uniform speed.

2. The passing vehicle has reduced speed and trails the overtaken vehicle as it enters the passing section.

3. On reaching the passing section, the driver requires a short time to perceive the clear passing section and to begin the maneuver.

4. If an opposing vehicle appears, passing is completed with a delayed start and a hurried return. During the maneuver the passing vehicle accelerates, and its average speed in the passing lane is 10 mi/h higher than that of the overtaken vehicle.

5. When the passing vehicle returns to its own lane, a suitable safety margin remains between it and any opposing vehicle in the other lane.

The sum of the distances for these five elements is the passing sight distance.
Research on University Students' Employment: Foreign References (Translations and Originals)

1. References

1. Brown, D. (2015). "The challenges faced by university graduates in the job market." Journal of Education and Work, 28(8), 917-929.
2. Smith, J. (2016). "Factors influencing university graduates' employability." Studies in Higher Education, 41(1), 163-178.
3. Johnson, R. (2017). "The role of career counseling in supporting university students' transition to the workplace." Journal of Career Development, 44(3), 275-288.

2. Translated References (Chinese)

1. Brown, D. (2015). "大学毕业生在就业市场中所面临的挑战。" 《教育与职业学刊》, 28(8), 917-929。
2. Smith, J. (2016). "影响大学毕业生就业能力的因素。" 《高等教育研究》, 41(1), 163-178。
3. Johnson, R. (2017). "职业咨询在支持大学生进入职场中的角色。" 《职业发展学刊》, 44(3), 275-288。
Graduation Project Report: English Literature and Chinese Translation. School of Software, Software Engineering; June 2014.

An Introduction to Java

The first release of Java in 1996 generated an incredible amount of excitement, not just in the computer press, but in mainstream media such as The New York Times, The Washington Post, and Business Week. Java has the distinction of being the first and only programming language that had a ten-minute story on National Public Radio. A $100,000,000 venture capital fund was set up solely for products produced by use of a specific computer language. It is rather amusing to revisit those heady times, and we give you a brief history of Java in this chapter.

In the first edition of this book, we had this to write about Java: "As a computer language, Java's hype is overdone: Java is certainly a good programming language. There is no doubt that it is one of the better languages available to serious programmers. We think it could potentially have been a great programming language, but it is probably too late for that. Once a language is out in the field, the ugly reality of compatibility with existing code sets in."

Our editor got a lot of flack for this paragraph from someone very high up at Sun Microsystems who shall remain unnamed. But, in hindsight, our prognosis seems accurate. Java has a lot of nice language features; we examine them in detail later in this chapter. It has its share of warts, and newer additions to the language are not as elegant as the original ones because of the ugly reality of compatibility.

But, as we already said in the first edition, Java was never just a language. There are lots of programming languages out there, and few of them make much of a splash.
Java is a whole platform, with a huge library containing lots of reusable code, and an execution environment that provides services such as security, portability across operating systems, and automatic garbage collection.

As a programmer, you will want a language with a pleasant syntax and comprehensible semantics (i.e., not C++). Java fits the bill, as do dozens of other fine languages. Some languages give you portability, garbage collection, and the like, but they don't have much of a library, forcing you to roll your own if you want fancy graphics or networking or database access. Well, Java has everything: a good language, a high-quality execution environment, and a vast library. That combination is what makes Java an irresistible proposition to so many programmers.

Simple

We wanted to build a system that could be programmed easily without a lot of esoteric training and which leveraged today's standard practice. So even though we found that C++ was unsuitable, we designed Java as closely to C++ as possible in order to make the system more comprehensible. Java omits many rarely used, poorly understood, confusing features of C++ that, in our experience, bring more grief than benefit.

The syntax for Java is, indeed, a cleaned-up version of the syntax for C++. There is no need for header files, pointer arithmetic (or even a pointer syntax), structures, unions, operator overloading, virtual base classes, and so on. (See the C++ notes interspersed throughout the text for more on the differences between Java and C++.) The designers did not, however, attempt to fix all of the clumsy features of C++. For example, the syntax of the switch statement is unchanged in Java. If you know C++, you will find the transition to the Java syntax easy. If you are used to a visual programming environment (such as Visual Basic), you will not find Java simple. There is much strange syntax (though it does not take long to get the hang of it).
More important, you must do a lot more programming in Java. The beauty of Visual Basic is that its visual design environment almost automatically provides a lot of the infrastructure for an application. The equivalent functionality must be programmed manually, usually with a fair bit of code, in Java. There are, however, third-party development environments that provide "drag-and-drop"-style program development.

Another aspect of being simple is being small. One of the goals of Java is to enable the construction of software that can run stand-alone in small machines. The size of the basic interpreter and class support is about 40K bytes; adding the basic standard libraries and thread support (essentially a self-contained microkernel) adds an additional 175K.

This was a great achievement at the time. Of course, the library has since grown to huge proportions. There is now a separate Java Micro Edition with a smaller library, suitable for embedded devices.

Object Oriented

Simply stated, object-oriented design is a technique for programming that focuses on the data (= objects) and on the interfaces to that object. To make an analogy with carpentry, an "object-oriented" carpenter would be mostly concerned with the chair he was building, and secondarily with the tools used to make it; a "non-object-oriented" carpenter would think primarily of his tools. The object-oriented facilities of Java are essentially those of C++.

Object orientation has proven its worth in the last 30 years, and it is inconceivable that a modern programming language would not use it. Indeed, the object-oriented features of Java are comparable to those of C++. The major difference between Java and C++ lies in multiple inheritance, which Java has replaced with the simpler concept of interfaces, and in the Java metaclass model (which we discuss in Chapter 5).

NOTE: If you have no experience with object-oriented programming languages, you will want to carefully read Chapters 4 through 6.
These chapters explain what object-oriented programming is and why it is more useful for programming sophisticated projects than are traditional, procedure-oriented languages like C or Basic.

Network-Savvy

Java has an extensive library of routines for coping with TCP/IP protocols like HTTP and FTP. Java applications can open and access objects across the Net via URLs with the same ease as when accessing a local file system.

We have found the networking capabilities of Java to be both strong and easy to use. Anyone who has tried to do Internet programming using another language will revel in how simple Java makes onerous tasks like opening a socket connection. (We cover networking in Volume II of this book.) The remote method invocation mechanism enables communication between distributed objects (also covered in Volume II).

Robust

Java is intended for writing programs that must be reliable in a variety of ways. Java puts a lot of emphasis on early checking for possible problems, later dynamic (runtime) checking, and eliminating situations that are error-prone. The single biggest difference between Java and C/C++ is that Java has a pointer model that eliminates the possibility of overwriting memory and corrupting data.

This feature is also very useful. The Java compiler detects many problems that, in other languages, would show up only at runtime. As for the second point, anyone who has spent hours chasing memory corruption caused by a pointer bug will be very happy with this feature of Java.

If you are coming from a language like Visual Basic that doesn't explicitly use pointers, you are probably wondering why this is so important. C programmers are not so lucky. They need pointers to access strings, arrays, objects, and even files. In Visual Basic, you do not use pointers for any of these entities, nor do you need to worry about memory allocation for them. On the other hand, many data structures are difficult to implement in a pointerless language.
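That last point deserves a small sketch (ours, not the book's): Java's object references behave like pointers for the purpose of building linked structures, but a bad access raises an exception instead of corrupting memory.

```java
import java.util.LinkedList;
import java.util.List;

// SafeReferences: a hypothetical illustration of Java's pointer model.
// References link objects together (as in a linked list), but there is
// no pointer arithmetic, and an invalid access throws an exception
// rather than silently overwriting memory.
public class SafeReferences {
    // Returns true if list.get(i) succeeds, false if i is out of range.
    static boolean canAccess(List<String> list, int i) {
        try {
            list.get(i);
            return true;
        } catch (IndexOutOfBoundsException e) {
            return false; // an exception, never corrupted memory
        }
    }

    public static void main(String[] args) {
        List<String> list = new LinkedList<>();
        list.add("head");
        list.add("tail");
        System.out.println(canAccess(list, 1));  // prints "true"
        System.out.println(canAccess(list, 99)); // prints "false"
    }
}
```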
Java gives you the best of both worlds. You do not need pointers for everyday constructs like strings and arrays. You have the power of pointers if you need it, for example, for linked lists. And you always have complete safety, because you can never access a bad pointer, make memory allocation errors, or have to protect against memory leaking away.

Architecture Neutral

The compiler generates an architecture-neutral object file format—the compiled code is executable on many processors, given the presence of the Java runtime system. The Java compiler does this by generating bytecode instructions which have nothing to do with a particular computer architecture. Rather, they are designed to be both easy to interpret on any machine and easily translated into native machine code on the fly.

This is not a new idea. More than 30 years ago, both Niklaus Wirth's original implementation of Pascal and the UCSD Pascal system used the same technique.

Of course, interpreting bytecodes is necessarily slower than running machine instructions at full speed, so it isn't clear that this is even a good idea. However, virtual machines have the option of translating the most frequently executed bytecode sequences into machine code, a process called just-in-time compilation. This strategy has proven so effective that even Microsoft's .NET platform relies on a virtual machine.

The virtual machine has other advantages. It increases security because the virtual machine can check the behavior of instruction sequences. Some programs even produce bytecodes on the fly, dynamically enhancing the capabilities of a running program.

Portable

Unlike C and C++, there are no "implementation-dependent" aspects of the specification. The sizes of the primitive data types are specified, as is the behavior of arithmetic on them. For example, an int in Java is always a 32-bit integer. In C/C++, int can mean a 16-bit integer, a 32-bit integer, or any other size that the compiler vendor likes.
The only restriction is that the int type must have at least as many bytes as a short int and cannot have more bytes than a long int. Having a fixed size for number types eliminates a major porting headache. Binary data is stored and transmitted in a fixed format, eliminating confusion about byte ordering. Strings are saved in a standard Unicode format. The libraries that are a part of the system define portable interfaces. For example, there is an abstract Window class and implementations of it for UNIX, Windows, and the Macintosh.

As anyone who has ever tried knows, it is an effort of heroic proportions to write a program that looks good on Windows, the Macintosh, and ten flavors of UNIX. Java 1.0 made the heroic effort, delivering a simple toolkit that mapped common user interface elements to a number of platforms. Unfortunately, the result was a library that, with a lot of work, could give barely acceptable results on different systems. (And there were often different bugs on the different platform graphics implementations.) But it was a start. There are many applications in which portability is more important than user interface slickness, and these applications did benefit from early versions of Java. By now, the user interface toolkit has been completely rewritten so that it no longer relies on the host user interface. The result is far more consistent and, we think, more attractive than in earlier versions of Java.

Interpreted

The Java interpreter can execute Java bytecodes directly on any machine to which the interpreter has been ported. Since linking is a more incremental and lightweight process, the development process can be much more rapid and exploratory. Incremental linking has advantages, but its benefit for the development process is clearly overstated. Early Java development tools were, in fact, quite slow.
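Returning briefly to the portability guarantees above: the fixed primitive sizes can be checked from within the language itself. This snippet is our own illustration, not from the text.

```java
// FixedSizes: a hypothetical illustration of Java's specified primitive
// sizes. These values are fixed by the language specification on every
// platform, unlike C/C++ where they are left to the compiler vendor.
public class FixedSizes {
    public static void main(String[] args) {
        System.out.println(Short.SIZE);   // always 16 bits
        System.out.println(Integer.SIZE); // always 32 bits
        System.out.println(Long.SIZE);    // always 64 bits
        // Arithmetic is specified too: int overflow wraps around.
        System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE); // prints "true"
    }
}
```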
Today, the bytecodes are translated into machine code by the just-in-time compiler.

Multithreaded

The benefits of multithreading are better interactive responsiveness and real-time behavior. If you have ever tried to do multithreading in another language, you will be pleasantly surprised at how easy it is in Java. Threads in Java also can take advantage of multiprocessor systems if the base operating system does so. On the downside, thread implementations on the major platforms differ widely, and Java makes no effort to be platform independent in this regard. Only the code for calling multithreading remains the same across machines; Java offloads the implementation of multithreading to the underlying operating system or a thread library. Nonetheless, the ease of multithreading is one of the main reasons why Java is such an appealing language for server-side development.
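As a minimal sketch of how little ceremony Java threads require (our own example, with hypothetical names): two threads increment a shared counter, and the language-level synchronized block provides the locking.

```java
// ThreadDemo: a hypothetical sketch of Java's built-in threading.
// Two threads increment a shared counter; the synchronized block is
// part of the language, and the same calls work on every platform.
public class ThreadDemo {
    private static int counter;
    private static final Object lock = new Object();

    static int runDemo() {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (lock) { // language-level mutual exclusion
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join(); // the thread implementation itself is the OS's job
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints 20000
    }
}
```

Note that only the calls are portable: how the two threads are scheduled is decided by the underlying operating system, exactly as the text describes.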
The NIH budget

Introduction

Federal funding for biomedical research in the United States has fueled discoveries that have advanced our understanding of human disease, led to novel and effective diagnostic tools and therapies, and made our research enterprise an international paragon. Although it was not the original intent, this investment, through the National Institutes of Health (NIH), has also become an essential source of support for academic medical centers, providing funds for faculty and staff salaries, operational expenses, and even capital improvements related to research that can no longer be supported by clinical income. Until approximately 20 years ago, clinical income often subsidized research, but managed care, increased scrutiny and efficiency in the management of clinical expenses, and reductions in federal support for teaching hospitals have rendered clinical margins insufficient to support the research mission. Although some may see institution building as an inappropriate use of NIH funds, a consistent, productive biomedical research enterprise requires a solid infrastructure.

Ensuring durable federal support for such research has not, however, been without tribulations. As with all line items in the federal budget, NIH funding is subject to the vicissitudes of the political process, and intermittent periods of growth have been followed by periods of decline. Some argue that funding cycles refresh the research enterprise, eliminating through competition investigators whose work is not of the highest quality.
Though not as sanguine about their purposes or consequences, the academic medical community has accepted these cycles and works to find ways to dampen the effects of downturns on research programs and institutional stability.

Redefining the concept of comprehensive budget management

Budgeting originated in eighteenth-century England, where it was first applied in government departments; its purpose was chiefly to control the king's power to tax and thereby to limit government spending. The budget was then developed further in the United States, where public budgeting in small American towns played an important role in the establishment of the national budget system. Inspired by government budgeting, the concept of budget management was subsequently taken up by large American companies for use in business management.

Later, the concept of comprehensive budget management emerged. Budget management uses the budget as the main thread for controlling the financial and non-financial resources of an organization's internal departments, reflected in a series of evaluation activities, in order to raise the level and efficiency of management. Since the twentieth century, comprehensive budget management has been applied successfully, with good results, by many large American enterprises such as General Electric, DuPont, and General Motors. It soon became a standard operating procedure in large modern industrial and commercial enterprises.
From its initial role in planning and coordinating production, comprehensive budgeting has grown into an integrated strategic mechanism of enterprise management that now also serves control, motivation, and evaluation functions, and it sits at the heart of the internal control system.

Comprehensive budget management reflects three features. First, it enhances the organization's governance capacity and strengthens the contractual nature of organization and management. Second, it supplies the constraints and incentives the organization needs, playing the role that the separation of powers and price incentives play in the market. Finally, it links enterprise strategy with daily operations. To establish and perfect a modern enterprise system, a scientific budget management system must be set up; the comprehensive budget is not merely a modern form of enterprise budgeting but a set of control mechanisms integrating target setting, coordination, control, and assessment. Comprehensive budget management strengthens an enterprise's ability to adapt to market changes and resist risk, helps to streamline its management system and operating mechanism, and provides the most effective support system for realizing business strategy.

A comprehensive budget management system includes budget preparation, budget execution and monitoring, and budget assessment and performance evaluation. Budget preparation starts from strategy, shareholder requirements, and market conditions, and transforms strategy into daily operational performance indicators, with a series of quantified, specific forms and documents as the carrier of their realization. Budget execution and monitoring is the process of turning budget goals into reality: progress and results during execution are identified, and differences are decomposed accordingly.
Budget assessment and performance evaluation uses regular and ad hoc evaluations to analyze and decompose differences, make timely corrections, and apply appropriate incentives.

The Future of Biomedical Research

We have recently entered another period of stagnant funding for the NIH. Having doubled between 1998 and 2003, the NIH budget is expected to be $28.6 billion for fiscal year 2007, a 0.1 percent decrease from last year,1 or a 3.8 percent decrease after adjustment for inflation — the first true budgeted reduction in NIH support since 1970. Whereas national defense spending has reached approximately $1,600 per capita, federal spending for biomedical research now amounts to about $97 per capita — a rather modest investment in "advancing the health, safety, and well-being of our people."1 This downturn is more severe than any we have faced previously, since it comes on the heels of the doubling of the budget and threatens to erode the benefits of that investment. It takes many years for institutions to develop investigators skilled in modern research techniques and to build the costly, complicated infrastructure necessary for biomedical research. Rebuilding the investigator pool and the infrastructure after a downturn is expensive and time-consuming and weakens the benefits of prior funding. This situation is unlikely to improve anytime soon: the resources required for the war in Iraq and for hurricane relief, along with the erosion of the tax base by the current administration's fiscal policies, are expected to have long-term, far-reaching effects.

Most institutes within the NIH have quickly adopted policy changes to minimize the adverse consequences, including reducing the maximum grant term from five years to four years, eliminating cost-of-living increases, and capping the amounts of awards. These changes have important effects on currently funded research and the infrastructure that it requires.
Moreover, the future of biomedical research is also affected: NIH training grants represent a major source of support for postdoctoral and clinical fellows during their research experiences, and budget limitations affect not only available training slots but also the training climate. As it becomes increasingly difficult for established investigators to renew their grants, their frustration is transmitted to trainees, who increasingly opt for alternative career paths, shrinking the pipeline of future investigators.

Meanwhile, for more than 10 years, the pharmaceutical industry has been investing larger amounts in research and development than the federal government — $51.3 billion in fiscal year 2005,2 for instance, or 78 percent more than NIH funding that year. Fiscal conservatives may view this industry investment as an appropriate, market-driven solution that should suffice and that does not justify additional government funding for biomedical research. However, the lion's share of industry funds is applied to drug development, especially clinical trials, rather than to fundamental research and is targeted to applications that are first and foremost of value to the industry. Federal funding has traditionally targeted a broad range of investigator-initiated research, from studies of molecular mechanisms of disease to population-based studies of disease prevalence, promoting an unrestricted environment of biomedical discovery that serves as the basis for industry-driven development. These approaches are complementary, and both have served society well.

How, then, can we ensure that funding for biomedical research is maintained at adequate levels for the foreseeable future?
Korn and colleagues have argued that stability and quality can be ensured by maintaining overall funding at an annual growth rate of 8 to 9 percent (unadjusted for inflation).3 They base their conclusion on the costs associated with six basic goals, which I endorse: preserving the integrity of the merit and peer-review process, which requires that funding levels not fall below the 30th percentile success rate; maintaining a stable pool of new investigators; sustaining commitments to continuing awards; preserving the capacity of institutions that receive grants by minimizing cost-sharing with the federal government (e.g., for lease costs or animal care); recognizing the continuous growth of new research technologies; and maintaining a robust intramural NIH research program. I would, however, modify the annual required growth rate to 5 to 6 percent real growth plus inflation: the annual growth rate over the past 30 years has been approximately 10 percent, which reflects an annual average real growth rate of 5.2 percent and an average inflation rate of 4.8 percent (ranging from 1.1 to 13.3 percent).

Unfortunately, the federal government probably cannot accommodate this growth rate under its current fiscal constraints. So maintaining, by statute, a stable base level of funding equivalent to the fiscal year 2006 budget, with annual inflationary adjustments, seems to me a reasonable starting point. Congress may then choose to allocate additional resources annually, subject to availability, aiming for an annual real growth rate of 5 to 6 percent.
Alternatively, to avoid politicization of the flow of funds and their targets, a dedicated tax could be imposed on consumer products that threaten human health — such as fast foods, tobacco, and alcohol — and used to maintain the biomedical research infrastructure by a formulaic allocation, much as the gasoline tax is used to maintain the federal highway infrastructure.

The NIH can optimize the use of these funds by limiting the size and duration of awards as well as the number of awards per investigator. It might also consider shifting the target of grants. Whereas other countries often provide funding as an award for work accomplished before the application, the NIH theoretically funds proposed work — though in reality, the peer-review process effectively requires that a hypothesis virtually be proved correct before funding is approved. Within the NIH intramural research program, funding levels for individual laboratories are often decided on the basis of accomplishments during the previous cycle, so there is already a precedent that can be applied to the extramural program. Of course, new investigators would need to be reviewed differently to ensure appropriate allocation of funds to these promising members of the research community who have no or limited previous research accomplishments.

Even with such changes, however, it would be preferable for academic medical centers to cease relying so heavily on the NIH for research funding. In addition to having investigators seek funding from not-for-profit organizations and from industry, I believe that centers should encourage major nongovernmental funding organizations to consolidate their resources into a durable pool of support for the best research proposals in the life sciences.
In addition, individual centers should encourage generous donors to support unrestricted research endowments designed to fund translational and clinical research programs within the medical center, or to contribute to a national pool linked with support from industry to establish a national endowment for funding translational research and drug or device development within academic medical centers. Such promotion of later-phase research within academic medical centers could enhance the value of the intellectual property derived from it, financial benefits from which could, in turn, be used to establish research endowments within the medical centers.

The federal government might also consider alternative ways to fund the NIH budget that are independent of allocations from the tax base. One approach might include seeking support from industries whose products contribute to the burden of disease, providing tax credits as an incentive for their contribution. These resources could be used to establish an independently managed national fund, which could be used to ensure adequate support for biomedical research without the funding gaps or oscillations that currently plague the process. In this scenario, unused money from any fiscal year would be retained in the fund, with the goal of achieving self-sustained growth.

Whatever mechanisms are ultimately chosen, it seems clear that new methods of support must be developed if biomedical research is to continue to thrive in the United States. The goal of a durable, steady stream of support for research in the life sciences has never been more pressing, since the research derived from that support has never promised greater benefits. The fate of life-sciences research should not be consigned to the political winds of Washington.

Source: Joseph Loscalzo. The NIH Budget. The New England Journal of Medicine, April 20, 2006, Vol. 354(16): 1665-1667.
The Competitive Advantage of Industrial Clusters: The Case of the Dalian Software Park, China

Weilin Zhao, Chihiro Watanabe, Charla Griffy-Brown [J]. Marketing Science, 2009(2): 123-125.

Abstract: This paper explores the competitive advantage of China's software parks with a view to promoting industrial development. Industrial clusters are deeply embedded in local institutional systems and therefore possess particular competitive advantages. The case of the Dalian Software Park in China is analyzed qualitatively on the basis of Porter's "diamond" model and a SWOT analysis. An industrial cluster comprises a set of geographically concentrated firms rooted in a local institutional system of local government, industry, and academia, from which it draws substantial resources and thereby gains competitive advantage for industrial economic development. To successfully navigate the transition of China's economic paradigm from mass production to new-product development, it is essential to continuously strengthen the competitive advantage of industrial clusters and to promote industrial and regional economic development.

Keywords: competitive advantage; industrial cluster; local institutional system; Dalian Software Park; China; science park; innovation; regional development

Industrial clusters

The industrial cluster is a frontier concept of economic development popularized by Porter [1]. As a recognized expert in global economic strategy, he pointed out the role of industrial clusters in promoting regional economic development. He wrote that the concept of the cluster, "or geographic concentrations of companies, suppliers, and institutions associated with a particular industry, has become a new element in how companies and governments think about and assess local competitive advantage and shape public policy." However, he never gave a precise definition of the industrial cluster. Progress has recently been made in the literature reviewed in [2] and [3], which examines industrial clusters and identifies "geographic concentration" as the advantage of an industry. "Geographic concentration" defines a key and distinctive fundamental property of industrial clusters. An industrial cluster is formed by many firms concentrated in a particular region; they usually share markets, suppliers, trading partners, educational institutions, and intangibles such as knowledge and information, and likewise face similar opportunities and threats. There are many models of industrial cluster development around the world. For example, Silicon Valley in California and Route 128 in Massachusetts are well-known industrial clusters: the former is famous for microelectronics, biotechnology, and its venture capital market, while the latter is renowned for software, computers, and communications hardware [4].
Analysis of Continuous Prestressed Concrete Beams

Chris Burgoyne

March 26, 2005

1. Introduction

This conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations.

There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded; in hyperstatic structures, various states of self-stress can be induced by altering the cable profile, and all of these factors get influenced by creep and thermal effects. How were these problems recognised and how have they been tackled?

Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to behave as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964).

It was Freyssinet's observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig. 1).
Hitherto, it had been assumed that concrete had a Young's modulus which remained fixed, but he recognised that the deferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and also that high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssinet's early prestressed concrete work is written elsewhere.

Figure 1: Boutiron Bridge, Vichy

Figure 2: Eugène Freyssinet

At about the same time work was underway on creep at the BRE laboratory in England ((Glanville 1930) and (1933)). It is debatable which man should be given credit for the discovery of creep, but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete.

There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. It is also reflected, to a certain extent, in the various codes of practice.

Permissible stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensioned reinforcement, which attracts prestress due to creep, is anathema. This philosophy derives directly from Freyssinet's logic and is primarily a working stress concept.

Ultimate strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete.
Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate load concept. The idea of partial prestressing derives from these ideas.

The Load-Balancing philosophy, introduced by T.Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down.

These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.

2. Section design

From the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable stress design, and that at the ultimate load under an ultimate strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other.

For prestressed concrete, those ideas do not hold, since the structure is highly stressed, even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits.
The designer has considerable freedom to vary both the working load and ultimate load capacities independently; both need to be checked.

A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case. The critical sections are normally, but not always, the mid-span and the sections over piers, but other sections may become critical when the cable profile has to be determined.

The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential. If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, while M is the applied moment, then

f_t <= P/A + Pe/Z - M/Z <= f_c

where f_t and f_c are the permissible stresses in tension and compression. Thus, for any combination of P and M, the designer already has four inequalities to deal with.

The prestressing force differs over time, due to creep losses, and a designer is usually faced with at least three combinations of prestressing force and moment:

• the applied moment at the time the prestress is first applied, before creep losses occur,
• the maximum applied moment after creep losses, and
• the minimum applied moment after creep losses.

Figure 4: Gustave Magnel

Other combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate out the design of the cross-section from the design of the prestress.
By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form:

Z >= (Moment range) / (Permissible stress range)

These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined.

Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig. 4). The stress limits can all be rearranged into the form:

e <= -(Z/A) + (fZ + M)(1/P)

By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines will be formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e. The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses.

Plotting the eccentricity on the vertical axis allows direct comparison with the cross-section, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as well.

A good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum and minimum bending moments, but keeping the range the same, raises and lowers the feasible region. If the moments become more sagging the feasible region gets lower in the beam.

In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig. 5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span. Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and tensile stresses in the bottom fibre.
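As a worked illustration of the section-sizing inequality above (the numbers are invented for illustration and are not taken from the paper):

```latex
% Minimum section modulus from the moment range alone.
% Assume a moment range \Delta M = M_{\max} - M_{\min} = 1200\ \mathrm{kN\,m}
% and permissible stresses f_c = 20\ \mathrm{N/mm^2}, f_t = -2\ \mathrm{N/mm^2},
% giving a permissible stress range of 22\ \mathrm{N/mm^2}.
Z \ge \frac{\Delta M}{f_c - f_t}
  = \frac{1200 \times 10^6\ \mathrm{N\,mm}}{22\ \mathrm{N/mm^2}}
  \approx 5.5 \times 10^7\ \mathrm{mm^3}
```

Any trial cross-section whose elastic section modulus falls below this value will breach a stress limit for some load case, whatever prestressing force is chosen; this is what allows the section to be sized before the prestress is designed.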
However, it takes only a modest further increase in moment before compressive stresses govern in the bottom fibre under the maximum moment. Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.

3. Continuous beams

The design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate, which means that the designer has to consider not only a critical section but also the behaviour of the beam as a whole. These are due to the interaction of a number of factors, such as creep, temperature effects and construction sequence effects. It is the development of these ideas which forms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and nomenclature, were already in use, but to modern eyes the concentration on hand analysis techniques was unusual, and one of the principal concerns seems to have been the difficulty of estimating losses of prestressing force.

3.1 Secondary Moments

A prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, the movement causes a redistribution of the support reactions, which in turn induces additional moments. These are often termed Secondary Moments, but they are not always small, or Parasitic Moments, but they are not always bad.

Freyssinet's bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted in (Fig. 6).
The same principles were applied in the later and larger beams built over the same river. Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig. 7), in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary moments are large: about 50% of the hogging moment at the central support caused by dead and live load.

The secondary moments cannot be found until the profile is known, but the cable cannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, which is a profile that causes no secondary moments; es and ep thus coincide. Any line of thrust is itself a concordant profile.

The designer is then faced with a slightly simpler problem: a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but it is helped by the fact that the bending moment diagram that results from any load applied to the beam will itself be a concordant profile for a cable of constant force. Such loads are termed notional loads, to distinguish them from the real loads on the structure. Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.

3.2 Temperature effects

Temperature variations apply to all structures, but the effect on prestressed concrete beams can be more pronounced than in other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991).
The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes curvature, which leads to deflection in all beams and reactant moments in continuous beams; while the third causes a self-equilibrating set of stresses across the cross-section.

The reactant moments can be calculated and allowed for, but it is the self-equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass, which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth, which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm while the surface is cool, such as at night, then quite large tensile stresses can be developed on the top and bottom surfaces. However, they only penetrate a very short distance into the concrete and the potential crack width is very small. It can be very expensive to overcome the tensile stress by changing the section or the prestress.
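The three-component split can be illustrated numerically. The sketch below takes a hypothetical temperature profile (a warm top surface over a cool core) on a rectangular section of unit width and splits it into the uniform, linearly varying and self-equilibrating parts; the residual part carries no net axial force or moment, which is what makes it "self-equilibrating".

```python
# Split of a through-depth temperature profile T(y) into uniform,
# linearly varying (curvature) and self-equilibrating components,
# for a rectangular section of unit width.  The profile is a
# hypothetical example, not data from the paper.
import numpy as np

def trapz(f, x):
    """Composite trapezoidal integration of samples f over points x."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

h = 1.0                                    # beam depth, m
y = np.linspace(-h/2, h/2, 201)            # measured from the centroid
T = 5.0 * np.exp(-((y - h/2) / 0.15)**2)   # deg C: warm top, cool core

uniform = trapz(T, y) / h                  # drives axial expansion
I = h**3 / 12.0                            # second moment, unit width
linear = (trapz(T * y, y) / I) * y         # drives curvature
self_eq = T - uniform - linear             # locked-in component

# The self-equilibrating part carries (almost) no net force or moment:
print(trapz(self_eq, y), trapz(self_eq * y, y))
```

Both printed residuals are numerically negligible, confirming that all of the axial and bending effects have been assigned to the first two components.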
Foreign Literature Translation (English original and Chinese translation)

English original

Stamping Technology

Introduction

In today's fierce market competition, how soon a product reaches the market is often the key to its success or failure. The mould is a tool for producing products with high quality and high efficiency, and mould development takes up the major part of the product development cycle. Customers therefore demand ever shorter mould development cycles; many put the delivery date first, ahead of quality and price. How to shorten the machining cycle while guaranteeing quality and controlling cost is therefore a problem worth serious consideration. High-speed mould machining is an advanced manufacturing technology and has become an important development direction, widely applied in the aerospace, automotive, machinery and other industries; it can improve the comprehensive benefit and competitiveness of the manufacturing industry. Researching and establishing a mould process database, to provide production enterprises with the high-speed cutting data they urgently need, is of great significance to the promotion of high-speed machining technology. The main goal of this article is to build such a database for stamping die machining: by combining the actual cutting tools, workpieces and machine tools of a mould manufacturer with the high-speed cutting examples, process parameters and experience the enterprise has itself accumulated, and storing these data selectively, a high-speed cutting database can not only save a great deal of manpower, material and financial resources, but also guide high-speed machining practice, improving machining efficiency, reducing tooling cost and obtaining higher economic benefits.

1.
The concept, characteristics and application of stamping

Stamping is a pressure processing method that uses a die installed on a press to apply pressure to a material, causing it to separate or deform plastically, thereby obtaining the desired part (commonly referred to as a stamping or stamped part). Stamping is usually cold working of the material at room temperature, mainly using sheet metal to form the required parts, so it is also called cold stamping or sheet metal stamping. Stamping is one of the main methods of pressure processing or plastic working of materials, and belongs to material forming engineering.

The die used in stamping is called a stamping die, or simply a die. Dies are special tools for the batch processing of materials (metal or non-metallic) into the required stampings. The die is critical in stamping: without a die that meets the requirements, batch stamping production is difficult; without an advanced die, an advanced stamping process cannot be realized. The stamping process and die, the stamping equipment and the stamping material constitute the three elements of stamping; only when they are combined can stampings be obtained.

Compared with other methods of machining and plastic working, stamping has many unique advantages, both technical and economic. Its main features are as follows:

(1) The stamping process has high production efficiency, is easy to operate, and lends itself to mechanization and automation. This is because stamping is accomplished by means of dies and presses.
The number of strokes of an ordinary press can reach several dozen per minute, and a high-speed press can reach hundreds or even thousands of strokes per minute, with each press stroke potentially producing a part.

(2) Since the die ensures the dimensional and shape accuracy of the stamping, generally does not damage its surface quality, and generally has a long life, stamping quality is stable and interchangeability is good; the parts are "exactly alike".

(3) Stamping can produce parts with a wide range of sizes and shapes, from components as small as those of clocks and watches to parts as large as automobile longitudinal beams and body panels. Together with the cold-work hardening of the material during stamping, this gives stampings high strength and stiffness.

(4) Stamping generally produces no chips, consumes little material, and requires no heating equipment. It is therefore a material-saving and energy-saving processing method, and the cost of stampings is low.

However, the dies used for stamping are generally specialized; sometimes a complex part requires several sets of dies for forming, and die manufacture demands high precision and high technical skill, making dies technology-intensive products. The advantages of stamping can therefore only be fully realized when the production volume of stampings is large, so as to obtain better economic benefits.

Stamping is widely used in modern industrial production, especially in mass production. A considerable number of industrial sectors increasingly use stamping to process product components, for example automobiles, agricultural machinery, instruments, meters, electronics, aviation, aerospace, home appliances and light industry. In these sectors the proportion of stamped parts is quite large: mostly 60% or more, and in some cases more than 90%.
Many parts that were formerly made by forging, casting and cutting are now mostly replaced by lighter, stiffer stampings. It can therefore be said that without the stamping process, many industrial sectors would find it difficult to raise production efficiency and product quality, reduce production cost, and replace products quickly.

2. Basic stamping processes and dies

Because of the wide variety of stampings and the differing shapes, sizes and precision requirements of the various parts, the stamping processes used in production are also varied. In summary, they can be divided into two major categories: separation processes and forming processes. A separation process makes the blank separate along a certain contour line to obtain a stamping (commonly called a blanked part) of a certain shape, size and section quality; a forming process produces a stamping of a certain shape and size by plastic deformation of the blank without fracture.

The above two categories can be divided, according to the basic mode of deformation, into four basic processes: blanking, bending, deep drawing and forming. Each basic process in turn includes several single processes.

In actual production, when the production volume of a stamping is large, its size small and its tolerances tight, it is uneconomical, or even difficult, to meet the requirements with single-process stamping. In this case a concentrated scheme is mostly used, that is, two or more single processes are combined in one die.
The different combination methods are known as compound, progressive, and compound-progressive stamping.

Compound stamping: a combination of two or more different single processes at the same station of the die, completed in one press stroke.

Progressive stamping: a combination of two or more different single processes carried out at successive stations of the same die, one station per press stroke.

Compound-progressive stamping: a die combining both compound and progressive arrangements.

There are many types of die structure. By the nature of the process, dies can be divided into blanking dies, bending dies, drawing dies, forming dies and so on; by the combination of processes, into single-process dies, compound dies and progressive dies. Whatever the type, a die can be regarded as consisting of two parts: the upper die and the lower die. The lower die is fixed on the press table or bolster plate and is the stationary part of the die; the upper die is attached to the press slide and moves with it. During work, the blank is positioned on the lower die surface by locating parts, and the press slide drives the upper die downwards. The blank is separated or plastically deformed under the action of the working parts of the die (the punch and the die cavity) to obtain a stamping of the required shape and size. When the upper die rises, the unloading and ejecting devices of the die remove or push the stamping or scrap from the punch and die cavity, ready for the next stamping cycle.

3. Current status and development direction of stamping technology

With the continuous advance of science and technology and the rapid development of industrial production, many new technologies, new processes, new equipment and new materials keep emerging, driving constant innovation and development in stamping technology.
Its main trends and directions of development are as follows:

(1) Stamping-forming theory and the stamping process. The study of stamping-forming theory is the basis for improving stamping technology. Research on stamping-forming theory currently receives great attention at home and abroad, and significant progress has been made in the study of the stamping behaviour of materials, stress and strain analysis of the stamping process, the deformation laws of sheet metal, and the interaction between the blank and the die. In particular, with the rapid development of computer technology and the further improvement of plastic-deformation theory, computer simulation of the plastic-forming process has been applied at home and abroad in recent years, namely the use of the finite element method (FEM) and other analytical methods to simulate the plastic forming of metals. From the analysis results the designer can predict the feasibility of a given process scheme and the quality problems it may cause, and by selecting and modifying the relevant parameters on the computer, the process and die design can be optimized. This saves the cost of expensive trials and shortens the development cycle.

Research into, and promotion of, stamping technologies that can increase productivity and product quality, reduce cost and expand the range of application of the stamping process is also one of the development directions of stamping technology. New precise, efficient and economical stamping processes have emerged at home and abroad, such as fine blanking, soft-die forming, high-energy-rate and high-speed forming, and dieless multi-point forming. Among them, fine blanking is an effective method for improving the quality of blanked parts and expands the scope of stamping: at present the thickness of fine-blanked parts can reach 25 mm, and their precision can reach IT6~7. The use of liquid, rubber, polyurethane and similar media as the die is another such process.
Flexible-die (soft-die) forming can process materials that are difficult to work by ordinary methods, and parts of complex shape, with obvious economic benefit under suitable production conditions. High-energy-rate forming methods such as explosive forming are of real practical value for sheet-metal parts that are large, complex in shape, produced in small batches, and of high strength and precision. Superplastic forming of metal materials can replace several ordinary stamping operations with a single forming operation, an outstanding advantage for machining complex shapes and large sheet-metal parts. The dieless multi-point forming process is an advanced technology that forms sheet-metal surfaces with a group of height-adjustable punches in place of the traditional die. An internationally leading dieless multi-point forming machine has been independently designed and built; because the stress state and deformation path can be varied, the forming limit of the material is raised, while repeated forming can eliminate residual stresses in the material, giving springback-free forming. The dieless multi-point forming system takes CAD/CAM/CAE technology as its main means to realize the rapid, economical and automated forming of three-dimensional surfaces.

(2) Dies are the basic condition for stamping production.
In die design and manufacture, development is currently proceeding in two directions. On the one hand, to meet the needs of modern high-volume production (automation, precision, safety), dies are developing towards high efficiency, high precision, long life, multiple stations and multiple functions; matching this, new die materials and heat-treatment technologies, various high-efficiency, precision, CNC automatic die-making machine tools and testing equipment, and die CAD/CAM technology are also developing rapidly. On the other hand, to meet the needs of product change-over, trial production and small-batch production, zinc-alloy dies, polyurethane rubber dies, sheet dies, steel-strip dies, combination dies and other simple dies, and their manufacturing technologies, have also developed rapidly.

Precision, high-efficiency multi-station and multi-function progressive dies and large complex automotive panel dies represent the technical level of modern dies. At present the precision of progressive dies with more than 50 stations can reach 2 microns, and multi-function progressive dies can complete not only the stamping process but also welding, assembly and other operations. China can now design and manufacture progressive dies of international level, with precision of 2 to 5 microns, feed-pitch accuracy of 2 to 3 microns, and a total life of about 100 million strokes. China's major automotive die enterprises can produce complete sets of car panel dies and have basically reached international level in design and manufacturing methods and means; the die structure and function are also close to international level, but there is still a gap compared with foreign countries in manufacturing quality, accuracy, manufacturing cycle and cost.

4.
Stamping die standardization and specialized production

The standardization and specialized production of dies has been widely recognized by the die industry. Because dies are produced singly and in small quantities, die parts combine a certain complexity and precision with a certain typicality of structure. Only by standardizing dies can the production of dies and die parts be specialized and commercialized, thereby reducing die cost, improving die quality and shortening the manufacturing cycle. At present the standardization of dies in the advanced industrial countries has reached 70% to 80%: die factories need only design and manufacture the working parts, and most other die parts are purchased from standard-parts factories, which greatly increases productivity. The degree of specialization of die manufacturing plants keeps rising and the division of labour grows ever finer; there are now, for example, die factories, die-set factories and heat-treatment plants, and some die factories even specialize in a single type of die, such as blanking dies or bending dies, which is all the more conducive to raising the level of manufacture and shortening the manufacturing cycle. China's die standardization and specialized production have also seen considerable development in recent years. Besides the increase in the number of specialized standard-parts manufacturers, the range of standard parts has expanded and their accuracy has improved. However, the overall situation still cannot meet the requirements of the die industry's development, mainly in that the level of standardization is not high (usually below 40%), the varieties and specifications of standard parts are few, most standard-parts manufacturers have not achieved large-scale production, and the quality of standard parts still presents many problems.
In addition, the sales, supply and service of standard-parts production have yet to be further improved.

Chinese translation

Stamping Die Technology

Preface

In today's fierce market competition, how soon a product reaches the market is often the key to its success or failure.
Appendix 1: Foreign literature original and translation

Original:

An evaluation of NDT methods for the location and sizing of forging discontinuities

In selecting an NDT method for flaw detection in forgings, a number of variables must be considered:

a) the type of discontinuity to be assessed;
b) the method to be used for detection and evaluation; and
c) the variables associated with the forging itself.

The variables in item a) govern the location of the discontinuity within the forging and its orientation with respect to a particular surface. Item b) could include a considerable array of NDT methods, but for the purpose of this paper only the six most widely used are considered: visual testing (VT), penetrant testing (PT), magnetic particle testing (MT), eddy current testing (ET), radiographic testing (RT) and ultrasonic testing (UT). Under item c), the variables include such things as the component's condition, its geometry, and access for inspection.

a) Forging discontinuities

The location of the discontinuity will have a significant influence on the selection of the NDT method to be used; to aid this selection, discontinuities are grouped into three categories:

1. open to the surface: laps, seams, bursts, slugs, cracks and inclusions;
2. slightly subsurface: seams, stringers, inclusions and grain structure variations;
3.
internal: stringers, bursts, laminations, grain structure, inclusions and piping.

A brief review of these terms may be helpful:

Lap: folded metal, flattened into the surface but not fused with it.
Seam: a linear flaw due to oxidized blow holes or ingot splashes, elongated by hot working.
Burst: a rupture caused by failure in plastic deformation, from processing at too low a temperature or from excessive working of the metal.
Stringers: a bar-stock defect, due to non-metallic inclusions being squeezed out into long, thin strings.
Lamination: a planar defect aligned parallel to the surface, originating in the original ingot from rolled-out piping.
Cracks: transgranular failures, due to localized stresses resulting from non-uniform heating or cooling and non-plastic deformation.
Inclusions: impurities, such as slag, oxides and sulphides, often from the original molten stage in forming the billet used for forging.
Grain structure: depending on the extent of working (deformation and recrystallisation), grains can be as small as 0.5 mm or as large as 10 mm.
Piping: a cavity at the centre of the ingot or billet, caused by shrinkage during solidification.
Slug: a piece of foreign matter that has been pressed or rolled into the surface of the material.

b) The NDT methods

VT: visual testing is the oldest of the NDT methods, but still valid and widely used today. The system is based upon observation, usually by a human observer, but now increasingly by digital/video cameras which use pattern recognition to locate dissimilar areas on a surface. The sensitivity depends upon the method, but typically a good observer with simple visual aids can resolve 0.5 mm differences. Aids include magnifying glasses (up to x10), microscopes (up to x100), and fibre-optic borescopes and endoscopes for viewing internal details in hollow or complex sections.
The system is used for surface inspection only, with costs in the range $4 to $4000.

PT: the surface is covered with a brightly coloured oil (typically red or fluorescent), which penetrates any surface openings. After removal of the excess, an absorbent white powder is applied, which draws any trapped oil back to the surface, creating an indication of the presence of the surface opening. This process, like visual inspection, also requires visual acuity, but the indications are "enhanced" by the process, since "bleed-out" spreads the visual image. Costs can range from as little as $4 for a couple of cans to $8000 for a process line. Both VT and PT are surface inspection systems only and will therefore detect only those discontinuities that have a definite surface opening. Surface cleanliness is very important, particularly with PT.

MT: ferromagnetic materials carrying a large flux density retain it internally, with little external evidence other than at the poles. Any discontinuity in the material disturbs this uniform flux and creates a small "leakage" field at the site of the discontinuity. This leakage can be detected because finely divided ferromagnetic particles collect at the site, creating an indication. As with PT, the particles can be coloured to increase contrast, so that when viewed under suitable lighting they create a clear visual image of the discontinuity. However, unlike PT, the leakage can pass through thin layers of paint or plating, so the discontinuity does not have to be open to the surface. The system can therefore detect surface AND slightly subsurface discontinuities. However, this is only possible in ferromagnetic materials, such as iron, mild and tool steel, nickel, cobalt and martensitic stainless steel. It will not operate on paramagnetic or diamagnetic materials, such as copper, aluminium and austenitic stainless steel.
A small electromagnet can cost as little as $200, but a large bench-type machine can cost up to $10 000, and the cost of electricity can be substantial.

ET: direct current flowing in a coil sets up a longitudinal magnetic field through the coil, and the coil exhibits a particular resistance to the flow. If the current is alternating, a further effect, inductive reactance, adds to this resistance, the total being the impedance. The impedance also causes a lag between the current and the voltage, called a phase shift. This shift and the impedance are characteristics of the coil.

If the coil is now placed close to a conducting surface, the reversing magnetic field induces a reversing current in the conductor (the eddy current), which opposes the inducing field. This opposition alters the impedance of the coil, and a suitable instrument can detect these changes (in phase angle and/or impedance). For a given discontinuity-free surface, a specific alteration will be present, which can be zeroed. If the coil now passes over a discontinuity, a change in induction occurs which is registered by the instrument. However, a change in the conductivity of the material will also affect the induction, as will changes in permeability. Thus non-uniform heat treatment, segregation, and inhomogeneities in material composition and structure will also affect the induction and create an "indication". Another critical factor is the distance between the coil and the test surface. This "lift-off" can be used in a positive way, to determine coating or paint thicknesses on conducting materials; but equally, differences in the coil/specimen gap can result in non-relevant signals. The system can therefore detect surface AND slightly subsurface discontinuities. However, this is only possible in conducting materials, and the proximity of the test coil to the test surface is critical.
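The impedance and phase shift described above follow directly from the coil's resistance and inductive reactance. A small sketch, using hypothetical coil values (10 ohm, 100 uH, driven at 100 kHz) rather than any real probe specification:

```python
import math

def coil_impedance(R, L, f):
    """Impedance magnitude (ohm) and phase angle (deg) of a coil with
    resistance R (ohm) and inductance L (henry) at frequency f (Hz)."""
    X_L = 2 * math.pi * f * L                   # inductive reactance
    Z = math.hypot(R, X_L)                      # total impedance
    phase = math.degrees(math.atan2(X_L, R))    # current lags voltage
    return Z, X_L, phase

# Hypothetical probe coil
Z, X_L, phase = coil_impedance(10.0, 100e-6, 100e3)
print(f"X_L = {X_L:.1f} ohm, |Z| = {Z:.1f} ohm, phase = {phase:.1f} deg")
```

An instrument that tracks both |Z| and the phase angle can separate the different causes of an indication (discontinuity, conductivity change, lift-off), which is why the text notes that both quantities are monitored.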
Because the coil must follow the test surface closely, special probes are usually designed to follow the contours of any component other than flat plate. A small eddy current machine can cost as little as $2000, but a large automated machine can cost up to $20 000.

RT: short-wavelength electromagnetic radiation will pass through many materials, depending upon density and thickness, and then create a range of exposures on film or a fluoroscopic screen, presenting a visual image of the internal composition of the item. Differences in absorption within the material, due to such things as gas holes, cracks and bursts, create photographic density differences on the film or detector, which can be interpreted by trained personnel. The source of radiation can be an X-ray tube or a gamma source (such as iridium or cobalt), and the images can be produced on film or as real-time images on fluoroscopic screens. Defect orientation is a vital factor in radiography, since it is thickness differences that the process detects. Hence a lamination-type defect parallel to the film would be almost impossible to detect, while a crack perpendicular to the film would almost certainly be detected. It is therefore often the case that a single component has to be radiographed from more than one direction in order to detect most defects. Finally, the radiation used is highly hazardous, and any environment in which it is used must be suitably shielded to prevent exposure of the operator. As well as shielding, the use of X- or gamma rays also requires monitors, alarms, interlocks and personal dosimetry systems, which, along with the film itself, add to the cost. A basic X-ray set-up would cost around $10 000, and with ancillary equipment and film could cost $3000 per year to run.

UT: at an interface between materials of differing acoustic impedance, a sound wave will have a proportion reflected and the remainder transmitted.
Thus a gas hole or crack in a forging will reflect a sound beam, because of the large difference in acoustic impedance between it and the metal structure containing it. Since ultrasound travels in a given material at a known (predictable) velocity, the distance to a reflector is a direct function of the time of flight of the pulse of sound, and its location can therefore be estimated. Since the amplitude of the returning signal is also related to the size of the reflector, an approximation can be made of the extent of the reflector, in terms of length, through-wall extent and width. The data can be presented as an 'A' scan on a cathode ray tube (requiring skilled interpretation) or as a 'B' or 'C' scan, where the data are plotted on printers or strip charts as a permanent record. Depths of penetration can be adjusted (by calibration and probe selection) from 10 mm to 3 metres in suitable fine-grained material; however, cast or large-grained forged material can attenuate signals to the extent that it is untestable. A typical portable flaw detector and probes would cost around $5000; a fully automated 'C' scan immersion system could cost $20 000.

c) The variables associated with the forging

1. Surface condition: for VT and PT, surfaces better than 6.3 um Ra yield the best results. For MT a similar situation exists, since a confusing background can result from rough surfaces. ET also prefers a smooth surface, since 'lift-off' effects could be unacceptable. For RT, a surface roughness exceeding 1% of material thickness could result in a significant loss of sensitivity. For UT, a suitably viscous 'couplant' can assist sound transmission, but entry-surface 'noise' on the timebase, and attenuation, reduce sensitivity.

2. Geometry: flat surfaces are the simplest to inspect by any method. PT, being a liquid process, is least influenced by geometry.
MT requires that the flux be at 90 degrees to the discontinuity, so curved surfaces and hollow sections present particular problems. VT may require special access equipment, and ET will need specially designed probes for curved or irregular surfaces. Since RT relies on absorption differences, variations in thickness due to curvature result in large variations in photographic density and a consequent loss of film contrast. In UT the probe transmits best when its whole face is in direct contact with the surface; any curvature results in "rocking" of the probe, with a consequent loss of "coupling" and reduced signal amplitude.

3. Complexity: forged bar, billet, rod and plate offer simple shapes for inspection, but an aircraft landing gear is an entirely different matter. PT is the least influenced by complex shapes when the water-washable system is used. VT will require longer inspection periods and aids such as mirrors and borescopes. For MT, the more complex the shape, the more difficult it is to arrive at an overall procedure, and individual flux/current settings are needed for the various sections. ET will again require specially shaped probes, and RT a larger number of film exposures and angled shots. UT will need careful planning to ensure complete coverage, and may not be possible if access is limited.

4. Thickness: VT, ET, PT and MT are all unaffected by thickness, since they are surface methods. RT has an approximate thickness limit of 300 mm in steel and, at 2% sensitivity (a typical value), will only record discontinuities of 6 mm minimum section in the plane of the radiation. UT is capable of inspecting beyond 2 metres in fine-grained material, but is less effective below about 10 mm.

5. Discontinuity orientation: VT and PT are unaffected by orientation. In MT, for maximum sensitivity the flux should be at right angles to the discontinuity.
ET requires that the discontinuity be at right angles to the coil windings, and RT has its maximum sensitivity when the discontinuity lies parallel to the radiation beam. UT has its maximum response when the reflector is at right angles to the sound beam.

Translation:

An evaluation of NDT methods for the location and sizing of forging discontinuities

When detecting flaws in forgings, the selection of an NDT method must take account of the following: a) the type of discontinuity to be assessed; b) the method to be used for detection and evaluation; and c) the variables associated with the forging itself.
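The time-of-flight relation in the UT description (section b) above) reduces to a one-line calculation: the pulse travels to the reflector and back, so depth = velocity x time / 2. A sketch, assuming a typical longitudinal velocity for fine-grained steel of about 5900 m/s (a textbook figure, not a value from this paper):

```python
def reflector_depth_mm(time_of_flight_us, v_m_per_s=5900.0):
    """Depth (mm) of a reflector from a pulse-echo time of flight (us)."""
    # Divide by 2 because the pulse travels out and back.
    return v_m_per_s * (time_of_flight_us * 1e-6) / 2.0 * 1000.0

print(reflector_depth_mm(20.0))   # 20 us round trip in steel -> 59 mm
```

Calibration on a reference block of the same material, as the text notes, is what fixes the velocity and hence the accuracy of the estimated location.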