An Analysis of Working Capital Management Results Across Industries

Abstract
Firms are able to reduce financing costs and/or increase the funds available for expansion by minimizing the amount of funds tied up in current assets. We provide insights into the performance of surveyed firms across key components of working capital management by using the CFO magazine's annual Working Capital Management Survey. We discover that significant differences exist between industries in working capital measures across time. In addition, we discover that these measures for working capital change significantly within industries across time.

Introduction
The importance of efficient working capital management is indisputable. Working capital is the difference between resources in cash or readily convertible into cash (current assets) and organizational commitments for which cash will soon be required (current liabilities). The objective of working capital management is to maintain the optimum balance of each of the working capital components. Business viability relies on the ability to effectively manage receivables, inventory, and payables. Firms are able to reduce financing costs and/or increase the funds available for expansion by minimizing the amount of funds tied up in current assets. Much managerial effort is expended in bringing non-optimal levels of current assets and liabilities back toward optimal levels. An optimal level would be one in which a balance is achieved between risk and efficiency.

A recent example of business attempting to maximize working capital management is the recurrent attention being given to the application of Six Sigma® methodology. Six Sigma® methodologies help companies measure and ensure quality in all areas of the enterprise. When used to identify and rectify discrepancies, inefficiencies, and erroneous transactions in the financial supply chain, Six Sigma® reduces Days Sales Outstanding (DSO), accelerates the payment cycle, improves customer satisfaction, and reduces the necessary amount and cost of working capital needs. There appear to be many success stories, including Jennifertwon's (2002) report of a 15 percent decrease in days that sales are outstanding, resulting in an increased cash flow of approximately $2 million at Thibodaux Regional Medical Center; furthermore, bad debts declined from $3.4 million to $600,000. However, Waxer's (2003) study of multiple firms employing Six Sigma® finds that it is really a "get rich slow" technique, with a rate of return hovering in the 1.2 to 4.5 percent range.

Even in a business using Six Sigma® methodology, an "optimal" level of working capital management needs to be identified. Industry factors may impact firm credit policy, inventory management, and bill-paying activities. Some firms may be better suited to minimize receivables and inventory, while others maximize payables. Another aspect of "optimal" is the extent to which poor financial results can be tied to sub-optimal performance. Fortunately, these issues are testable with data published by CFO magazine, which claims to be the source of "tools and information for the financial executive," and are the subject of this research.

In addition to providing mean and variance values for the working capital measures and the overall metric, two issues will be addressed in this research. One research question is,
"Are firms within a particular industry clustered together at consistent levels of working capital measures?" For instance, are firms in one industry able to quickly transfer sales into cash, while firms from another industry tend to have high sales levels for their particular level of inventory? The other research question is, "Does working capital management performance for firms within a given industry change from year to year?"

The following section presents a brief literature review. Next, the research method is described, including some information about the annual Working Capital Management Survey published by CFO magazine. Findings are then presented and conclusions are drawn.

Related Literature
The importance of working capital management is not new to the finance literature. Over twenty years ago, Largay and Stickney (1980) reported that the then-recent bankruptcy of W.T. Grant, a nationwide chain of department stores, should have been anticipated because the corporation had been running a deficit cash flow from operations for eight of the last ten years of its corporate life. As part of a study of the Fortune 500's financial management practices, Gilbert and Reichert (1995) find that accounts receivable management models are used in 59 percent of these firms to improve working capital projects, while inventory management models were used in 60 percent of the companies. More recently, Farragher, Kleiman and Sahu (1999) find that 55 percent of firms in the S&P Industrial index complete some form of a cash flow assessment, but did not present insights regarding accounts receivable and inventory management, or the variations of any current asset accounts or liability accounts across industries. Thus, mixed evidence exists concerning the use of working capital management techniques.

Theoretical determination of optimal trade credit limits has been the subject of many articles over the years (e.g., Schwartz 1974; Scherr 1996), with scant attention paid to actual accounts receivable management. Across a limited sample, Weinraub and Visscher (1998) observe a tendency of firms with low levels of current ratios to also have low levels of current liabilities. Simultaneously investigating accounts receivable and payable issues, Hill, Sartoris, and Ferguson (1984) find differences in the way payment dates are defined: payees define the date of payment as the date payment is received, while payors view payment as the postmark date. Additional WCM insight across firms, industries, and time can add to this body of research.

Maness and Zietlow (2002, 51, 496) present two models of value creation that incorporate effective short-term financial management activities. However, these models are generic models and do not consider unique firm or industry influences. Maness and Zietlow discuss industry influences in a short paragraph that includes the observation that "an industry a company is located in may have more influence on that company's fortunes than overall GNP" (2002, 507). In fact, a careful review of this 627-page textbook finds only sporadic information on actual firm levels of WCM dimensions, virtually nothing on industry factors except for some boxed items with titles such as "Should a Retailer Offer an In-House Credit Card" (128), and nothing on WCM stability over time.
This research will attempt to fill this void by investigating patterns related to working capital measures within industries and illustrating differences between industries across time. An extensive survey of library and Internet resources provided very few recent reports about working capital management. The most relevant set of articles was Weisel and Bradley's (2003) article on cash flow management and one on inventory control as a result of effective supply chain management by Hadley (2004).

Research Method: The CFO Rankings
The first annual CFO Working Capital Survey, a joint project with REL Consultancy Group, was published in the June 1997 issue of CFO (Mintz and Lezere 1997). REL is a London, England-based management consulting firm specializing in working capital issues for its global list of clients. The original survey reports several working capital benchmarks for public companies using data for 1996. Each company is ranked against its peers and also against the entire field of 1,000 companies. REL continues to update the original information on an annual basis.

REL uses the "cash flow from operations" value located on firm cash flow statements to estimate cash conversion efficiency (CCE). This value indicates how well a company transforms revenues into cash flow. A "days of working capital" (DWC) value is based on the dollar amount in each of the aggregate, equally-weighted receivables, inventory, and payables accounts. The days of working capital represents the time period between purchase of inventory on account from vendors until the sale to the customer, the collection of the receivables, and payment receipt. Thus, it reflects the company's ability to finance its core operations with vendor credit. A detailed investigation of WCM is possible because CFO also provides firm and industry values for days sales outstanding (A/R), inventory turnover, and days payables outstanding (A/P).

Research Findings
Average and Annual Working Capital Management Performance
Working capital management component definitions and average values cover the entire 1996-2000 period. Across the nearly 1,000 firms in the survey, cash flow from operations, defined as cash flow from operations divided by sales and referred to as "cash conversion efficiency" (CCE), averages 9.0 percent. Incorporating a 95 percent confidence interval, CCE ranges from 5.6 percent to 12.4 percent. The days working capital (DWC), defined as the sum of receivables and inventories less payables divided by daily sales, averages 51.8 days and is very similar to the days that sales are outstanding (50.6), because the inventory turnover rate (once every 32.0 days) is similar to the number of days that payables are outstanding (32.4 days). In all instances, the standard deviation is relatively small, suggesting that these working capital management variables are consistent across CFO reports.

Industry Rankings on Overall Working Capital Management Performance
CFO magazine provides an overall working capital ranking for firms in its survey, using the following equation: [...]. Industry-based differences in overall working capital management are presented for the twenty-six industries that had at least eight companies included in the rankings each year. In the typical year, CFO magazine ranks 970 companies during this period. Industries are listed in order of the mean overall CFO ranking of working capital performance.
Since the best average ranking possible for an eight-company industry is 4.5 (this assumes that the eight companies are ranked one through eight for the entire survey), it is quite obvious that all firms in the petroleum industry must have been receiving very high overall working capital management rankings. In fact, the petroleum industry is ranked first in CCE and third in DWC (as illustrated in Table 5 and discussed later in this paper). Furthermore, the petroleum industry had the lowest standard deviation of working capital rankings and range of working capital rankings. The only other industry with a mean overall ranking less than 100 was the Electric & Gas Utility industry, which ranked second in CCE and fourth in DWC. The two industries with the worst working capital rankings were Textiles and Apparel. Textiles rank twenty-second in CCE and twenty-sixth in DWC. The apparel industry ranks twenty-third and twenty-fourth in the two working capital measures.

Conclusions
The research presented here is based on the annual ratings of working capital management published in CFO magazine. Our findings indicate a consistency in how industries "stack up" against each other over time with respect to the working capital measures. However, the working capital measures themselves are not static (i.e., averages of working capital measures across all firms change annually); our results indicate significant movements across our entire sample over time. Our findings are important because they provide insight into working capital performance across time, and into working capital management across industries. These changes may be explained in part by macroeconomic factors: changes in interest rates, rate of innovation, and competition are likely to impact working capital management. As interest rates rise, there would be less desire to make payments early, which would stretch accounts payable, accounts receivable, and cash accounts. The ramifications of this study include the finding of distinct levels of WCM measures for different industries, which tend to be stable over time. Many factors help to explain this discovery. The improving economy during the period of the study may have resulted in improved turnover in some industries, while slowing turnover may have been a signal of troubles ahead. Our results should be interpreted cautiously. Our study takes place over a short time frame during a generally improving market. In addition, the survey suffers from survivorship bias: only the top firms within each industry are ranked each year, and the composition of those firms within the industry can change annually.

Further research may take one of two lines. First, there could be a study of whether stock prices respond to CFO magazine's publication of working capital management ratings. Second, there could be a study of which, if any, of the working capital management components relate to share price performance. Given our results, these studies need to take industry membership into consideration when estimating stock price reaction to working capital management performance.

A Study of Working Capital Management Across Industries. Greg Filbeck, Schweser Study Program; Thomas M. Krueger, University of Wisconsin-La Crosse. Abstract: Firms can reduce financing costs and/or increase the funds available for expansion by minimizing the amount of funds tied up in current assets.
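To make the two survey metrics defined above concrete, here is a minimal Java sketch that computes CCE and DWC exactly as the article defines them (cash flow from operations divided by sales, and receivables plus inventory less payables divided by daily sales). The input figures are hypothetical, chosen only so the outputs land near the reported survey averages.

public class WorkingCapitalMetrics {
    // Cash conversion efficiency: cash flow from operations divided by sales.
    static double cce(double cashFlowFromOperations, double sales) {
        return cashFlowFromOperations / sales;
    }

    // Days of working capital: (receivables + inventory - payables) / daily sales.
    static double dwc(double receivables, double inventory, double payables, double annualSales) {
        double dailySales = annualSales / 365.0;
        return (receivables + inventory - payables) / dailySales;
    }

    public static void main(String[] args) {
        // Hypothetical firm: $100M sales, $9M operating cash flow,
        // $14M receivables, $9M inventory, $9M payables.
        System.out.printf("CCE = %.1f%%%n", 100 * cce(9.0, 100.0));          // 9.0 percent
        System.out.printf("DWC = %.1f days%n", dwc(14.0, 9.0, 9.0, 100.0));  // about 51 days
    }
}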
Agriculture and Human Values (2006), 23: 75-88. Farmer-Community Connections and the Future of Ecological Agriculture. Authors: Sonja Brodt1, Gail Feenstra2, Robin Kozloff3, Karen Klonsky4, and Laura Tourte5. Author affiliations: 1 Division of Agriculture and Natural Resources, University of California, Davis, California, USA; 2 Sustainable Agriculture Research and Education Program, University of California, Davis, California, USA; 3 Private Consultant, Davis, California, USA; 4 Department of Agricultural and Resource Economics, University of California, Davis, California, USA; 5 Santa Cruz County.

Abstract. While questions about the environmental sustainability of contemporary farming practices and the socioeconomic viability of rural communities are attracting increasing attention throughout the US, these two issues are rarely considered together. This paper explores the current and potential connections between these two aspects of sustainability, using data on community members' and farmers' views of agricultural issues in California's Central Valley. These views were collected from a series of individual and group interviews with biologically oriented and conventional farmers as well as community stakeholders. Local marketing, farmland preservation, and perceptions of sustainable agriculture comprised the primary topics of discussion. The mixed results indicate that, while many farmers and community members have a strong interest in these topics, sustainable community development and the use of sustainable farming practices are seldom explicitly linked. On the other hand, many separate efforts around the Valley to increase local marketing and agritourism, improve public education about agriculture, and organize grassroots farmland preservation initiatives were documented. We conclude that linking these efforts more explicitly to sustainable agriculture and promoting more engagement between ecologically oriented farmers and their communities could engender more economic and political support for these farmers, helping them and their communities to achieve greater sustainability in the long run.

Key words: California's Central Valley, Community development, Farmer-consumer connections, Farmland preservation, Local marketing, Sustainable agriculture
Translation: Fire Safety Design of Buildings
拉格夫

Abstract: This paper addresses the fire safety design of buildings. Fire acts on a building quite differently from gravity loads, wind loads, seismic forces, and the other actions applied to a building structure. Fires are started by human activity or by mechanical and electrical faults within buildings.

1. Introduction
Other papers consider the design of buildings for gravity loads, wind, earthquakes, and a range of related issues. Design of buildings for the effects of those loads is, to a very large extent, covered by engineering standards referenced in building codes. This is much less the case for the event of fire. Instead, regulations such as the Building Code of Australia specify the standards for building fire safety, for example determining fire-resistant members by the methods of AS 3600 and AS 4100. The purpose of this paper is to consider the fire safety design of buildings from an engineering perspective (as is currently done for other loads such as wind or earthquake), while applying this approach within the environment of current regulatory requirements.

It must first be pointed out that designing a building for fire by considering only the structure, or only whether it has adequate structural capacity, is far from sufficient. This is because fire can affect occupants directly through smoke and heat, and can spread and grow in severity; the other actions on buildings do not share these characteristics. Notwithstanding these comments, most of the emphasis of this paper will remain on the design of the building structure.

Two fire situations in one building are chosen as the subjects of discussion. The multi-storey office building shown in Figure 1 uses a transfer structure spanning over railway tracks. It is assumed that the tracks carry extensive rail traffic, including freight hauled by diesel locomotives. The first situation, considered from a fire safety perspective, is the transfer structure. This is called Situation 1, and its key questions are: what level of fire resistance is required for this transfer structure, and how should it be determined? This situation was selected because it clearly falls outside the normal regulatory scope of most building codes; what is needed is an engineered rather than a prescriptive solution. The second fire situation (called Situation 2) relates to fire within the storeys of the building and is covered by the building codes. It was chosen because it prompts discussion of the engineering approach and of how it interfaces with the building regulations, since both engineered and prescriptive approaches are feasible.

2. The Unique Nature of Fire
2.1 Introduction
Designers cannot control "natural" phenomena such as wind and earthquakes; they can only site buildings more sensibly on the basis of historical records, or increase the load capacity of the structure.
Road and bridge engineering translation: Asphalt Mixtures: Applications, Theory, and Principles

1. Applications
[...] industry. The most common application of asphalt is in the construction of pavements, known as "flexible" pavements to distinguish them from those made with Portland cement. [...]

2. [...]
[...] the use of aggregates, [...] sand, or gravel, and a binder [...] for the pavement. The quality of the asphalt [...] to the performance of the pavement, as it must be able to [...].

3. Principles
[...] with each layer [...]. The layers typically include a subgrade, a sub-base, a base course, and a surface course. The subgrade is the natural soil or rock upon which the pavement is built, while the sub-base and base courses provide additional support for the pavement. The surface course is the layer that comes into direct contact with traffic and is [...]. In addition, the use of [...]. The construction of flexible pavement can be subdivided into high and low types.
Foreign literature translation for international trade (graduation thesis, Chinese-English): Green Barriers to Trade and Its Influences on China's Foreign Trade

Journal of Economic Surveys, 2006, 11: 24-25.

Green Barriers to Trade and Its Influences on China's Foreign Trade
Thomas J. Sargent

Abstract
In recent years, green consumption has become a main trend of the consumption in many developed countries, and these countries began to make strict standards to restrict the entry of foreign products falling below their standards of environmental protection.

Key words: Green Barriers; products; Trade

In recent years, green consumption has become a main trend of the consumption in many developed countries, and these countries began to make strict standards to restrict the entry of foreign products falling below their standards of environmental protection. These regulations have many unfavorable influences on the export of developing countries and are generally known as "Green Barriers to Trade". In accordance with the provisions of the Agreement on Green Barriers to Trade of the WTO, "Green Barriers to Trade" are defined as the compulsory and arbitrary green regulations, standards, and conformity assessment procedures of the importing countries, adopted in the name of the protection of human health and the environment, that actually form barriers to trade with the aim of protecting the home market and domestic products.

1. Analysis of the causes of formation of "Green Trade Barriers"
Firstly, the worsening of ecology is the major reason for "Green Barriers". With the development of industry and technology, the economy grows very fast and human life has been improved. But at the same time, the development of the economy comes at the cost of the destruction of the environment. Environmental problems have aroused public attention, and international society has begun to make laws to protect the environment. In June 1972, the United Nations published the Stockholm Declaration. [...]

2.2 Green Standards
Green standards refer to those compulsory green standards provided through legislation. With their superiority in economy and technology, developed countries tend to set higher green standards with no consideration of the interests of the developing countries. Such high green standards will in fact constitute a barrier to the products from developing countries, which are inferior in technology.

2.3 Package Requirements
Certain developed countries stress the protection of the environment so strongly that they require products to be packed with materials that do no harm to the environment. If the products are not packed in this way, they will not be allowed to be sold in the developed countries. If such requirements are unnecessarily strict, they will be a barrier to international trade.

2.4 Sanitary and Quarantine Inspection System
On the excuse of protecting the health of humans, animals, and plants, developed countries tend to use very strict sanitary and quarantine inspection to restrict the importation of products from the developing countries and protect their domestic industries.

3. Influences of green barriers on China's foreign trade
China has suffered great loss due to the "Green barriers". In 2002, vegetables from Taizhou were prevented from entering Japan because of Japan's strict inspection, and the price was greatly cut down. Also in 2002, the aquatic products from Ningbo were restricted by the European Union (EU) because they could not reach the sanitary standards of the EU. Due to green trade barriers, 60 kinds of Chinese agricultural chemicals were banned by the EU because they could not reach the green standards of the EU.
In accordance with the statistics of the United Nations, China suffered a loss of $7.4 billion in 2002 due to "green barriers to trade". China's export to the EU, Japan, Korea, and other countries decreased notably. Generally speaking, agricultural products and foodstuffs, textile products, and mechanical and electronic products are the three main industries that suffer great loss because of the strict green barriers. Since these three products constitute the majority of Chinese exportation, we can easily draw a conclusion: "green barriers to trade" have become one of the major obstacles to Chinese exportation.

4. Countermeasures to the green barriers of the developed countries
As mentioned above, it is a fact that Chinese export products are facing the green barriers of the developed countries and have suffered great loss. Therefore, Chinese exporters should think carefully about the countermeasures to eliminate the unfavorable influences of such measures. First, we should make full use of the preferential treatment for developing countries stipulated in the Agreement on Green Trade Barriers. According to the provisions of the Agreement, developed countries should take account of the special development, financial, and trade needs of developing country members, with a view to ensuring that such green regulations, standards, and conformity assessment procedures do not create unnecessary obstacles to exports from developing countries. So, as a developing member of the WTO, China is entitled to such preferential treatment. Secondly, China should make use of the Dispute Settlement System of the WTO to protect her interests. Different from GATT, the WTO has set up a powerful dispute settlement system to solve the disputes between the members of the WTO. So, if our interests are harmed by the unfair green barriers of other WTO members, we can resort to the Dispute Settlement Body to settle the dispute and urge other members to change their unfair practices so as to protect our interests. Thirdly, China should stress the protection of the environment and take measures to improve the quality and green level of her export products to meet higher green standards, which will fundamentally solve the problem of green barriers.

Journal of Economic Research, 2006, 11: 24-27. Green Trade Barriers and Their Influence on China's Foreign Trade. Sargent, School of Economics and Management, Rice University. Abstract: In recent years, green consumption has become a main trend of consumption in many developed countries, and these countries have begun to adopt strict measures to restrict the entry of other countries' products into their domestic markets.
Chinese-English foreign literature translation: Bridge Research in Europe

A brief outline is given of the development of the European Union, together with the research platform in Europe. The special case of post-tensioned bridges in the UK is discussed. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio, relating to the identification of voids in post-tensioned concrete bridges using digital impulse radar.

Introduction
The challenge in any research arena is to harness the findings of different research groups to identify a coherent mass of data, which enables research and practice to be better focused. A particular challenge exists with respect to Europe, where language barriers are inevitably very significant. The European Community was formed in the 1960s, based upon a political will within continental Europe to avoid the European civil wars which developed into World War 2 from 1939 to 1945. The strong political motivation formed the original community, of which Britain was not a member. Many of the continental countries saw Britain's interest as being purely economic. The 1970s saw Britain joining what was then the European Economic Community (EEC), and the 1990s has seen the widening of the community to a European Union, EU, with certain political goals together with the objective of a common European currency.

Notwithstanding these financial and political developments, civil engineering, and bridge engineering in particular, have found great difficulty in forming any kind of common thread. Indeed, the educational systems for university training are quite different between Britain and the European continental countries. The formation of the EU funding schemes, e.g. Socrates, Brite Euram and other programs, has helped significantly. The Socrates scheme is based upon the exchange of students between universities in different member states. The Brite Euram scheme has involved technical research grants given to consortia of academics and industrial partners within a number of the states; a Brite Euram bid would normally be led by an industrialist.

In terms of dissemination of knowledge, two quite different strands appear to have emerged. The UK and the USA have concentrated primarily upon disseminating basic research in refereed journal publications (ASCE, ICE and other journals), whereas the continental Europeans have frequently disseminated basic research at conferences where the circulation of the proceedings is restricted. Additionally, language barriers have proved to be very difficult to break down. In countries where English is a strong second language there has been enthusiastic participation in international conferences based within continental Europe (e.g., Germany, Italy, Belgium, The Netherlands and Switzerland). However, countries where English is not a strong second language have been hesitant participants (e.g., France).

European research
Examples of research relating to bridges in Europe can be divided into three types of structure:

Masonry arch bridges. Britain has the largest stock of masonry arch bridges. In certain regions of the UK up to 60% of the road bridges are historic stone masonry arch bridges originally constructed for horse-drawn traffic. This is less common in other parts of Europe, as many of these bridges were destroyed during World War 2.

Concrete bridges. A large stock of concrete bridges was constructed during the 1950s, 1960s and 1970s.
At the time, these structures were seen as maintenance-free. Europe also has a large number of post-tensioned concrete bridges with steel tendon ducts preventing radar inspection. This is a particular problem in France and the UK.

Steel bridges. Steel bridges went out of fashion in the UK due to their need for maintenance, as perceived in the 1960s and 1970s. However, they have been used for long-span and rail bridges, and they are now returning to fashion for motorway widening schemes in the UK.

Research activity in Europe
This gives an indication of certain areas of expertise and work being undertaken in Europe, but is by no means exhaustive. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio. The example relates to the identification of voids in post-tensioned concrete bridges, using digital impulse radar.

Post-tensioned concrete rail bridge analysis
Ove Arup and Partners carried out an inspection and assessment of the superstructure of a 160 m long post-tensioned, segmental railway bridge in Manchester to determine its load-carrying capacity prior to a transfer of ownership, for use in the Metrolink light rail system. Particular attention was paid to the integrity of its post-tensioned steel elements. Physical inspection, non-destructive radar testing and other exploratory methods were used to investigate for possible weaknesses in the bridge.

Since the sudden collapse of Ynys-y-Gwas Bridge in Wales, UK in 1985, there has been concern about the long-term integrity of segmental, post-tensioned concrete bridges, which may be prone to 'brittle' failure without warning. The corrosion protection of the post-tensioned steel cables, where they pass through joints between the segments, has been identified as a major factor affecting the long-term durability and consequent strength of this type of bridge. The identification of voids in grouted tendon ducts at vulnerable positions is recognized as an important step in the detection of such corrosion.

Description of bridge
General arrangement. Besses o' th' Barn Bridge is a 160 m long, three-span, segmental, post-tensioned concrete railway bridge built in 1969. The main span of 90 m crosses over both the M62 motorway and the A665 Bury to Prestwick Road. Minimum headroom is 5.18 m from the A665, and the M62 is cleared by approximately 12.5 m.

The superstructure consists of a central hollow trapezoidal concrete box section 6.7 m high and 4 m wide. The majority of the south and central spans are constructed using 1.27 m long pre-cast concrete trapezoidal box units, post-tensioned together. This box section supports the in situ concrete transverse cantilever slabs at bottom flange level, which carry the rail tracks and ballast.

The center and south span sections are of post-tensioned construction. These post-tensioned sections have five types of pre-stressing:
1. Longitudinal tendons in grouted ducts within the top and bottom flanges.
2. Longitudinal internal draped tendons located alongside the webs. These are deflected at internal diaphragm positions and are encased in in situ concrete.
3. Longitudinal Macalloy bars in the transverse cantilever slabs in the central span.
4. Vertical Macalloy bars in the 229 mm wide webs to enhance shear capacity.
5. Transverse Macalloy bars through the bottom flange to support the transverse cantilever slabs.

Segmental construction
The pre-cast segmental system of construction used for the south and center span sections was an alternative method proposed by the contractor.
Current thinking suggests that such a form of construction can lead to 'brittle' failure of the entire structure without warning, due to corrosion of tendons across a construction joint. The original design concept had been for in situ concrete construction.

Inspection and assessment
Inspection. Inspection work was undertaken in a number of phases and was linked with the testing required for the structure. The initial inspections recorded a number of visible problems, including:
Defective waterproofing on the exposed surface of the top flange.
Water trapped in the internal space of the hollow box, with depths up to 300 mm.
Various drainage problems at joints and abutments.
Longitudinal cracking of the exposed soffit of the central span.
Longitudinal cracking on sides of the top flange of the pre-stressed sections.
Widespread spalling on some in situ concrete surfaces, with exposed rusting reinforcement.

Assessment. The subject of an earlier paper, the objectives of the assessment were:
Estimate the present load-carrying capacity.
Identify any structural deficiencies in the original design.
Determine reasons for existing problems identified by the inspection.

Conclusion to the inspection and assessment
Following the inspection and the analytical assessment, one major element of doubt still existed. This concerned the condition of the embedded pre-stressing wires, strands, cables or bars. For the purpose of structural analysis these elements had been assumed to be sound. However, due to the very high forces involved, a risk to the structure, caused by corrosion of these primary elements, was identified. The initial recommendations, which completed the first phase of the assessment, were:
1. Carry out detailed material testing to determine the condition of hidden structural elements, in particular the grouted post-tensioned steel cables.
2. Conduct concrete durability tests.
3. Undertake repairs to defective waterproofing and surface defects in concrete.

Testing procedures
Non-destructive radar testing. During the first-phase investigation at a joint between pre-cast deck segments, the observation of a void in a post-tensioned cable duct gave rise to serious concern about corrosion and the integrity of the pre-stress. However, the extent of this problem was extremely difficult to determine. The bridge contains 93 joints with an average of 24 cables passing through each joint; i.e., there were approximately 2,200 positions where investigations could be carried out. In a typical section through such a joint, the 24 draped tendons within the spine did not give rise to concern, because these were protected by in situ concrete poured without joints after the cables had been stressed.

As it was clearly impractical to consider physically exposing all tendon/joint intersections, radar was used to investigate a large number of tendons and hence locate duct voids within a modest timescale. It was fortunate that the corrugated steel ducts around the tendons were discontinuous through the joints, which allowed the radar to detect the tendons and voids. The problem, however, was still highly complex, due to the high density of other steel elements which could interfere with the radar signals, and the fact that the area of interest was at most 102 mm wide and embedded between 150 mm and 800 mm deep in thick concrete slabs.

Trial radar investigations. Three companies were invited to visit the bridge and conduct a trial investigation. One company decided not to proceed. The remaining two were given 2 weeks to mobilize, test and report.
Their results were then compared with physical explorations. To make the comparisons, observation holes were drilled vertically downwards into the ducts at a selection of 10 locations, which included several where voids were predicted and several where the ducts were predicted to be fully grouted. A 25 mm diameter hole was required in order to facilitate use of the chosen borescope. The results from the University of Edinburgh yielded an accuracy of around 60%.

Main radar survey, borescope verification of voids. Having completed a radar survey of the total structure, a borescope was then used to investigate all predicted voids, and in more than 60% of cases this gave a clear confirmation of the radar findings. In several other cases some evidence of honeycombing in the in situ stitch concrete above the duct was found. When viewing voids through the borescope, however, it proved impossible to determine their actual size or how far they extended along the tendon ducts, although they only appeared to occupy less than the top 25% of the duct diameter. Most of these voids, in fact, were smaller than the diameter of the flexible borescope being used (approximately 9 mm) and were seen between the horizontal top surface of the grout and the curved upper limit of the duct. In a very few cases the tops of the pre-stressing strands were visible above the grout, but no sign of any trapped water was seen. It was not possible, using the borescope, to see whether those cables were corroded.

Digital radar testing
The test method involved exciting the joints using radio-frequency radar antennae: 1 GHz, 900 MHz and 500 MHz. The highest frequency gives the highest resolution but has shallow depth penetration in the concrete. The lowest frequency gives the greatest depth penetration but yields lower resolution. The data collected on the radar sweeps were recorded on a GSSI SIR System 10. This system involves radar pulsing and recording. The data from the antenna is transformed from an analogue signal to a digital signal using a 16-bit analogue-to-digital converter, giving a very high resolution for subsequent data processing. The data is displayed on site on a high-resolution color monitor. Following visual inspection it is then stored digitally on a 2.3-gigabyte tape for subsequent analysis and signal processing. The tape first of all records a 'header' noting the digital radar settings together with the trace number prior to recording the actual data. When the data is played back, one is able to clearly identify all the relevant settings, making for accurate and reliable data reproduction. At particular locations along the traces, the trace was marked using a marker switch on the recording unit or the antenna.

All the digital records were subsequently downloaded at the University's NDT laboratory onto a micro-computer. (The raw data prior to processing consumed 35 megabytes of digital data.) Post-processing was undertaken using sophisticated signal processing software. Techniques available for the analysis include changing the color transform and changing the scales from linear to a skewed distribution in order to highlight certain features. Also, the color transforms could be changed to highlight phase changes. In addition to these color transform facilities, sophisticated horizontal and vertical filtering procedures are available. Using a large-screen monitor it is possible to display in split screens the raw data and the transformed, processed data.
Thus one is able to get an accurate indication of the processing which has taken place. The computer screen displays the time-domain calibrations of the reflected signals on the vertical axis. A further facility of the software was the ability to display the individual radar pulses as time-domain wiggle plots. This was a particularly valuable feature when looking at individual records in the vicinity of the tendons.

Interpretation of findings
A full analysis of findings is given elsewhere. Essentially, the digitized radar plots were transformed to color line scans, and where double phase shifts were identified in the joints, voiding was diagnosed.

Conclusions
1. An outline of the bridge research platform in Europe is given.
2. The use of impulse radar has contributed considerably to the level of confidence in the assessment of the Besses o' th' Barn Rail Bridge.
3. The radar investigations revealed extensive voiding within the post-tensioned cable ducts. However, no sign of corrosion on the stressing wires had been found except for the very first investigation.
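As a generic illustration of the kind of post-processing step described above, the following Java sketch applies a simple horizontal moving-average filter across adjacent radar traces: averaging neighbouring traces suppresses uncorrelated noise while coherent reflectors (such as a duct) persist. This is only an illustrative assumption about what horizontal filtering can look like, not the algorithm of the GSSI software; the data layout is likewise assumed.

public class RadarTraceFilter {
    // traces[i][j] holds the j-th 16-bit time-domain sample of trace i,
    // with traces ordered by antenna position along the joint (assumed layout).
    static double[][] horizontalMovingAverage(short[][] traces, int halfWindow) {
        int n = traces.length, m = traces[0].length;
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                double sum = 0;
                int count = 0;
                // Average the same time-domain sample over neighbouring traces.
                for (int k = Math.max(0, i - halfWindow); k <= Math.min(n - 1, i + halfWindow); k++) {
                    sum += traces[k][j];
                    count++;
                }
                out[i][j] = sum / count;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        short[][] demo = {{100, -200, 50}, {110, -190, 40}, {90, -210, 60}};
        double[][] filtered = horizontalMovingAverage(demo, 1);
        System.out.println(java.util.Arrays.deepToString(filtered));
    }
}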
Foreign literature translation: Analysis of Corporate Profit Quality

Abstract
This document translates foreign literature on the analysis of corporate profit quality, aiming to explore the differences and similarities between China and the UK in this field. The literature covers the definition of profit quality analysis, its methods, and empirical results, and provides an important reference for readers seeking to understand and apply the theory and practice of this field.

Introduction
Corporate profit quality analysis is an important research direction in finance, concerned with the stability, reliability, and sustainability of a firm's profitability. With the deepening of global economic integration, research on profit quality analysis has become increasingly active in both China and the UK. This document selects several English-language articles related to profit quality analysis and translates their content, in order to help readers follow the research progress of the two countries in this field.

The definition of profit quality analysis
According to the literature, profit quality analysis is a method of evaluating a firm's profit-generating activities through quantitative data such as financial statements and financial indicators. Its purpose is to reveal the authenticity and reliability of profit figures and the associated risk factors, so as to provide a basis for decisions by investors, managers, and regulators. Profit quality can be analyzed along several dimensions, such as the stability, accuracy, growth rate, and sources of profit. China and the UK differ somewhat in how profit quality analysis is defined, as well as in the specific indicators selected and the calculation methods used.

Methods of profit quality analysis
The literature introduces a variety of methods for profit quality analysis, including financial ratio analysis, the construction of financial models, and statistical analysis. These methods assess the quality of a firm's profit figures by analyzing its financial data, operating environment, industry characteristics, and other factors. The methods used in China and the UK are broadly similar, both encompassing a range of quantitative tools and techniques. However, the specific analytical models and chosen indicators may differ, influenced by the two countries' financial accounting standards and regulatory requirements.

Empirical results of profit quality analysis
The literature summarizes a number of empirical results on profit quality analysis. These results reveal the relationships between profit quality and firm performance, risk management, and information transparency. The empirical results of the two countries differ in certain respects, which may be due to their different economic, financial, and legal environments. Nevertheless, the literature also identifies some common trends, which provide an important reference for profit quality research in both China and the UK.

Conclusion
Through the translation of the literature on profit quality analysis, we can understand the research progress in, and the differences between, China and the UK in this field.
Trends in Robotics Technology
By Jim Pinto, San Diego, California, USA

When it comes to robots, reality still lags behind science fiction. But just because robots have not lived up to their promise in past decades does not mean that their time will not come, sooner or later. In fact, the combined impact of several advanced technologies has brought the age of robotics closer: robots are becoming smaller, cheaper, more practical, and more cost-effective.

Muscles, bones, and brains
Every robot has three aspects:
·Muscles: the effective means of handling the physical loads involved so that the robot can move.
·Bones: a robot's physical structure depends on the work it does; its size and weight depend on its physical load.
·Brains: robot intelligence; what it can think about and do on its own; how much human interaction is required.

Because of the way robots have been depicted in science fiction, many people expect robots to resemble humans in appearance. In fact, what a robot looks like depends more on the work it does or the functions it performs. Many machines that look nothing like humans are clearly classified as robots, while many human-looking machines are still merely mechanisms and toys.

Many early robots were large machines that had great strength and little else. The old hydraulically powered robots were relegated to the "3-D" tasks: dull, dirty, and dangerous. Advances in the underlying industrial technologies have thoroughly transformed robots' capabilities, performance, and strategic benefits. In the 1980s, for example, robots began to shift from hydraulic power to electric units, and precision and performance improved.

Industrial robots at work
Today the number of robots worldwide is approaching one million, with more than half of them in Japan and only about 15 percent in the US. A few decades ago, 90 percent of robots served the automotive industry, usually doing highly repetitive work. Today only 50 percent of robots are used in automobile manufacturing, while the other half are distributed among factories, laboratories, warehouses, power stations, hospitals, and other industries. Robots are used for product assembly, handling of dangerous materials, spray painting, polishing, and product inspection. The number of robots used for tasks as varied as cleaning sewers, detecting bombs, and performing complex surgery is increasing steadily, and it will continue to grow in the coming years.

Robot intelligence
Even with primitive intelligence, robots have demonstrated the ability to generate good benefits in productivity, efficiency, and quality.
Chinese-English foreign literature translation: Research on the Dynamic Stability of Steel Radial Gates

Abstract
Owing to the structural characteristics and flexibility of steel radial gates, the investigation of parametric resonance of radial gate arms has long been a focal topic in research on the dynamic stability of radial gates. In this paper a simplified spatial frame is adopted as the analytical model. Based on the perturbation equations of elastic thin-walled structures and a beam-element model of thin-walled structures, the dynamic instability regions of a radial gate can be obtained by the finite element method, and this method is applied to compute the principal dynamic instability regions of a working radial gate. In addition, combining physical and numerical models, a new method for identifying parametric resonance in steel radial gates is investigated. This paper not only represents an important improvement to the computational methods for the parametric vibration of radial gates, but also lays a solid foundation for further research on the dynamic stability of radial gate structures.

Introduction
Advantages such as low lifting force, absence of gate slots, favourable flow pattern, and convenient operation have led to the wide use of steel radial gates in hydraulic structures. A structural characteristic of the radial gate is that the hydraulic pressure acting on the gate is transmitted entirely through the gate leaf and the main girders, so the gate arms are the principal components that ensure safe operation of the gate. If a periodic axial load acts on the arms, instability of the arms can occur under certain conditions. Investigations indicate that in twenty radial gate accidents, apart from a few very special failure cases, the cause of failure was instability of the gate arms; moreover, the failures occurred under evident dynamic action. For example, the Zhangshan sluice, located in Jiangsu Province, China, comprises 36 radial gates. When one gate was opened to release water it was destroyed, while the other gates, which remained closed and were subject to the same static hydrostatic pressure, survived; clearly, dynamic loading was a major factor in the destruction of that gate. There is therefore no doubt that dynamic instability of the arms is the main cause of failure of radial gates, particularly low-head radial gates. Given the structural and loading characteristics of radial gates, research on steel radial gates concentrates on the dynamic instability of the gate arms.

In the 1980s, Professor Yan Shiwu and Professor Zhang Jiguang recognized parametric vibration as one of the causes of the dynamic instability of radial gate arms. They proposed a simple analytical method which has been widely cited in the literature in recent years. However, those investigations were all based on models in which the gate arm is treated as a simple plane beam. Since a radial gate is a complex spatial structure with pronounced three-dimensional effects, a simple plane-beam model cannot capture these spatial effects and therefore cannot accurately represent the dynamic instability of the arms. This paper accordingly proposes a computational method for analyzing the dynamic instability of radial gates.
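For context, the parametric vibration of a member under a pulsating axial load is classically described by a Mathieu-type equation; the form below is the standard textbook model, given here as an illustrative assumption rather than an equation reproduced from this paper:

\ddot{q}(t) + \omega_0^2 \left[ 1 - 2\mu \cos(\theta t) \right] q(t) = 0

Here q is the generalized displacement of the arm, \omega_0 its natural frequency, \theta the frequency of the periodic axial load, and \mu an excitation parameter proportional to the amplitude of that load. The principal dynamic instability region of such a system lies near \theta \approx 2\omega_0, which is why computing the instability regions, as the paper does by the finite element method, amounts to mapping the combinations of load frequency and amplitude that fall inside these regions.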
Appendix 1: Translation of foreign material
The Development of Steel Construction in Portugal: The Influence of Patrick Dowling
António Ressano Garcia Lamas

Abstract: Based on university teaching and research and on the creation of the Portuguese association for steel and composite construction, this paper describes the influence of Professor Patrick Dowling on the author's career and on the development of steel construction in Portugal.
Keywords: Portugal; steel; structures; research

1. Introduction
Until the advent of reinforced concrete and the arrival of the First World War, Portugal followed closely behind most European countries in the application of steel. Thereafter, although foundries existed, Portugal levied high taxes on imported structural products in order to protect the developing cement industry of Portugal and its colonies. The first steelworks, located at Seixal across the river from Lisbon, was built in 1961 and obtained iron ore and coal from Angola. This large investment did not halt the decline in the use of structural steel, however; because the plant mainly produced steel for industrial buildings and small rolled sections, the use of structural steel remained largely confined to industrial construction. The steel design codes reflected this decline, and Portugal continued to use outdated codes until the 1980s. Indeed, the absence of research and teaching on steel structures in the universities was one of the causes of the decline in the use of structural steel, and that decline in turn reinforced the vicious circle.

The situation was quite different in the concrete field. There, Portugal's National Laboratory for Civil Engineering (LNEC) was a world-leading institution and an active participant in the European bodies drafting concrete codes. Portugal lies in a seismic risk zone, and the research carried out by the national laboratory on the seismic behaviour of concrete structures, together with the availability of advanced codes, gave the concrete industry confidence and an advantage. As a result, all civil engineering courses concentrated on the teaching of concrete and neglected steel and composite structures.

A few years before the democratic revolution of 1974, a profound reform of the education system took place in civil engineering. A large programme for training new researchers was established, dedicated to fields not covered by the universities and public laboratories and providing funding for study abroad. The engineering school of the Technical University of Lisbon, the Instituto Superior Técnico (IST), was among the first institutions to benefit from this programme, and steel structures were wisely identified as a principal requirement for the training of new civil engineering staff. Because of the ongoing colonial wars, it was only after completing a long period of military service that I obtained a grant enabling me to study steel structures at Imperial College.
Chinese-English foreign literature translation: bi-directional silicon controlled rectifiers

Original text: Investigation on Bi-directional SCR ESD Protection Devices in a 0.18μm RF CMOS Process

Abstract: Based on the bi-directional silicon controlled rectifier (SCR), two novel electrostatic discharge (ESD) protection devices have been proposed, which can protect against ESD stresses in both the positive and the negative directions. While the conventional dual-direction SCR ESD protection device is usually triggered by the avalanche breakdown between N-well and P-well, the two proposed devices use an embedded NMOS/PMOS as the triggering structure to decrease the trigger voltage. Both modified structures are implemented in a 0.18μm RF CMOS process and examined by a transmission line pulse testing system. Experimental results indicate that the proposed devices have lower trigger voltage, smaller leakage current (~nA), a protection level passing 2 kV of the human body model, and a high holding voltage (>3.3 V), making them immune to latch-up in 1.8 V or 3.3 V I/O ESD protection applications.

Introduction
Electrostatic discharge (ESD) has become one of the most serious reliability concerns in current integrated circuits (ICs). With the continuous decrease of device size and increase of circuit complexity, modern ICs are more susceptible to ESD stress. Providing adequate ESD protection for ICs has thus become an important and challenging task. Traditional ESD snapback protection devices, such as the bipolar junction transistor (BJT), the grounded-gate NMOS (GGNMOS), and the low-triggering-voltage silicon controlled rectifier (SCR), can usually protect circuits in one direction: they provide a forward current shunt path for the positive ESD stress, and rely on the body diode to protect against the negative ESD stress. However, dual-directional protection capability is also necessary for ESD protection devices used in some circuits, such as the column drivers in liquid crystal displays, RF inputs, interface applications, and digital-analog converters.

The most area-efficient ESD protection solution is the dual-direction silicon controlled rectifier (DDSCR), which can form the snapback in two directions to protect against both the positive and negative stresses. Unfortunately, the trigger voltage of the DDSCR is quite high due to its inherent triggering mechanism. For example, since the breakdown voltage of the gate oxide is less than 20 V in the 0.18μm CMOS process, a trigger voltage as high as about 15 V makes the DDSCR inappropriate for providing effective ESD protection for modern IC cores. To reduce the trigger voltage, two new devices based on the DDSCR are developed and realized in the 0.18μm RF CMOS process in this paper. The transmission line pulse (TLP) testing results indicate that the improved devices possess lower trigger voltage and smaller leakage current.

1 Conventional DDSCR
The cross-section of a conventional DDSCR containing two embedded symmetrical SCRs (SCR1 and SCR2) is shown in Fig.1. When a positive ESD stress is applied to Terminal 1, the parasitic transistor T2 is off due to the reversely biased N-well/P-well junction. When the ESD stress reaches the avalanche breakdown voltage of N-well/P-well, significant electron-hole pairs are generated. The current flows from the N-well to the P-well, and the parasitic resistance R4 in the current path produces an electrical potential drop, helping the base junction of the transistor T3 build up its potential. When this potential is greater than 0.7 V, T3 is turned on.
Then SCR1 is successfully triggered by the positive-feedback regeneration and driven into the deep snapback region with a low holding voltage. An active discharging path with low impedance is formed to shunt the huge current in the forward direction, and the I/O PAD voltage is clamped to a low level. When a negative ESD stress is applied to Terminal 1, SCR2 is triggered in the same way as SCR1. The DDSCR is off when the circuit works under normal conditions. So the DDSCR can provide dual-directional protection, and possesses snapback characteristics against both positive and negative ESD stresses.

Fig.1 Cross section of conventional DDSCR

2 NMOS/PMOS Modified DDSCRs
2.1 NMOS Modified DDSCR
The trigger voltage of a conventional DDSCR is basically determined by the avalanche breakdown voltage of N-well/P-well, which is usually too high to prevent ESD damage to a thin gate oxide in the 0.18μm RF CMOS process. The triggering voltage can be reduced significantly by changing the triggering mechanism. In this paper, we propose an NMOS-modified DDSCR (NMDDSCR) to reduce the triggering voltage, as shown in Fig.2, where an extra mask, the T-well, is used to isolate the NMOS from the substrate bias condition in the RF CMOS process. When a positive ESD stress is applied to Terminal 1, the channel inversion region of NMOS 1 can be formed. The large current can then pass through the channel of NMOS 1 to NMOS 2, and NMOS 1 serves as a resistor. The GGNMOS consisting of NMOS 2 is the triggering component of this proposed device. As a result, the trigger voltage is decreased to about 7 V in the 0.18μm RF CMOS process, much lower than that of a conventional DDSCR.

Fig.2 Cross section of NMDDSCR

When the GGNMOS is turned on, the SCR consisting of T1 and T2 is triggered. A low-resistance path is then formed to shunt the huge current and provide a clamp for the I/O PAD. When a negative ESD stress is applied to Terminal 1, the NMDDSCR can be triggered in the same way due to its symmetric structure. The NMDDSCR is off when the circuit works under normal conditions. According to the above analysis, the trigger voltage of the NMDDSCR is mainly determined by that of the embedded GGNMOS: the triggering mode has changed from the avalanche breakdown between N-well and P-well to that of the embedded GGNMOS.
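Restating the conventional DDSCR turn-on condition described above in symbols (R4 and the 0.7 V threshold come from the text; the current name I_gen is an assumed label for the avalanche-generated current): the device snaps back once the current flowing through the parasitic well resistance raises the base-emitter potential of T3 past its turn-on value,

V_{BE,T3} = I_{gen} \cdot R_4 > 0.7\,\mathrm{V}

which is why the trigger voltage of the conventional device is pinned to the N-well/P-well avalanche breakdown, and why the modified devices lower it by substituting a GGNMOS that conducts at a smaller terminal voltage.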
Chinese-English foreign literature translation: Effects of Working Capital Management on SME Profitability

The corporate finance literature has traditionally focused on the study of long-term financial decisions. Researchers have particularly offered studies analyzing investments, capital structure, dividends or company valuation, among other topics. But the investment that firms make in short-term assets, and the resources used with maturities of under one year, represent the main share of items on a firm's balance sheet. In fact, in our sample the current assets of small and medium-sized Spanish firms represent 69.48 percent of their assets, and at the same time their current liabilities represent more than 52.82 percent of their liabilities.

Working capital management is important because of its effects on the firm's profitability and risk, and consequently its value (Smith, 1980). On the one hand, maintaining high inventory levels reduces the cost of possible interruptions in the production process, or of loss of business due to the scarcity of products, reduces supply costs, and protects against price fluctuations, among other advantages (Blinder and Maccini, 1991). On the other, granting trade credit favors the firm's sales in various ways. Trade credit can act as an effective price cut (Brennan, Maksimovic and Zechner, 1988; Petersen and Rajan, 1997), incentivizes customers to acquire merchandise at times of low demand (Emery, 1987), allows customers to check that the merchandise they receive is as agreed (quantity and quality) and to ensure that the services contracted are carried out (Smith, 1987), and helps firms to strengthen long-term relationships with their customers (Ng, Smith and Smith, 1999). However, firms that invest heavily in inventory and trade credit can suffer reduced profitability. Thus, the greater the investment in current assets, the lower the risk, but also the lower the profitability obtained.

On the other hand, trade credit is a spontaneous source of financing that reduces the amount required to finance the sums tied up in the inventory and customer accounts. But we should bear in mind that financing from suppliers can have a very high implicit cost if early payment discounts are available. In fact, the opportunity cost may exceed 20 percent, depending on the discount percentage and the discount period granted (Wilner, 2000; Ng, Smith and Smith, 1999). In this respect, previous studies have analyzed the high cost of trade credit, and find that firms finance themselves with seller credit when they do not have other more economic sources of financing available (Petersen and Rajan, 1994 and 1997).

Decisions about how much to invest in the customer and inventory accounts, and how much credit to accept from suppliers, are reflected in the firm's cash conversion cycle, which represents the average number of days between the date when the firm must start paying its suppliers and the date when it begins to collect payments from its customers. Some previous studies have used this measure to analyze whether shortening the cash conversion cycle has positive or negative effects on the firm's profitability.

Specifically, Shin and Soenen (1998) analyze the relation between the cash conversion cycle and profitability for a sample of firms listed on the US stock exchange during the period 1974-1994. Their results show that reducing the cash conversion cycle to a reasonable extent increases firms' profitability.
More recently, Deloof (2003) analyzes a sample of large Belgian firms during the period 1992-1996. His results confirm that Belgian firms can improve their profitability by reducing the number of days accounts receivable are outstanding and reducing inventories. Moreover, he finds that less profitable firms wait longer to pay their bills.

These previous studies have focused their analysis on larger firms. However, the management of current assets and liabilities is particularly important in the case of small and medium-sized companies. Most of these companies' assets are in the form of current assets. Also, current liabilities are one of their main sources of external finance, in view of their difficulties in obtaining funding in the long-term capital markets (Petersen and Rajan, 1997) and the financing constraints that they face (Whited, 1992; Fazzari and Petersen, 1993). In this respect, Elliehausen and Wolken (1993), Petersen and Rajan (1997) and Danielson and Scott (2000) show that small and medium-sized US firms use vendor financing when they have run out of debt. Thus, efficient working capital management is particularly important for smaller companies (Peel and Wilson, 1996).

In this context, the objective of the current work is to provide empirical evidence about the effects of working capital management on profitability for a panel made up of 8,872 SMEs during the period 1996-2002. This work contributes to the literature in two ways. First, no previous such evidence exists for the case of SMEs. We use a sample of Spanish SMEs that operate within the so-called continental model, which is characterized by its less developed capital markets (La Porta, López-de-Silanes, Shleifer, and Vishny, 1997), and by the fact that most resources are channeled through financial intermediaries (Pampillón, 2000). All this suggests that Spanish SMEs have fewer alternative sources of external finance available, which makes them more dependent on short-term finance in general, and on trade credit in particular. As Demirguc-Kunt and Maksimovic (2002) suggest, firms operating in countries with more developed banking systems grant more trade credit to their customers, and at the same time they receive more finance from their own suppliers. The second contribution is that, unlike the previous studies by Shin and Soenen (1998) and Deloof (2003), in the current work we have conducted tests robust to the possible presence of endogeneity problems. The aim is to ensure that the relationships found in the analysis carried out are due to the effects of the cash conversion cycle on corporate profitability and not vice versa.

Our findings suggest that managers can create value by reducing their firm's number of days accounts receivable and inventories. Similarly, shortening the cash conversion cycle also improves the firm's profitability.

We obtained the data used in this study from the AMADEUS database. This database was developed by Bureau van Dijk, and contains financial and economic data on European companies. The sample comprises small and medium-sized firms from Spain. The selection of SMEs was carried out according to the requirements established by the European Commission's recommendation 96/280/CE of 3 April, 1996, on the definition of small and medium-sized firms.
Specifically, we selected those firms meeting the following criteria for at least three years: a) have fewer than 250 employees; b) turn over less than €40 million; and c) possess less than €27 million of total assets.

In addition to the application of those selection criteria, we applied a series of filters. Thus, we eliminated the observations of firms with anomalies in their accounts, such as negative values in their assets, current assets, fixed assets, liabilities, current liabilities, capital, depreciation, or interest paid. We removed observations of entry items from the balance sheet and profit and loss account exhibiting signs that were contrary to reasonable expectations. Finally, we eliminated 1 percent of the extreme values presented by several variables. As a result of applying these filters, we ended up with a sample of 38,464 observations.

In order to introduce the effect of the economic cycle on the levels invested in working capital, we obtained information about the annual GDP growth in Spain from Eurostat.

In order to analyze the effects of working capital management on the firm's profitability, we used the return on assets (ROA) as the dependent variable. We defined this variable as the ratio of earnings before interest and tax to assets.

With regard to the independent variables, we measured working capital management by using the number of days accounts receivable, the number of days of inventory, and the number of days accounts payable. In this respect, the number of days accounts receivable (AR) is calculated as 365 × [accounts receivable/sales]. This variable represents the average number of days that the firm takes to collect payments from its customers. The higher the value, the higher its investment in accounts receivable.

We calculated the number of days of inventory (INV) as 365 × [inventories/purchases]. This variable reflects the average number of days of stock held by the firm. Longer storage times represent a greater investment in inventory for a particular level of operations.

The number of days accounts payable (AP) reflects the average time it takes firms to pay their suppliers. We calculated this as 365 × [accounts payable/purchases]. The higher the value, the longer firms take to settle their payment commitments to their suppliers.

Considering these three periods jointly, we estimated the cash conversion cycle (CCC). This variable is calculated as the number of days accounts receivable plus the number of days of inventory minus the number of days accounts payable. The longer the cash conversion cycle, the greater the net investment in current assets, and hence the greater the need for financing of current assets.

Together with these variables, we introduced as control variables the size of the firm, the growth in its sales, and its leverage. We measured the size (SIZE) as the logarithm of assets, the sales growth (SGROW) as (Sales1 - Sales0)/Sales0, and the leverage (DEBT) as the ratio of debt to liabilities. Deloof (2003), in his study of large Belgian firms, also considered the ratio of fixed financial assets to total assets as a control variable, since for some firms in his study such assets are a significant part of total assets. However, our study focuses on SMEs, whose fixed financial assets are less important. In fact, companies in our sample invest little in fixed financial assets (a mean of 3.92 percent, but a median of 0.05 percent).
Nevertheless, the results remain unaltered when we include this variable. Furthermore, and since good economic conditions tend to be reflected in a firm's profitability, we controlled for the evolution of the economic cycle using the variable GDPGR, which measures annual GDP growth.

Current assets and liabilities have a series of distinct characteristics according to the sector of activity in which the firm operates. Thus, Table I reports the return on assets and the number of days accounts receivable, days of inventory, and days accounts payable by sector of activity. The mining industry and the services sector are the two sectors with the highest return on assets, with a value of 10 percent. Firms dedicated to agriculture, trade (wholesale or retail), transport and public services are some way behind, at 7 percent.

With regard to the average periods by sector, we find, as we would expect, that firms dedicated to the retail trade, with an average period of 38 days, take the least time to collect payments from their customers. Construction sector firms grant their customers the longest period in which to pay: more than 145 days. Next come mining sector firms, with a number of days accounts receivable of 116 days. We also find that inventory is stored longest in agriculture, while stocks are held for the shortest time in the transport and public services sector. In relation to the number of days accounts payable, retailers (56 days) followed by wholesalers (77 days) pay their suppliers earliest. Firms are much slower in the construction and mining sectors, taking more than 140 days on average to pay their suppliers. However, as we have mentioned, these firms also grant their own customers the most time to pay. Considering all the average periods together, we note that the cash conversion cycle is negative in only one sector: transport and public services. This is explained by the short storage times habitual in this sector. Agricultural and manufacturing firms take the longest time to generate cash (95 and 96 days, respectively), and hence need the most resources to finance their operational funding requirements.

Table II offers descriptive statistics for the variables used in the sample as a whole. These are generally small firms, with mean assets of more than €6 million; their return on assets is around 8 percent; their number of days accounts receivable is around 96 days; and their number of days accounts payable is very similar, around 97 days. The sample firms have also seen their sales grow by almost 13 percent annually on average, and 24.74 percent of their liabilities is taken up by debt. In the period analyzed (1996-2002), GDP grew at an average rate of 3.66 percent in Spain.

Source: Pedro Juan García-Teruel and Pedro Martínez-Solano, 2006. "Effects of Working Capital Management on SME Profitability". International Journal of Managerial Finance, vol. 3, issue 2, April, pages 164-167.

Translation: Effects of Working Capital Management on SME Profitability. The corporate finance literature has traditionally concentrated on long-term financial decisions: researchers have examined investment analysis, capital structure, dividend policy, and firm valuation in detail. Yet the short-term assets formed by a firm's investments, and the resources it employs that mature within one year, account for a major share of the items on its balance sheet.
Foreign literature translation. Original: How a Garbage Collector Works in the Java Language

If you come from a programming language where allocating objects on the heap is expensive, you may naturally assume that Java's scheme of allocating everything (except primitives) on the heap is also expensive. However, it turns out that the garbage collector can have a significant impact on increasing the speed of object creation. This might sound a bit odd at first, that storage release affects storage allocation, but it's the way some JVMs work, and it means that allocating storage for heap objects in Java can be nearly as fast as creating storage on the stack in other languages.

For example, you can think of the C++ heap as a yard where each object stakes out its own piece of turf. This real estate can become abandoned sometime later and must be reused. In some JVMs, the Java heap is quite different; it's more like a conveyor belt that moves forward every time you allocate a new object. This means that object storage allocation is remarkably rapid. The "heap pointer" is simply moved forward into virgin territory, so it's effectively the same as C++'s stack allocation. (Of course, there's a little extra overhead for bookkeeping, but it's nothing like searching for storage.)

You might observe that the heap isn't in fact a conveyor belt, and if you treat it that way, you'll start paging memory, moving it on and off disk so that you can appear to have more memory than you actually do. Paging significantly impacts performance. Eventually, after you create enough objects, you'll run out of memory. The trick is that the garbage collector steps in, and while it collects the garbage it compacts all the objects in the heap so that you've effectively moved the "heap pointer" closer to the beginning of the conveyor belt and farther away from a page fault. The garbage collector rearranges things and makes it possible for the high-speed, infinite-free-heap model to be used while allocating storage.

To understand garbage collection in Java, it's helpful to learn how garbage-collection schemes work in other systems. A simple but slow garbage-collection technique is called reference counting. This means that each object contains a reference counter, and every time a reference is attached to that object, the reference count is increased. Every time a reference goes out of scope or is set to null, the reference count is decreased. Thus, managing reference counts is a small but constant overhead that happens throughout the lifetime of your program. The garbage collector moves through the entire list of objects, and when it finds one with a reference count of zero it releases that storage (however, reference-counting schemes often release an object as soon as the count goes to zero). The one drawback is that if objects circularly refer to each other, they can have nonzero reference counts while still being garbage. Locating such self-referential groups requires significant extra work for the garbage collector. Reference counting is commonly used to explain one kind of garbage collection, but it doesn't seem to be used in any JVM implementations.

In faster schemes, garbage collection is not based on reference counting. Instead, it is based on the idea that any non-dead object must ultimately be traceable back to a reference that lives either on the stack or in static storage. The chain might go through several layers of objects.
Thus, if you start in the stack and in the static storage area and walk through all the references, you'll find all the live objects. For each reference that you find, you must trace into the object that it points to and then follow all the references in that object, tracing into the objects they point to, and so on, until you've moved through the entire web that originated with the reference on the stack or in static storage. Each object that you move through must still be alive. Note that there is no problem with detached self-referential groups: these are simply not found, and are therefore automatically garbage.

In the approach described here, the JVM uses an adaptive garbage-collection scheme, and what it does with the live objects that it locates depends on the variant currently being used. One of these variants is stop-and-copy. This means that, for reasons that will become apparent, the program is first stopped (this is not a background collection scheme). Then, each live object is copied from one heap to another, leaving behind all the garbage. In addition, as the objects are copied into the new heap, they are packed end-to-end, thus compacting the new heap (and allowing new storage to simply be reeled off the end as previously described).

Of course, when an object is moved from one place to another, all references that point at the object must be changed. The reference that goes from the heap or the static storage area to the object can be changed right away, but there can be other references pointing to this object that will be encountered later during the "walk." These are fixed up as they are found (you could imagine a table that maps old addresses to new ones).

There are two issues that make these so-called "copy collectors" inefficient. The first is the idea that you have two heaps and you slosh all the memory back and forth between these two separate heaps, maintaining twice as much memory as you actually need. Some JVMs deal with this by allocating the heap in chunks as needed and simply copying from one chunk to another.

The second issue is the copying process itself. Once your program becomes stable, it might be generating little or no garbage. Despite that, a copy collector will still copy all the memory from one place to another, which is wasteful. To prevent this, some JVMs detect that no new garbage is being generated and switch to a different scheme (this is the "adaptive" part). This other scheme is called mark-and-sweep, and it's what earlier versions of Sun's JVM used all the time. For general use, mark-and-sweep is fairly slow, but when you know you're generating little or no garbage, it's fast.

Mark-and-sweep follows the same logic of starting from the stack and static storage, and tracing through all the references to find live objects. However, each time it finds a live object, that object is marked by setting a flag in it, but the object isn't collected yet. Only when the marking process is finished does the sweep occur. During the sweep, the dead objects are released. However, no copying happens, so if the collector chooses to compact a fragmented heap, it does so by shuffling objects around. "Stop-and-copy" refers to the idea that this type of garbage collection is not done in the background; instead, the program is stopped while the garbage collection occurs.
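To make the mark-and-sweep idea concrete, here is a deliberately simplified Java sketch. It models a toy object graph (the ToyObject and ToyMarkAndSweep names are invented for illustration; a real JVM does this in native runtime code on object headers, not in Java, and avoids deep recursion). The mark phase traces everything reachable from a set of roots; the sweep phase releases whatever was never marked:

import java.util.ArrayList;
import java.util.List;

class ToyObject {
    boolean marked;                                   // the "flag" set during marking
    List<ToyObject> references = new ArrayList<>();   // outgoing references
}

public class ToyMarkAndSweep {
    List<ToyObject> heap  = new ArrayList<>();        // every allocated object
    List<ToyObject> roots = new ArrayList<>();        // stand-ins for stack/static refs

    // Mark phase: trace from the roots through the whole web of references.
    void mark(ToyObject obj) {
        if (obj == null || obj.marked) return;        // already visited (handles cycles)
        obj.marked = true;
        for (ToyObject ref : obj.references) mark(ref);
    }

    // Sweep phase: anything unmarked is unreachable, including detached
    // self-referential groups, and is released.
    void sweep() {
        heap.removeIf(obj -> !obj.marked);
        for (ToyObject obj : heap) obj.marked = false; // reset flags for the next cycle
    }

    void collect() {
        for (ToyObject root : roots) mark(root);
        sweep();
    }

    public static void main(String[] args) {
        ToyMarkAndSweep gc = new ToyMarkAndSweep();
        ToyObject a = new ToyObject(), b = new ToyObject(), c = new ToyObject();
        a.references.add(b);   // a -> b is reachable from the root a
        c.references.add(c);   // c refers to itself but is detached garbage
        gc.heap.add(a); gc.heap.add(b); gc.heap.add(c);
        gc.roots.add(a);
        gc.collect();
        System.out.println("live objects: " + gc.heap.size()); // prints 2
    }
}

Note how the self-referential object c is collected without any special treatment: it is simply never reached during marking, which is exactly the advantage over reference counting described above.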
In the Sun literature you'll find many references to garbage collection as a low-priority background process, but it turns out that garbage collection was not implemented that way in earlier versions of the Sun JVM. Instead, the Sun garbage collector stopped the program when memory got low. Mark-and-sweep also requires that the program be stopped.

As previously mentioned, in the JVM described here memory is allocated in big blocks. If you allocate a large object, it gets its own block. Strict stop-and-copy requires copying every live object from the source heap to a new heap before you can free the old one, which translates to lots of memory. With blocks, the garbage collection can typically copy objects to dead blocks as it collects. Each block has a generation count to keep track of whether it's alive. In the normal case, only the blocks created since the last garbage collection are compacted; all other blocks get their generation count bumped if they have been referenced from somewhere. This handles the normal case of lots of short-lived temporary objects. Periodically, a full sweep is made: large objects are still not copied (they just get their generation count bumped), and blocks containing small objects are copied and compacted.

The JVM monitors the efficiency of garbage collection, and if it becomes a waste of time because all objects are long-lived, it switches to mark-and-sweep. Similarly, the JVM keeps track of how successful mark-and-sweep is, and if the heap starts to become fragmented, it switches back to stop-and-copy. This is where the "adaptive" part comes in, so you end up with a mouthful: "adaptive generational stop-and-copy mark-and-sweep."

There are a number of additional speedups possible in a JVM. An especially important one involves the operation of the loader and what is called a just-in-time (JIT) compiler. A JIT compiler partially or fully converts a program into native machine code so that it doesn't need to be interpreted by the JVM and thus runs much faster. When a class must be loaded (typically, the first time you want to create an object of that class), the .class file is located, and the byte codes for that class are brought into memory. At this point, one approach is to simply JIT compile all the code, but this has two drawbacks: it takes a little more time, which, compounded throughout the life of the program, can add up; and it increases the size of the executable (byte codes are significantly more compact than expanded JIT code), and this might cause paging, which definitely slows down a program. An alternative approach is lazy evaluation, which means that the code is not JIT compiled until necessary. Thus, code that never gets executed might never be JIT compiled. The Java HotSpot technologies in recent JDKs take a similar approach by increasingly optimizing a piece of code each time it is executed, so the more the code is executed, the faster it gets.

Translation: How a Garbage Collector Works in the Java Language. If you have used a programming language in which allocating objects on the heap is expensive, you may naturally assume that Java's scheme of allocating memory on the heap for everything (except primitives) is also expensive.
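A footnote to the JIT discussion in the article above: the on-demand principle that lazy JIT compilation exploits is easy to observe in plain Java through lazy class loading. In this small demonstration (class names invented for illustration), the static initializer of Lazy runs only when the class is first actively used, not when the program starts:

public class LazyLoadingDemo {
    static class Lazy {
        // Runs once, at the first active use of the class.
        static { System.out.println("Lazy class loaded and initialized"); }
        static int answer() { return 42; }
    }

    public static void main(String[] args) {
        System.out.println("Program started; Lazy not initialized yet");
        // First active use triggers loading/initialization of Lazy here.
        System.out.println("answer = " + Lazy.answer());
    }
}

Code that is never executed may thus never be loaded, and under a lazy JIT strategy never compiled to native code either.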
PID controller
From Wikipedia, the free encyclopedia

A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly.

The PID controller calculation (algorithm) involves three separate parameters: the proportional, the integral, and the derivative values. The proportional value determines the reaction to the current error, the integral determines the reaction based on the sum of recent errors, and the derivative determines the reaction to the rate at which the error has been changing. The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve or the power supply of a heating element.

By "tuning" the three constants in the PID controller algorithm, the PID can provide control action designed for specific process requirements. The response of the controller can be described in terms of the responsiveness of the controller to an error, the degree to which the controller overshoots the setpoint, and the degree of system oscillation. Note that the use of the PID algorithm for control does not guarantee optimal control of the system or system stability.

Some applications may require using only one or two modes to provide the appropriate system control. This is achieved by setting the gain of undesired control outputs to zero. A PID controller will be called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are particularly common, since derivative action is very sensitive to measurement noise, and the absence of an integral value may prevent the system from reaching its target value due to the control action.

[Figure: a block diagram of a PID controller]

Note: Due to the diversity of the field of control theory and application, many naming conventions for the relevant variables are in common use.

1. Control loop basics

A familiar example of a control loop is the action taken to keep one's shower water at the ideal temperature, which typically involves the mixing of two process streams, cold and hot water. The person feels the water to estimate its temperature. Based on this measurement they perform a control action: use the cold water tap to adjust the process. The person would repeat this input-output control loop, adjusting the hot water flow until the process temperature stabilized at the desired value.

Feeling the water temperature is taking a measurement of the process value or process variable (PV). The desired temperature is called the setpoint (SP). The output from the controller and input to the process (the tap position) is called the manipulated variable (MV). The difference between the measurement and the setpoint is the error (e): too hot or too cold, and by how much.

As a controller, one decides roughly how much to change the tap position (MV) after one determines the temperature (PV), and therefore the error. This first estimate is the equivalent of the proportional action of a PID controller. The integral action of a PID controller can be thought of as gradually adjusting the temperature when it is almost right.
Derivative action can be thought of as noticing the water temperature is getting hotter or colder, and how fast, and taking that into account when deciding how to adjust the tap.

Making a change that is too large when the error is small is equivalent to a high-gain controller and will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, this control loop would be termed unstable, and the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. A human would not do this because we are adaptive controllers, learning from the process history; but PID controllers do not have the ability to learn and must be set up correctly. Selecting the correct gains for effective control is known as tuning the controller.

If a controller starts from a stable state at zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that impact on the process, and hence on the PV. Variables that impact on the process other than the MV are known as disturbances, and generally controllers are used to reject disturbances and/or implement setpoint changes. Changes in feed water temperature constitute a disturbance to the shower process.

In theory, a controller can be used to control any process which has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, speed, and practically every other variable for which a measurement exists. Automobile cruise control is an example of a process which utilizes automated control.

Due to their long history, simplicity, well-grounded theory, and simple setup and maintenance requirements, PID controllers are the controllers of choice for many of these applications.

2. PID controller theory

Note: This section describes the ideal parallel or non-interacting form of the PID controller. For other forms please see the section "Alternative nomenclature and PID forms".

The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). Hence:

    MV(t) = Pout + Iout + Dout

where Pout, Iout, and Dout are the contributions to the output from the PID controller from each of the three terms, as defined below.

2.1 Proportional term

The proportional term makes a change to the output that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain. The proportional term is given by:

    Pout = Kp × e(t)

where
Pout: proportional output
Kp: proportional gain, a tuning parameter
e: error = SP − PV
t: time or instantaneous time (the present)

[Figure: change of response for varying Kp]

A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive (or sensitive) controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. In the absence of disturbances, pure proportional control will not settle at its target value, but will retain a steady-state error that is a function of the proportional gain and the process gain.
Despite the steady-state offset, both tuning theory and industrial practice indicate that it is the proportional term that should contribute the bulk of the output change.

2.2 Integral term

The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. Summing the instantaneous error over time (integrating the error) gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain and added to the controller output. The magnitude of the contribution of the integral term to the overall control action is determined by the integral gain, Ki. The integral term is given by:

    Iout = Ki × ∫[0,t] e(τ) dτ

where
Iout: integral output
Ki: integral gain, a tuning parameter
e: error = SP − PV
τ: time in the past contributing to the integral response

[Figure: change of response for varying Ki]

The integral term (when added to the proportional term) accelerates the movement of the process towards the setpoint and eliminates the residual steady-state error that occurs with a proportional-only controller. However, since the integral term is responding to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (cross over the setpoint and then create a deviation in the other direction). For further notes regarding integral gain tuning and controller stability, see the section on loop tuning.

2.3 Derivative term

The rate of change of the process error is calculated by determining the slope of the error over time (i.e. its first derivative with respect to time) and multiplying this rate of change by the derivative gain Kd. The magnitude of the contribution of the derivative term to the overall control action is termed the derivative gain, Kd. The derivative term is given by:

    Dout = Kd × de(t)/dt

where
Dout: derivative output
Kd: derivative gain, a tuning parameter
e: error = SP − PV
t: time or instantaneous time (the present)

[Figure: change of response for varying Kd]

The derivative term slows the rate of change of the controller output, and this effect is most noticeable close to the controller setpoint. Hence, derivative control is used to reduce the magnitude of the overshoot produced by the integral component and to improve the combined controller-process stability. However, differentiation of a signal amplifies noise, and thus this term in the controller is highly sensitive to noise in the error term; it can cause a process to become unstable if the noise and the derivative gain are sufficiently large.

2.4 Summary

The outputs of the three terms, the proportional, the integral, and the derivative, are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is:

    u(t) = Kp × e(t) + Ki × ∫[0,t] e(τ) dτ + Kd × de(t)/dt

and the tuning parameters are:

Kp: proportional gain. Larger Kp typically means faster response, since the larger the error, the larger the proportional-term compensation. An excessively large proportional gain will lead to process instability and oscillation.

Ki: integral gain. Larger Ki implies steady-state errors are eliminated more quickly. The trade-off is larger overshoot: any negative error integrated during the transient response must be integrated away by positive error before steady state is reached.

Kd: derivative gain. Larger Kd decreases overshoot, but slows down transient response and may lead to instability due to signal-noise amplification in the differentiation of the error.
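As one concrete illustration, the parallel-form algorithm above can be discretized by accumulating the integral and differencing the error at a fixed sample time dt. The following minimal Java sketch shows one common way such a loop is written (the class name, gains, setpoint, and toy plant are invented for the example, not tuned values for any real process); Section 8.1 below refers to a simple software loop of this kind:

public class PidController {
    private final double kp, ki, kd;   // proportional, integral, derivative gains
    private double integral = 0.0;     // accumulated error (integral term)
    private double previousError = 0.0;

    public PidController(double kp, double ki, double kd) {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    // One controller step: u = Kp*e + Ki*integral(e) + Kd*de/dt
    public double update(double setpoint, double measured, double dt) {
        double error = setpoint - measured;            // e = SP - PV
        integral += error * dt;                        // rectangular integration
        double derivative = (error - previousError) / dt;
        previousError = error;
        return kp * error + ki * integral + kd * derivative;  // MV
    }

    public static void main(String[] args) {
        PidController pid = new PidController(2.0, 0.5, 0.1);  // example gains
        double pv = 0.0, sp = 1.0, dt = 0.1;
        for (int i = 0; i < 50; i++) {
            double mv = pid.update(sp, pv, dt);
            pv += (mv - pv) * dt;   // toy first-order plant, for demonstration only
            if (i % 10 == 0) System.out.printf("t=%.1f PV=%.3f%n", i * dt, pv);
        }
    }
}

Note that the integral accumulator here is left unclamped; Section 4 below describes the integral windup this can cause and the usual remedies.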
3. Loop tuning

If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e. its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Tuning a control loop is the adjustment of its control parameters (gain/proportional band, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response.

The optimum behavior on a process change or setpoint change varies depending on the application. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint. Generally, stability of response (the reverse of instability) is required, and the process must not oscillate for any combination of process conditions and setpoints. Some processes have a degree of non-linearity, and so parameters that work well at full-load conditions don't work when the process is starting up from no-load. This section describes some traditional manual methods for loop tuning.

There are several methods for tuning a PID loop. The most effective methods generally involve the development of some form of process model, then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively inefficient. The choice of method will depend largely on whether or not the loop can be taken "offline" for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.

Choosing a tuning method:
- Manual tuning. Advantages: no math required; online method. Disadvantages: requires experienced personnel.
- Ziegler–Nichols. Advantages: proven method; online method. Disadvantages: process upset, some trial-and-error, very aggressive tuning.
- Software tools. Advantages: consistent tuning; online or offline method; may include valve and sensor analysis; allows simulation before downloading. Disadvantages: some cost and training involved.
- Cohen–Coon. Advantages: good process models. Disadvantages: some math; offline method; only good for first-order processes.

3.1 Manual tuning

If the system must remain online, one tuning method is to first set the I and D values to zero. Increase the P until the output of the loop oscillates; then the P should be left set to approximately half of that value for a "quarter amplitude decay" type response. Then increase D until any offset is corrected in sufficient time for the process. However, too much D will cause instability. Finally, increase I, if required, until the loop is acceptably quick to reach its reference after a load disturbance. However, too much I will cause excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an "over-damped" closed-loop system is required, which will require a P setting significantly less than half that of the P setting causing oscillation.

3.2 Ziegler–Nichols method

Another tuning method is formally known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols. As in the method above, the I and D gains are first set to zero. The P gain is increased until it reaches the "critical gain" Kc, at which the output of the loop starts to oscillate.
Kc and the oscillation period Pc are used to set the gains. The classical Ziegler–Nichols settings are:
- P control: Kp = 0.50 × Kc.
- PI control: Kp = 0.45 × Kc, Ki = 1.2 × Kp / Pc.
- PID control: Kp = 0.60 × Kc, Ki = 2 × Kp / Pc, Kd = Kp × Pc / 8.

3.3 PID tuning software

Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages will gather the data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.

Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can literally take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values. Other formulas are available to tune the loop according to different performance criteria.

4. Modifications to the PID algorithm

The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form. One common problem resulting from the ideal PID implementations is integral windup. This can be addressed by:
- initializing the controller integral to a desired value;
- disabling the integral function until the PV has entered the controllable region;
- limiting the time period over which the integral error is calculated;
- preventing the integral term from accumulating above or below pre-determined bounds.

Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost, and wear leads to control degradation in the form of either stiction or a deadband in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.

The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous "step" increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change.

5. Limitations of PID control

While PID controllers are applicable to many control problems, they can perform poorly in some applications. PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or "hunt" about the control setpoint value. The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be "fed forward" and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output.
The PID controller can then be used primarily to respond to whatever difference or "error" remains between the setpoint (SP) and the actual value of the process variable (PV). Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response and stability.

For example, in most motion control systems, in order to accelerate a mechanical load under control, more force or torque is required from the prime mover, motor, or actuator. If a velocity-loop PID controller is being used to control the speed of the load and command the force or torque being applied by the prime mover, then it is beneficial to take the instantaneous acceleration desired for the load, scale that value appropriately, and add it to the output of the PID velocity-loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the prime mover regardless of the feedback value. The PID loop in this situation uses the feedback information to effect any increase or decrease of the combined output in order to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive, stable, and reliable control system.

Another problem faced with PID controllers is that they are linear. Thus, performance of PID controllers in non-linear systems (such as HVAC systems) is variable. Often PID controllers are enhanced through methods such as PID gain scheduling or fuzzy logic. Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance.

A problem with the derivative term is that small amounts of measurement or process noise can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. However, low-pass filtering and derivative control can cancel each other out, so reducing noise by instrumentation means is a much better choice. Alternatively, the differential band can be turned off in many systems with little loss of control. This is equivalent to using the PID controller as a PI controller.

6. Cascade control

One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. In cascade control there are two PIDs arranged with one PID controlling the setpoint of another. A PID controller acts as the outer-loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner-loop controller, which reads the output of the outer-loop controller as its setpoint, usually controlling a more rapidly changing parameter such as flow rate or acceleration. It can be proved mathematically that using a cascaded PID controller increases the working frequency of the controller and reduces the time constant of the object.

7. Physical implementation of PID control

In the early history of automatic process control, the PID controller was implemented as a mechanical device. These mechanical controllers used a lever, spring and a mass, and were often energized by compressed air.
These pneumatic controllers were once the industry standard.

Electronic analog controllers can be made from a solid-state or tube amplifier, a capacitor and a resistance. Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Nowadays, electronic controllers have largely been replaced by digital controllers implemented with microcontrollers or FPGAs.

Most modern PID controllers in industry are implemented in software in programmable logic controllers (PLCs) or as a panel-mounted digital controller. Software implementations have the advantage that they are relatively cheap and are flexible with respect to the implementation of the PID algorithm.

8. Alternative nomenclature and PID forms

8.1 Pseudocode

A simple software loop implementing the PID algorithm follows the pattern of the Java sketch given after Section 2.4 above: each sample period, compute the error, accumulate the integral, difference the error for the derivative term, and output the weighted sum of the three terms.

8.2 Ideal versus standard PID form

The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the "standard form". In this form the Kp gain is applied to the Iout and Dout terms as well, yielding:

    u(t) = Kp × ( e(t) + (1/Ti) × ∫[0,t] e(τ) dτ + Td × de(t)/dt )

where
Ti is the integral time
Td is the derivative time

In the ideal parallel form, shown in the controller theory section, the gain parameters are related to the parameters of the standard form through Ki = Kp/Ti and Kd = Kp × Td. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the least physical interpretation, and it is generally reserved for theoretical treatment of the PID controller. The "standard" form, despite being slightly more complex mathematically, is more common in industry.

8.3 Laplace form of the PID controller

Sometimes it is useful to write the PID regulator in Laplace transform form:

    G(s) = Kp + Ki/s + Kd × s

Having the PID controller written in Laplace form, together with the transfer function of the controlled system, makes it easy to determine the closed-loop transfer function of the system.

8.4 Series / interacting form

Another representation of the PID controller is the series, or "interacting" form. This form essentially consists of a PD and a PI controller in series, and it made early (analog) controllers easier to build. When controllers later became digital, many kept using the interacting form.

Translation: PID Controller. A proportional–integral–derivative controller (PID regulator) is a control loop, widely used as a feedback mechanism in industrial control systems.
Chinese-English Translation (this document contains the English original and the Chinese translation). Appendix 1: Translated text.

Effect of Heat Treatment on the Lipid Peroxide Content of Soymilk (Beany Flavor)

Beany flavor is a major reason why the flavor of soymilk is considered unsatisfactory. In order to minimize the beany flavor of soymilk, we studied the effect of heat treatment on lipid peroxides, an important factor influencing beany flavor. We also prepared various desserts from soymilk, using a heating step in the production process, and evaluated the effect of heating on them through sensory experiments.

The lipid peroxide content of soymilk prepared from soaked, swollen soybeans heat-treated for ten minutes at 75 °C and 80-90% relative humidity was much lower than that of soymilk prepared from lipoxygenase-deficient soybeans or from soybeans heat-treated at a moisture content of 14% or less. Moreover, soymilk made from soaked, swollen soybeans blanched in boiling water for 30 seconds had a lipid peroxide content comparable to that of soymilk made from lipoxygenase-deficient soybeans. The beany flavor of custard pudding, Bavarian cream, and tofu made from heat-treated soybeans was markedly improved.

Keywords: soybean; soymilk; lipid peroxide; beany flavor

Soybeans have long been synonymous with highly nutritious food and play a pivotal role in Japanese dietary culture. Recent studies have shown that soy protein lowers cholesterol (Anderson et al., 1995), that soy saponins have anticancer activity (Kennedy, 1995), and that soy isoflavones inhibit breast and prostate cancers (Peterson & Barnes, 1991; Peterson & Barnes, 1993) and help prevent osteoporosis (Tsuchida et al., 1999). Accordingly, processed foods made from soybeans are valuable as a dietary source of isoflavones.

Soybeans are used in many foods, including tofu, natto, miso, soy sauce, and soymilk. Soymilk, besides being consumed as a beverage, is widely used as an ingredient in desserts such as jellies and custard puddings. However, the distinctive beany flavor produced by lipoxygenase strongly affects consumer preference. Minimizing the beany flavor is therefore a key challenge if soymilk is to stand out and gain wide acceptance.

Several methods for inactivating lipoxygenase have been proposed: hot-water grinding (Echigo et al., 1991), in which soybeans are soaked in 70 °C water and then homogenized with 95 °C water; blanching in 99.3 °C hot water (赛斯 & 纳特, 1988); and microwave heating (Wang & Toledo, 1987).
Intelligent Healthcare Systems (graduation thesis Chinese-English materials: foreign literature translation)

Abstract

The field of healthcare has greatly benefited from advances in technology, particularly the development of intelligent healthcare systems. These systems utilize artificial intelligence (AI) to improve the quality and efficiency of healthcare services. This literature review aims to provide an overview of the current state of intelligent healthcare systems and their applications in the medical field.

Introduction

Key Features of Intelligent Healthcare Systems

1. Real-Time Monitoring: Intelligent healthcare systems facilitate real-time monitoring of patients' vital signs, allowing healthcare professionals to detect any abnormalities and intervene in a timely manner.

2. Predictive Analytics: By analyzing vast amounts of patient data, these systems can identify patterns and trends to predict potential health risks or deteriorations, enabling proactive interventions.

3. Personalized Medicine: Intelligent healthcare systems can utilize patient-specific data to provide personalized treatment plans, taking into account individual characteristics and medical history.

4. Remote Patient Management: Through the use of telemedicine technologies, intelligent healthcare systems enable remote patient monitoring and virtual consultations, enhancing access to healthcare services and reducing the need for in-person visits.

Applications of Intelligent Healthcare Systems

1. Chronic Disease Management: Intelligent healthcare systems can aid in the management of chronic conditions such as diabetes, cardiovascular diseases, and respiratory disorders. They provide patients with tools for self-management and assist healthcare professionals in monitoring disease progression.

2. Hospital Workflow Optimization: These systems can optimize hospital workflows by automating administrative tasks, streamlining patient admission and discharge processes, and improving resource allocation.
3. Drug Safety and Adherence: Intelligent healthcare systems can help prevent medication errors and improve patient adherence to prescribed treatments through reminders, alerts, and medication tracking.

4. Healthcare Data Analysis: By collecting and analyzing large volumes of healthcare data, these systems can provide valuable insights for medical research, disease surveillance, and public health planning.

Conclusion