外文资料翻译(原文和译文)_刘海平
- 格式:doc
- 大小:292.00 KB
- 文档页数:14
淮阴工学院毕业设计(论文)外文资料翻译
系部:计算机工程
专业:计算机科学与技术
姓名:刘海平
学号:10213120
外文出处:Digital Avionics Systems Conference, 2005. DASC 2005. The 24th
附件:1.外文资料翻译译文;2.外文原文。
注:请将该封面与附件装订成册。
附件1:外文资料翻译译文

解决嵌入式OpenGL难题-使标准、工具和API在高度嵌入式和安全关键环境中协同工作

摘要
嵌入式图形人机界面(HMI)正越来越多地使用OpenGL渲染API作为定义和绘制屏幕图形的标准。硬件加速图形子系统和商用驱动软件的出现为这一趋势提供了支持。
与此同时,嵌入式图形工具和软件厂商已经以各种形式采用OpenGL作为它们所支持的渲染API。
然而,对于高度嵌入式和安全关键环境而言,完整的OpenGL并不是一个足够精简的标准。
为了实现低成本/低功耗的硬件方案,并降低驱动程序的复杂性以通过安全关键认证,必须采用OpenGL的子集。
近年来,移动图形产业已从行业联盟定义合适OpenGL子集的努力中获益。
这些子集(即profile)以多种版本存在,旨在便于为差异很大的各类嵌入式市场开发应用,从手机图形到安全关键的高性能嵌入式图形子系统。
显然,这类定义明确的标准能够并且将会对嵌入式和安全关键图形行业产生有益影响,为HMI应用带来前所未有的可移植性和简洁性;尚不明朗的是图形工具和软件厂商对新标准的支持程度。
这对最终开发者而言风险很高,因为依赖那些不受支持或难以认证的API能力,可能给系统集成和认证带来严重隐患。
本文在工具选择、应对厂商和开发者提出的标准要求、利用OpenGL标准实现用户界面和字体渲染的途径等方面给出建议,以确保HMI软件的成功开发和广泛部署。
背景
图形处理单元(GPU)
在过去10年内,面向平台嵌入式系统的显示渲染技术经历了根本性的变化。
这些变化主要由两股并行的技术推动力所驱动:平板显示硬件,以及使用OpenGL的高级光栅化嵌入式图形系统(EGS)。
平板显示使显示分辨率得以提高,同时仍能满足嵌入式系统对尺寸和重量的限制。
基于光栅的EGS,特别是建立在商用OpenGL硬件之上的EGS,为支撑更高的分辨率提供了足够的马力。
那些渲染引擎或图形芯片是处理图形和创建或渲染图形的移动处理设备的一部分。
在桌面系统中,硬件渲染引擎占据主导地位,其结果是大多数系统中存在两个相互独立的高性能处理器:
一个是一般计算,一个是处理和显示图形。
GPU的发展在很大程度上由对更好的游戏性能的追求所驱动,同时也受到对更好的工作站和桌面图形处理需求的推动。
GPU技术在嵌入式系统中找到了用武之地,提供了传统(legacy)图形显示系统中难以甚至不可能实现的高级显示能力。
这些嵌入式GPU是桌面或笔记本显卡的嵌入式变体,具备GPU、板载纹理存储器,以及硬件加速的光照、变换和光栅化功能。
大型的桌面式图形公司提供有特色的硬件,在军事方面得到了广泛的使用。
图1给出了一个嵌入式GPU的示例。目前部署在嵌入式系统中的大多数GPU技术,都源自桌面或笔记本平台的图形加速器。
仅GPU本身的功耗就可达5至15瓦。
只要有配套的驱动软件,这些设计就能在嵌入式环境中提供与桌面或笔记本设备相当的图形能力。
OpenGL是迄今为止提供这类驱动时最常用的标准。
作为嵌入式标准的OpenGL
GPU的出现伴随着各种新标准的广泛使用,这些标准旨在方便开发能够充分利用硬件优势的图形应用程序。
OpenGL是一个比较底层的应用程序接口,能提供支持2D和3D的几何绘图的函数的软件接口。
OpenGL支持的主要功能包括:基于矩阵的几何变换、视口与裁剪区域、带纹理的几何体、图形管线状态管理,以及几何缓存。这些功能由GPU实现的一条逻辑管线来支撑。
该管线接收以三角形、点和线形式给出的几何规范,连同变换、裁剪、颜色和纹理信息,将几何体转换为写入帧缓存的最终绘制形式。
在较新的GPU中,代表标准几何处理流程的固定功能管线已被顶点着色器和像素着色器操作所增强,使管线的各个环节具有更强的可编程性。
像OpenGL这样的API,连同Microsoft Direct3D等其他流行标准,为在GPU管线中绘制图形提供了软件接口。
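下面是一个示意性的C语言例子(并非论文原文内容),用来说明上述固定功能管线的典型用法。其中假设窗口系统(例如GLX或EGL)已经创建好OpenGL 1.x上下文,函数名draw_frame为本示例所设;代码通过矩阵变换、视口设置以及glBegin/glEnd方式提交一个三角形,由管线完成变换、裁剪和光栅化并写入帧缓存,缓冲区交换由窗口系统层完成(未展示)。

#include <GL/gl.h>

/* 示例函数:绘制一个经过固定功能管线处理的三角形 */
void draw_frame(int width, int height)
{
    glViewport(0, 0, width, height);                 /* 视口变换 */
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);                     /* 基于矩阵的几何变换:投影矩阵 */
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);                      /* 模型视图矩阵 */
    glLoadIdentity();

    glBegin(GL_TRIANGLES);                           /* 以三角形形式提交几何规范 */
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();                                         /* 顶点进入管线并最终写入帧缓存 */
}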
OpenGL是一个经过多年发展的标准的应用程序接口,最初是通过图形工业先锋Silicon Graphics TM的努力。
在模拟、游戏、计算机辅助制造和专业图形处理市场方面,已经获得了广泛的应用。
此外它还成为嵌入式应用的事实上的标准,在许多平台上是非常有用的。
OpenGL还旨在为多种图形渲染设备提供统一的标准接口,使应用程序可以放心地运行在不同厂商的图形芯片上。
凭借其强大的功能和跨平台特性,OpenGL已成为嵌入式航空电子市场中的一项关键标准。
作为一个应用程序接口,OpenGL经历了15年的发展。
随着更多类型的应用试图利用其能力,该标准的版本不断演进,给标准带来了更多的调用接口和更高的复杂性。
OpenGL通常采用驱动程序体系结构来实现。
OpenGL给绘图设备封装了一个底层接口,给那些需要使用硬件特性的应用程序提供一个高级的接口。
当OpenGL在发展时,它的驱动程序也随着发展。
面向桌面高端图形硬件的现代OpenGL驱动,其代码量很容易达到数百万行。
嵌入式变体则可以小得多,具体取决于它们所支持的OpenGL子集。
OpenGL子集在下一个移动GPU技术的浪潮中是一个关键的技术。
它的目标是更多的集成芯片市场。
OpenGL与移动GPU
最近几年,采用低功耗GPU并具备高级渲染技术的移动计算设备开始出现在市场上。
这些设备是当前移动技术的主要发展领域之一,其目标市场包括手机、移动游戏系统、PDA、汽车、医疗以及其他深度嵌入式应用。
当移动游戏发掘出其潜在的市场时,移动设备制造商开始从事GPU驱动程序的开发。
像OpenGL这样的标准图形API往往过于庞大或实现代价过高,难以在这些设备上完整实现,因此商用现货(COTS)驱动厂商和设备制造商依赖于该API的子集。
这些子集使制造商能够提供面向特定市场的能力。
应用程序必须针对这些较小的子集来编写,
这通常意味着它们难以从一个子集环境轻易移植到另一个子集环境。
由定义明确的API子集所支撑的移动GPU技术,促成了片上系统(SoC)设计的出现。
在SoC设计中,GPU与处理器相结合,在单颗芯片上封装出完整的通用计算与数字媒体处理核心。
这样一个设计能被集成到一个很小的、低廉的应用上,比如:手提式医疗设备,电话,自动通信显示等等。
移动的GPU在嵌入式和关健的安全市场有很明显的应用。
通常,功耗、重量和设计的简洁性是系统设计中的关键因素。
移动GPU技术将在这些领域有很大的影响。
随着全新类别的设备不断涌现并提供OpenGL能力,必须正视面向这些设备的安全关键应用对OpenGL的潜在使用问题。
建议
拥抱API标准化
OpenGL对安全关键系统而言是一个很好的标准,也应当被采用,但作为一个API它已变得十分庞大。
标准化的子集能对OpenGL在关键安全和高度嵌入的环境中的应用起到一个关键作用。
Khronos Group制定的OpenGL ES标准,是对OpenGL专用子集进行标准化的一项重要进展。
Khronos是一个将OpenGL融入到嵌入式和多媒体市场的一个工业联盟。
Khronos 被所有的大型的图形芯片制造商、移动电话制造商和移动软件发展协会所支持。
OpenGL ES是OpenGL的一个定义明确的子集,旨在为包括移动设备在内的苛刻嵌入式平台提供强大的高级图形能力,
同时去除冗余功能,强调简洁和小的资源占用。
由于完整的OpenGL规范十分庞大,而支持完整规范的图形子系统又非常消耗资源,因此需要一个定义明确的子集,为嵌入式应用提供目标化的渲染能力。
OpenGL ES正是这样的子集。
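作为对照,下面给出一个示意性的C语言片段(非论文原文内容),假设已通过EGL等方式创建好OpenGL ES 1.x Common profile上下文,函数名draw_frame_es为本示例所设:由于OpenGL ES子集不包含glBegin/glEnd立即模式,前面的三角形在这里改用顶点数组方式提交。

#include <GLES/gl.h>

static const GLfloat tri_vertices[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

void draw_frame_es(int width, int height)
{
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f); /* ES中的浮点版本为glOrthof */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnableClientState(GL_VERTEX_ARRAY);            /* 几何以数据数组的形式提供 */
    glVertexPointer(2, GL_FLOAT, 0, tri_vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}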
工具与API
OpenGL图形开发者通常采用几种策略来成功编写绘制屏幕的OpenGL软件,包括基于工具的开发和手写代码。
这两种途径都可以利用软件模块化,将较高层的接口封装起来以便在系统中复用,
例如数字地图库、用户界面库或字体渲染库等SDK。
这些工具和SDK可能会对底层OpenGL驱动和图形硬件所支持的特性做出假设。
如果应用的目标是可移植到驱动能力可能受限的环境中,那么这些假设几乎肯定需要重新审视。
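下面是一个示意性的C语言片段(非论文原文内容,假设OpenGL上下文已创建,函数名driver_supports为本示例所设),演示如何在运行时查询驱动的版本与扩展字符串,用来核实工具或SDK对底层驱动能力所作的假设:

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

int driver_supports(const char *extension)
{
    const char *version    = (const char *)glGetString(GL_VERSION);
    const char *extensions = (const char *)glGetString(GL_EXTENSIONS);

    printf("GL_VERSION: %s\n", version ? version : "(null)");

    /* 简单的子串查找;实际工程中应按空格逐项比较,避免扩展名前缀的误匹配 */
    return extensions != NULL && strstr(extensions, extension) != NULL;
}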
许多工具使用代码生成来操作。
显示定义被放入了用户接口中,并且这些工具使用代码生成来创建OpenGL软件执行显示操作。
这样OpenGL软件能包含成千上万的直线代码,其中大部分是几何定义。
除非代码生成足够灵活,能够顾及其输出可能需要支持的所有OpenGL子集,否则这些代码在各种OpenGL环境中的可用性将受到限制。
除了工具输出限制其灵活性外,如果一个可重用的OpenGL库已经被写入一个特定结构中,比如OpenGL显示列表或标准的glBegin-glEnd范例,它在驱动程序不支持这个范例的平台上将是无用的。
有些技术途径能被用来考虑减轻这些问题。
结论嵌入式OpenGL是一个关键技术,它已经并且将继续被用在关键安全的嵌入式系统中。
从桌面和工作站市场上引来的GPU技术已经被广泛的应用。
使用了OpenGL的新的嵌入式的芯片的出现将增加它的使用领域。
到时功耗、价格、重量的壁垒将被打破。
为了能在所有的应用领域利用OpenGL的优点,OpenGL子集的广泛使用带来了编程和集成方面的挑战。
诸如Khronos组织的OpenGL ES安全关键(Safety-Critical)profile这样的子集,代表了将OpenGL标准化、并为应用提供一个可共同依赖的子集所做的努力。
虽然处理不同的OpenGL子集的方法被使用,但这些方法正经历着性能和复杂性方面的考验。
一种更灵活的方法是将几何规范当作数据来对待。
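下面用一个示意性的C语言片段说明这种"几何即数据"的思路(并非论文给出的具体实现,结构体与函数名均为本示例所设):顶点数据以表的形式存放,可在运行期从配置或数据文件加载,绘制代码只遍历数据,本身只使用桌面OpenGL 1.1+与OpenGL ES 1.x都提供的顶点数组调用,不依赖glBegin/glEnd等仅部分子集才有的构造。

#include <GLES/gl.h>   /* 也可换成 <GL/gl.h>;此处仅使用两者共有的顶点数组接口 */

typedef struct {
    GLenum         mode;     /* 例如 GL_TRIANGLES、GL_LINE_STRIP */
    const GLfloat *xy;       /* 交错存放的2D顶点:x0,y0,x1,y1,... */
    GLsizei        count;    /* 顶点个数 */
} GeometryBatch;

/* 绘制一组以数据形式描述的几何批次 */
void draw_batches(const GeometryBatch *batches, int n)
{
    int i;
    glEnableClientState(GL_VERTEX_ARRAY);
    for (i = 0; i < n; ++i) {
        glVertexPointer(2, GL_FLOAT, 0, batches[i].xy);
        glDrawArrays(batches[i].mode, 0, batches[i].count);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
}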
不注意OpenGL和其子集的结果是关键安全应用发展将付出巨大的代价。
当需要移植到不同的OpenGL环境,或需要采用低成本的SoC或移动渲染GPU时,必须严格审视OpenGL策略,以确保这种可移植性能够实现。
否则,就只能付出高昂的代价对应用程序进行返工,以适配不同的标准。
参考文献
[1] OpenGL ES Safety Critical Profile Specification, V 1.0. The Khronos Group, 2005
[2] OpenGL Common/Common-Lite Profile Specification, V 1.0.02, The Khronos Group, 2004
[3] Bennet, Paul A., Applications of Display Prototyping and Rehosting Tools to the Development of Simulator, Flight Training Device, and Cockpit Displays, American Institute of Aeronautics and Astronautics, 1997
[4] Snyder, Mark I., A Data-based Paradigm for Rapid Development of Advanced Avionics Displays, Proc. Digital Avionics Systems Conference, Indianapolis, IN, 2003
[5] Storey, Neil, and Faulkner, Alastair, Data Management in Data Driven Safety-Related Systems, Proc. 20th Systems Safety Conference, Denver, CO., 2002

附件2:外文原文

SOLVING THE EMBEDDED OPENGL PUZZLE – MAKING STANDARDS, TOOLS, AND APIS WORK TOGETHER IN HIGHLY EMBEDDED AND SAFETY CRITICAL ENVIRONMENTS
Mark Snyder, Quantum3D, Glendale, AZ

Abstract
Embedded graphical Human Machine Interfaces (HMIs) are increasingly making use of the OpenGL rendering API as a standard for defining and rendering screen graphics. This trend is supported by the emergence of hardware accelerated graphics subsystems and commercially available driver software. Meanwhile, embedded graphics tool and software vendors have adopted OpenGL in various forms as the rendering API they support. For highly embedded and safety critical environments, however, full OpenGL is not a narrow enough standard. In order to achieve low-cost/low power hardware implementations and reduce driver complexity to achieve safety-critical certification, OpenGL subsets must be embraced.
In recent years, the mobile graphics industry has benefited from the efforts of industry consortiums to define capable OpenGL subsets. These subsets, or profiles, exist in various versions intended to facilitate the development of applications for widely differing embedded markets, from cell phone graphics to safety critical high-powered embedded graphics subsystems. It is clear that such well-defined standards can and will have a beneficial impact on the embedded and safety-critical graphics industries, offering unprecedented portability and simplicity for HMI applications. What is not as clear is the level to which graphics tool and software vendors are supporting the new standards. The stakes are high for the end developer, as reliance on API capabilities that are either unsupported or difficult to certify can present serious system integration and certification pitfalls.
This paper presents recommendations in such areas as tool selection, standards to levy on vendors and developers, approaches for achieving user interfaces and font rendering using the OpenGL standards, and recommendations to ensure the successful engineering and wide deployment of HMI software.

Background
Graphical Processing Units (GPUs)
Over the past 10 years, display rendering technology for platform embedded systems has undergone fundamental changes. These changes have been driven primarily by two twin technological thrusts – flat-panel display hardware and advanced raster-based EGS systems using OpenGL. Flat panels have enabled an increase in display resolution while still supporting embedded size and weight constraints. Raster based EGS, particularly based on commodity OpenGL hardware, has provided the horsepower to drive the increased resolution.
The rendering engine, or graphics chip, is the part of the mobile computing device that processes graphics and creates or renders the display.
On the desktop, hardware rendering engines dominate, resulting in two separate high performance processors being present in most systems – one for general computing, and one for processing and displaying graphics. The development of the GPU has been largely driven by the desire for better video gaming capability, but also by the desire for better workstation and desktop graphical processing.
GPU technology has found a niche in embedded systems, providing advanced display capabilities that were difficult or even impossible to achieve in legacy graphical display systems. These embedded GPUs are embedded variants of desktop or laptop graphics cards, featuring GPUs, onboard texture memory, and hardware accelerated lighting, transformation, and rasterization. Offerings featuring hardware from major desktop graphics companies are being widely used in military applications. An embedded GPU is shown in Figure 1.
Figure 1. Embedded GPU
Most GPU technology deployed in embedded systems today has its roots in desktop or laptop based graphics accelerators. Power consumption for the GPU alone can range from 5 to 15W. These designs can provide power equivalent to a desktop or laptop within an embedded environment, provided the supporting driver software is available. OpenGL is by far the most commonly used standard to supply these drivers.
OpenGL as an Embedded Standard
The advent of the GPU has been accompanied by widespread use of new standards designed to facilitate development of graphical applications that take advantage of the hardware. One such low-level Applications Programming Interface (API) is OpenGL. OpenGL provides a software interface that supports 2D and 3D definition of geometry and rendering functions. Some of the major functions OpenGL supports include:
Matrix-based geometry transformations
Viewport and clipping regions
Textured geometry
Graphics pipeline state management
Geometry caching
These functions are supported through a logical pipeline that the GPU implements. The pipeline expects geometry specification in the form of triangles, points, and lines, along with transformation, clipping, color, and texture information used to convert the geometry into the form rendered into the frame buffer. In newer GPUs, the fixed function pipeline which represents standard methods of processing geometry has been augmented with vertex and pixel shader operations, which allow more programmability of the pipeline functions. APIs like OpenGL, along with other popular standards such as Microsoft Direct3D, provide software interfaces to draw graphics in the GPU pipeline. The GPU pipeline for OpenGL is shown in Figure 2, where the blue API bubble on the left represents OpenGL.
Figure 2. OpenGL Pipeline
OpenGL is a standardized API that has evolved over many years, initially through the efforts of graphics industry pioneer Silicon Graphics™. It has achieved widespread adoption in the simulation, gaming, CAD, and professional graphics markets, and is the de-facto standard for embedded applications. It is widely available on many platforms. OpenGL is also meant to provide a standard interface to multiple graphics rendering devices, allowing an application to run with confidence on graphics chips from multiple vendors. It is a key standard in the embedded avionics market due to its power and cross platform nature.
OpenGL as an API has undergone much growth in the past 15 years or so.
As more classes of applications sought to exploit its capabilities, successive versions of the standard have evolved, bringing more calls and more complexity to the standard. OpenGL is typically implemented using a driver architecture. OpenGL drivers encapsulate a low-level interface to the rendering hardware, and present a high-level interface to applications that need to use the hardware's features. As OpenGL has grown, so have its drivers. A modern OpenGL driver for a desktop high-end graphics card can easily run into millions of lines of code. Embedded variants can be smaller, depending on what subset of OpenGL they support. OpenGL subsets are a key technology in the next wave of mobile GPU technology, targeted for more integrated markets.
OpenGL and Mobile GPUs
In the last few years, mobile computers featuring advanced rendering technology using low-power GPUs have begun to appear on the market. These devices, targeted for cell phone, mobile game systems, PDAs, automotive uses, medical uses, and other deeply embedded applications, are currently one of the major development areas in mobile technology. As mobile gaming reaches its market potential, mobile device manufacturers have begun to address the GPU in their device development. Often a standard graphics API, such as OpenGL, is too large or costly to implement on these devices, so COTS driver and device manufacturers rely on subsets of the API. These subsets allow manufacturers to offer capabilities targeted to specific markets. Applications must be written to work with the smaller subsets, which often means they cannot be ported easily from one subset environment to another.
Mobile GPU technology enabled by well-defined API subsets has led to the emergence of System On Chip (SoC) designs. In a SoC design, the GPU is combined with the processor to encapsulate a complete general purpose and digital media processing core on a single chip. Such a design can be integrated into very small, low cost applications such as handheld medical equipment, cellular phones, automotive telematics displays, etc.
Mobile GPUs have an obvious application in embedded and safety-critical markets. Oftentimes, power consumption, weight, and simplicity of design are key factors in system design, and mobile GPU technology will have a big impact in these areas. As entirely new classes of devices begin to emerge and offer OpenGL capabilities, the potential usage of OpenGL by safety-critical applications targeting these devices must be addressed.
Recommendations
Embracing API Standardization
While OpenGL is a good standard for safety critical systems and should be used, it has grown large as an API. Standardized subsets provide a key to using OpenGL in the safety-critical and deeply embedded environments. The OpenGL ES standard by the Khronos Group is an important development in standardized special purpose subsets of OpenGL. Khronos is an industry consortium designed to foster the adoption of OpenGL into embedded and multimedia markets. Khronos is supported by all major graphics chip manufacturers, mobile phone manufacturers, and the mobile software development community. OpenGL ES is a well-defined subset of OpenGL that is designed to provide a capable subset for advanced graphics on demanding embedded platforms, including mobile devices while eliminating redundant capability and stressing simplicity and small footprint.
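As an illustrative sketch only (not taken from the paper; the helper macro and function name are assumed for this example), the C fragment below shows what targeting the fixed-point Common-Lite variant of OpenGL ES 1.x can look like: vertex data is pre-converted to 16.16 GL_FIXED values and submitted through vertex arrays, so no floating-point entry points are required. Context creation (for example via EGL) is assumed and not shown.

#include <GLES/gl.h>

#define F2X(f) ((GLfixed)((f) * 65536))   /* float -> 16.16 fixed point */

static const GLfixed quad[] = {
    F2X(-0.5f), F2X(-0.5f),
    F2X( 0.5f), F2X(-0.5f),
    F2X( 0.5f), F2X( 0.5f),
    F2X(-0.5f), F2X( 0.5f),
};

void draw_quad_fixed(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthox(F2X(-1.0f), F2X(1.0f), F2X(-1.0f), F2X(1.0f), F2X(-1.0f), F2X(1.0f));

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FIXED, 0, quad);    /* vertex data supplied as fixed point */
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
}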
Since the full OpenGL specification is large and graphics subsystems that support the full spec are resource intensive, a well-defined subset is required to provide a target rendering capability for embedded applications. OpenGL ES is that subset.
Tools and APIs
OpenGL graphics developers typically employ several strategies to successfully create the OpenGL software to draw screens, including tool-based development and hand-code. Both approaches can make use of software modularity, encapsulating higher level interfaces, such as a digital map library, user interface library, or font rendering library into SDK's for reuse in the system. These tools and SDKs may make assumptions about supported features of the underlying OpenGL driver and graphics hardware. If a goal of the application is to be portable to environments where driver capability may be limited, these assumptions will almost certainly need to be challenged.
Many tools operate using code generation to generate code representing a display definition. Display definitions are entered into a user interface, and the tool then employs code generation to create OpenGL software implementing the display [3]. Such OpenGL software can encompass tens or even hundreds of thousands of lines of code, most of it devoted to geometry specification. Usefulness of this code for all OpenGL environments may be limited unless its code generation is flexible enough to take into account all OpenGL subsets the output might need to support. The code generation approach is illustrated in Figure 4.
In addition to tool output limiting flexibility, if a reusable OpenGL library has been written to rely on certain constructs, such as OpenGL display lists or the standard glBegin-glEnd paradigm, it will not be useful on platforms where the driver does not support this paradigm. There are some technical approaches that can be considered to alleviate some of these problems.
Conclusions
Embedded OpenGL is a key technology that has been and will continue to be used on safety-critical embedded systems. GPU technology borrowed from the desktop and workstation markets has largely been used for these applications. The advent of new embedded chipsets employing OpenGL will increase this usage and potentially extend it into new areas where cost, power, and weight barriers are being broken down.
In order to take advantage of OpenGL in all these application areas, the widespread usage of OpenGL subsets presents a programming and integration challenge. Subsets, such as the OpenGL ES Safety-Critical profile from the Khronos group, represent efforts to standardize OpenGL and provide a common subset applications can rely on.
Software approaches to handling differing OpenGL subsets can be employed, but these approaches can suffer performance and complexity issues. A more flexible approach is to consider geometry specification as data.
The result of failing to pay attention to OpenGL and its subsets can be costly for a safety-critical application development. When portability to differing OpenGL environments is desired, or the ability to employ low cost SoC or mobile rendering GPUs is needed, the OpenGL strategy must be rigorously scrutinized to ensure such portability can be achieved. The alternative is costly rework of the application to address differing standards.
References
[1] OpenGL ES Safety Critical Profile Specification, V 1.0. The Khronos Group, 2005
[2] OpenGL Common/Common-Lite Profile Specification, V 1.0.02, The Khronos Group, 2004
[3] Bennet, Paul A., Applications of Display Prototyping and Rehosting Tools to the Development of Simulator, Flight Training Device, and Cockpit Displays, American Institute of Aeronautics and Astronautics, 1997
[4] Snyder, Mark I., A Data-based Paradigm for Rapid Development of Advanced Avionics Displays, Proc. Digital Avionics Systems Conference, Indianapolis, IN, 2003
[5] Storey, Neil, and Faulkner, Alastair, Data Management in Data Driven Safety-Related Systems, Proc. 20th Systems Safety Conference, Denver, CO., 2002
Email Addresses
Mark Snyder, msnyder@
24th Digital Avionics Systems Conference
October 30, 2005