Load Distribution in a CORBA Environment
- Format: PDF
- Size: 251.77 KB
- Pages: 9
How to Address the Unequal Allocation of Water Resources (English essay)

English answer:

Water resource allocation is a global issue that requires immediate attention. Unequal distribution of water resources poses a significant challenge to sustainable development and to the well-being of communities around the world. There are several ways to address this problem.

Firstly, improving water infrastructure is crucial. Many regions lack proper water storage facilities, pipelines, and treatment plants. By investing in infrastructure development, we can ensure that water is efficiently distributed to the areas that need it most. For example, in my hometown we recently constructed a new reservoir and upgraded the water distribution network, which has greatly improved access to clean water for everyone.

Secondly, promoting water conservation and efficiency is essential. People often take water for granted and engage in wasteful practices. By raising awareness and implementing water-saving measures, we can reduce water consumption and alleviate the strain on available resources. In my community, we have launched a campaign called "Every Drop Counts," which encourages residents to fix leaky faucets, use water-efficient appliances, and adopt responsible irrigation practices. As a result, we have seen a significant decrease in water usage.

Furthermore, effective water management strategies are needed. Governments and local authorities should implement policies that prioritize equitable water distribution. This includes establishing fair pricing mechanisms, enforcing water rights, and drawing up allocation plans based on the needs of different sectors. For instance, in my country, farmers are provided with subsidized water for irrigation to support agricultural production, while industries are required to install water-recycling systems to minimize their impact on water resources.

Lastly, international cooperation is crucial in addressing disparities in water resource allocation. Many countries share transboundary water sources, and conflicts can arise when one country monopolizes them. By fostering collaboration and negotiation, countries can work together to ensure fair and sustainable water allocation. The Nile River basin countries, which established the Nile Basin Initiative to promote cooperation and equitable sharing of water resources, demonstrate the importance of international partnerships.

Chinese answer (translated): The unequal allocation of water resources is a global problem that must be addressed immediately.
General English Admission Test for Non-English Major Ph.D. Program
(Harbin Institute of Technology)

Passage One

Questions 1-7 are based on the following passage:

According to a recent theory, Archean-age gold-quartz vein systems were formed over two billion years ago from magmatic fluids that originated from molten granitelike bodies deep beneath the surface of the Earth. This theory is contrary to the widely held view that the systems were deposited from metamorphic fluids, that is, from fluids that formed during the dehydration of wet sedimentary rocks.

The recently developed theory has considerable practical importance. Most of the gold deposits discovered during the original gold rushes were exposed at the Earth's surface and were found because they had shed trails of alluvial gold that were easily traced by simple prospecting methods. Although these same methods still lead to an occasional discovery, most deposits not yet discovered have gone undetected because they are buried and have no surface expression.

The challenge in exploration is therefore to unravel the subsurface geology of an area and pinpoint the position of buried minerals. Methods widely used today include analysis of aerial images that yield a broad geological overview; geophysical techniques that provide data on the magnetic, electrical, and mineralogical properties of the rocks being investigated; and sensitive chemical tests that are able to detect the subtle chemical halos that often envelop mineralization. However, none of these high-technology methods are of any value if the sites to which they are applied have never mineralized, and to maximize the chances of discovery the explorer must therefore pay particular attention to selecting the ground formations most likely to be mineralized.
Such ground selection relies to varying degrees on conceptual models, which take into account theoretical studies of relevant factors. These models are constructed primarily from empirical observations of known mineral deposits and from theories of ore-forming processes. The explorer uses the models to identify those geological features that are critical to the formation of the mineralization being modeled, and then tries to select areas for exploration that exhibit as many of the critical features as possible.

1. The author is primarily concerned with ______.
A. advocating a return to an older methodology
B. explaining the importance of a recent theory
C. enumerating differences between two widely used methods
D. describing events leading to a discovery

2. According to the passage, the widely held view of Archean-age gold-quartz vein systems is that such systems ______.
A. were formed from metamorphic fluids
B. originated in molten granitelike bodies
C. were formed from alluvial deposits
D. generally have surface expression

3. The passage implies that which of the following steps would be the first performed by explorers who wish to maximize their chances of discovering gold?
A. Surveying several sites known to have been formed more than two billion years ago.
B. Limiting exploration to sites known to have been formed from metamorphic fluids.
C. Using an appropriate conceptual model to select a site for further exploration.
D. Using geophysical methods to analyze rocks over a broad area.

4.
Which of the following statements about discoveries of gold deposits is supported by information in the passage?
A. The number of gold discoveries made annually has increased between the time of the original gold rushes and the present.
B. New discoveries of gold deposits are likely to be the result of exploration techniques designed to locate buried mineralization.
C. It is unlikely that newly discovered gold deposits will ever yield as much as did those deposits discovered during the original gold rushes.
D. Modern explorers are divided on the question of the utility of simple prospecting methods as a source of new discoveries of gold deposits.

5. It can be inferred from the passage that which of the following is easiest to detect?
A. A gold-quartz vein system originating in magmatic fluids.
B. A gold-quartz vein system originating in metamorphic fluids.
C. A gold deposit that is mixed with granite.
D. A gold deposit that has shed alluvial gold.

6. The theory mentioned in line 1 relates to the conceptual models discussed in the passage in which of the following ways?
A. It may furnish a valid account of ore-forming processes, and hence can support conceptual models that have great practical significance.
B. It suggests that certain geological formations, long believed to be barren, are in fact mineralized, thus confirming current conceptual models.
C. It suggests that there may not be enough similarity across Archean-age gold-quartz vein systems to warrant the formulation of conceptual models.
D. It corrects existing theories about the chemical halos of gold deposits, and thus provides a basis for correcting current conceptual models.

7.
According to the passage, methods of exploring for gold that are widely used today are based on which of the following facts?
A. Most of the Earth's remaining gold deposits are still molten.
B. Most of the Earth's remaining gold deposits are exposed at the surface.
C. Most of the Earth's remaining gold deposits are buried and have no surface expression.
D. Only one type of gold deposit warrants exploration, since the other types of gold deposits are found in regions difficult to reach.

Passage Two

Questions 8-15 are based on the following passage:

In choosing a method for determining climatic conditions that existed in the past, paleoclimatologists invoke four principal criteria. First, the material—rocks, lakes, vegetation, etc.—on which the method relies must be widespread enough to provide plenty of information, since analysis of material that is rarely encountered will not permit correlation with other regions or with other periods of geological history. Second, in the process of formation, the material must have received an environmental signal that reflects a change in climate and that can be deciphered by modern physical or chemical means. Third, at least some of the material must have retained the signal unaffected by subsequent changes in the environment. Fourth, it must be possible to determine the time at which the inferred climatic conditions held. This last criterion is more easily met in dating marine sediments, because dating of only a small number of layers in a marine sequence allows the age of other layers to be estimated fairly reliably by extrapolation and interpolation.
By contrast, because sedimentation is much less continuous in continental regions, estimating the age of a continental bed from the known ages of beds above and below is more risky.

One very old method used in the investigation of past climatic conditions involves the measurement of water levels in ancient lakes. In temperate regions, there are enough lakes for correlations between them to give us a tenable picture. In arid and semiarid regions, on the other hand, the small number of lakes and the great distances between them reduce the possibilities for correlation. Moreover, since lake levels are controlled by rates of evaporation as well as by precipitation, the interpretation of such levels is ambiguous. For instance, the fact that lake levels in the semiarid southwestern United States appear to have been higher during the last ice age than they are now was at one time attributed to increased precipitation. On the basis of snow-line elevations, however, it has been concluded that the climate then was not necessarily wetter than it is now, but rather that both summers and winters were cooler, resulting in reduced evaporation.

Another problematic method is to reconstruct former climates on the basis of pollen profiles. The type of vegetation in a specific region is determined by identifying and counting the various pollen grains found there. Although the relationship between vegetation and climate is not as direct as the relationship between climate and lake levels, the method often works well in the temperate zones. In arid and semiarid regions in which there is not much vegetation, however, small changes in one or a few plant types can change the picture dramatically, making accurate correlations between neighboring areas difficult to obtain.

8.
Which of the following statements about the difference between marine and continental sedimentation is supported by information in the passage?
A. Data provided by dating marine sedimentation is more consistent with researchers' findings in other disciplines than is data provided by dating continental sedimentation.
B. It is easier to estimate the age of a layer in a sequence of continental sedimentation than it is to estimate the age of a layer in a sequence of marine sedimentation.
C. Marine sedimentation is much less widespread than continental sedimentation.
D. Marine sedimentation is much more continuous than is continental sedimentation.

9. Which of the following statements best describes the organization of the passage as a whole?
A. The author describes a method for determining past climatic conditions and then offers specific examples of situations in which it has been used.
B. The author discusses the method of dating marine and continental sequences and then explains how dating is more difficult with lake levels than with pollen profiles.
C. The author describes the common requirements of methods for determining past climatic conditions and then discusses examples of such methods.
D. The author describes various ways of choosing a material for determining past climatic conditions and then discusses how two such methods have yielded contradictory data.

10. It can be inferred from the passage that paleoclimatologists have concluded which of the following on the basis of their study of snow-line elevations in the southwestern United States?
A. There is usually more precipitation during an ice age because of increased amounts of evaporation.
B. There was less precipitation during the last ice age than there is today.
C. Lake levels in the semiarid southwestern United States were lower during the last ice age than they are today.
D. The high lake levels during the last ice age may have been a result of less evaporation rather than more precipitation.

11.
Which of the following would be the most likely topic for a paragraph that logically continues the passage?
A. The kinds of plants normally found in arid regions.
B. The effect of variation in lake levels on pollen distribution.
C. The material best suited to preserving signals of climatic changes.
D. A third method for investigating past climatic conditions.

12. The author discusses lake levels in the southwestern United States in order to ______.
A. illustrate the mechanics of the relationship between lake levels, evaporation, and precipitation
B. provide an example of the uncertainty involved in interpreting lake levels
C. prove that there are not enough ancient lakes with which to make accurate correlations
D. explain the effects of increased rates of evaporation on levels of precipitation

13. It can be inferred from the passage that an environmental signal found in geological material would not be useful to paleoclimatologists if it ______.
A. had to be interpreted by modern chemical means
B. reflected a change in climate rather than a long-term climatic condition
C. was incorporated into a material as the material was forming
D. also reflected subsequent environmental changes

14. According to the passage, the material used to determine past climatic conditions must be widespread for which of the following reasons?
I. Paleoclimatologists need to make comparisons between periods of geological history.
II. Paleoclimatologists need to compare materials that have supported a wide variety of vegetation.
III. Paleoclimatologists need to make comparisons with data collected in other regions.
A. I only
B. II only
C. I and II only
D. I and III only

15.
Which of the following can be inferred from the passage about the study of past climates in arid and semiarid regions?
A. It is sometimes more difficult to determine past climatic conditions in arid and semiarid regions than in temperate regions.
B. Although in the past more research has been done on temperate regions, paleoclimatologists have recently turned their attention to arid and semiarid regions.
C. Although more information about past climates can be gathered in arid and semiarid than in temperate regions, dating this information is more difficult.
D. It is difficult to study the climatic history of arid and semiarid regions because their climates have tended to vary more than those of temperate regions.

Passage Three

Questions 16-22 are based on the following passage:

While there is no blueprint for transforming a largely government-controlled economy into a free one, the experience of the United Kingdom since 1979 clearly shows one approach that works: privatization, in which state-owned industries are sold to private companies. By 1979, the total borrowings and losses of state-owned industries were running at about £3 billion a year. By selling many of these industries, the government has decreased these borrowings and losses, gained over £34 billion from the sales, and now receives tax revenues from the newly privatized companies. Along with a dramatically improved overall economy, the government has been able to repay 12.5 percent of the net national debt over a two-year period.

In fact, privatization has not only rescued individual industries and a whole economy headed for disaster, but has also raised the level of performance in every area. At British Airways and British Gas, for example, productivity per employee has risen by 20 percent. At Associated British Ports, labor disruptions common in the 1970's and early 1980's have now virtually disappeared.
At British Telecom, there is no longer a waiting list—as there always was before privatization—to have a telephone installed.

Part of this improved productivity has come about because the employees of privatized industries were given the opportunity to buy shares in their own companies. They responded enthusiastically to the offer of shares: at British Aerospace, 89 percent of the eligible work force bought shares; at Associated British Ports, 90 percent; and at British Telecom, 92 percent. When people have a personal stake in something, they think about it, care about it, work to make it prosper. At the National Freight Consortium, the new employee-owners grew so concerned about their company's profits that during wage negotiations they actually pressed their union to lower its wage demands.

Some economists have suggested that giving away free shares would provide a needed acceleration of the privatization process. Yet they miss Thomas Paine's point that "what we obtain too cheap we esteem too lightly." In order for the far-ranging benefits of individual ownership to be achieved by owners, companies, and countries, employees and other individuals must make their own decisions to buy, and they must commit some of their own resources to the choice.

16. According to the passage, all of the following were benefits of privatizing state-owned industries in the United Kingdom EXCEPT:
A. Privatized industries paid taxes to the government.
B. The government gained revenue from selling state-owned industries.
C. The government repaid some of its national debt.
D. Profits from industries that were still state-owned increased.

17. According to the passage, which of the following resulted in increased productivity in companies that have been privatized?
A. A large number of employees chose to purchase shares in their companies.
B. Free shares were widely distributed to individual shareholders.
C. The government ceased to regulate major industries.
D. Unions conducted wage negotiations for employees.

18.
It can be inferred from the passage that the author considers labor disruptions to be ______.
A. an inevitable problem in a weak national economy
B. a positive sign of employee concern about a company
C. a predictor of employee reactions to a company's offer to sell shares to them
D. a deterrent to high performance levels in an industry

19. The passage supports which of the following statements about employees buying shares in their own companies?
A. At three different companies, approximately nine out of ten of the workers were eligible to buy shares in their companies.
B. Approximately 90% of the eligible workers at three different companies chose to buy shares in their companies.
C. The opportunity to buy shares was discouraged by at least some labor unions.
D. Companies that demonstrated the highest productivity were the first to allow their employees the opportunity to buy shares.

20. Which of the following statements is most consistent with the principle described in lines 25-26?
A. A democratic government that decides it is inappropriate to own a particular industry has in no way abdicated its responsibilities as guardian of the public interest.
B. The ideal way for a government to protect employee interests is to force companies to maintain their share of a competitive market without government subsidies.
C. The failure to harness the power of self-interest is an important reason that state-owned industries perform poorly.
D. Governments that want to implement privatization programs must try to eliminate all resistance to the free-market system.

21. Which of the following can be inferred from the passage about the privatization process in the United Kingdom?
A. It depends to a potentially dangerous degree on individual ownership of shares.
B. It conforms in its most general outlines to Thomas Paine's prescription for business ownership.
C. It was originally conceived to include some giving away of free shares.
D. It is taking place more slowly than some economists suggest is necessary.

22.
The quotation in lines 32-33 is most probably used to ______.
A. counter a position that the author of the passage believes is incorrect
B. state a solution to a problem described in the previous sentence
C. show how opponents of the viewpoint of the author of the passage have supported their arguments
D. point out a paradox contained in a controversial viewpoint

Passage Four

Questions 23-30 are based on the following passage:

Historians of women's labor in the United States at first largely disregarded the story of female service workers—women earning wages in occupations such as salesclerk, domestic servant, and office secretary. These historians focused instead on factory work, primarily because it seemed so different from traditional, unpaid "women's work" in the home, and because the underlying economic forces of industrialism were presumed to be gender-blind and hence emancipatory in effect. Unfortunately, emancipation has been less profound than expected, for not even industrial wage labor has escaped continued sex segregation in the workplace.

To explain this unfinished revolution in the status of women, historians have recently begun to emphasize the way a prevailing definition of femininity often determines the kinds of work allocated to women, even when such allocation is inappropriate to new conditions. For instance, early textile-mill entrepreneurs, in justifying women's employment in wage labor, made much of the assumption that women were by nature skillful at detailed tasks and patient in carrying out repetitive chores; the mill owners thus imported into the new industrial order hoary stereotypes associated with the homemaking activities they presumed to have been the purview of women. Because women accepted the more unattractive new industrial tasks more readily than did men, such jobs came to be regarded as female jobs.
And employers, who assumed that women's "real" aspirations were for marriage and family life, declined to pay women wages commensurate with those of men. Thus many lower-skilled, lower-paid, less secure jobs came to be perceived as "female."

More remarkable than the origin has been the persistence of such sex segregation in twentieth-century industry. Once an occupation came to be perceived as "female," employers showed surprisingly little interest in changing that perception, even when higher profits beckoned. And despite the urgent need of the United States during the Second World War to mobilize its human resources fully, job segregation by sex characterized even the most important war industries. Moreover, once the war ended, employers quickly returned to men most of the "male" jobs that women had been permitted to master.

23. According to the passage, job segregation by sex in the United States was ______.
A. greatly diminished by labor mobilization during the Second World War
B. perpetuated by those textile-mill owners who argued in favor of women's employment in wage labor
C. one means by which women achieved greater job security
D. reluctantly challenged by employers except when the economic advantages were obvious

24. According to the passage, historians of women's labor focused on factory work as a more promising area of research than service-sector work because factory work ______.
A. involved the payment of higher wages
B. required skill in detailed tasks
C. was assumed to be less characterized by sex segregation
D. was more readily accepted by women than by men

25.
It can be inferred from the passage that early historians of women's labor in the United States paid little attention to women's employment in the service sector of the economy because ______.
A. fewer women found employment in the service sector than in factory work
B. the wages paid to workers in the service sector were much lower than those paid in factory work
C. women's employment in the service sector tended to be much more short-term than in factory work
D. employment in the service sector seemed to have much in common with the unpaid work associated with homemaking

26. The passage supports which of the following statements about the early mill owners mentioned in the second paragraph?
A. They hoped that by creating relatively unattractive "female" jobs they would discourage women from losing interest in marriage and family life.
B. They sought to increase the size of the available labor force as a means to keep men's wages low.
C. They argued that women were inherently suited to do well in particular kinds of factory work.
D. They felt guilty about disturbing the traditional division of labor in the family.

27. It can be inferred from the passage that the "unfinished revolution" the author mentions in line 11 refers to the ______.
A. entry of women into the industrial labor market
B. development of a new definition of femininity unrelated to the economic forces of industrialism
C. introduction of equal pay for equal work in all professions
D. emancipation of women wage earners from gender-determined job allocation

28. The passage supports which of the following statements about hiring policies in the United States?
A. After a crisis many formerly "male" jobs are reclassified as "female" jobs.
B. Industrial employers generally prefer to hire women with previous experience as homemakers.
C. Post-Second World War hiring policies caused women to lose many of their wartime gains in employment opportunity.
D. Even war industries during the Second World War were reluctant to hire women for factory work.

29.
Which of the following words best expresses the opinion of the author of the passage concerning the notion that women are more skillful than men in carrying out detailed tasks?
A. "patient" (line 17)
B. "repetitive" (line 18)
C. "hoary" (line 19)
D. "homemaking" (line 19)

30. Which of the following best describes the relationship of the final paragraph to the passage as a whole?
A. The central idea is reinforced by the citation of evidence drawn from twentieth-century history.
B. The central idea is restated in such a way as to form a transition to a new topic for discussion.
C. The central idea is restated and juxtaposed with evidence that might appear to contradict it.
D. A partial exception to the generalizations of the central idea is dismissed as unimportant.

Passage Five

Questions 31-36 are based on the following passage:

Two modes of argumentation have been used on behalf of women's emancipation in Western societies. Arguments in what could be called the "relational" feminist tradition maintain the doctrine of "equality in difference," or equity as distinct from equality. They posit that biological distinctions between the sexes result in a necessary sexual division of labor in the family and throughout society, and that women's procreative labor is currently undervalued by society, to the disadvantage of women. By contrast, the individualist feminist tradition emphasizes individual human rights and celebrates women's quest for personal autonomy, while downplaying the importance of gender roles and minimizing discussion of childbearing and its attendant responsibilities.

Before the late nineteenth century, these views coexisted within the feminist movement, often within the writings of the same individual. Between 1890 and 1920, however, relational feminism, which had been the dominant strain in feminist thought, and which still predominates among European and non-Western feminists, lost ground in England and the United States.
Because the concept of individual rights was already well established in the Anglo-Saxon legal and political tradition, individualist feminism came to predominate in English-speaking countries. At the same time, the goals of the two approaches began to seem increasingly irreconcilable. Individualist feminists began to advocate a totally gender-blind system with equal rights for all; relational feminists, while agreeing that equal educational and economic opportunities outside the home should be available for all women, continued to emphasize women's special contributions to society as homemakers and mothers; they demanded special treatment for women, including protective legislation for women workers, state-sponsored maternity benefits, and paid compensation for housework.

Relational arguments have a major pitfall: because they underline women's physiological and psychological distinctiveness, they are often appropriated by political adversaries and used to endorse male privilege. But the individualist approach, by attacking gender roles, denying the significance of physiological difference, and condemning existing familial institutions as hopelessly patriarchal, has often simply treated as irrelevant the family roles important to many women. If the individualist framework, with its claim for women's autonomy, could be harmonized with the family-oriented concerns of relational feminists, a more fruitful model for contemporary feminist politics could emerge.

31.
The author of the passage alludes to the well-established nature of the concept of individual rights in the Anglo-Saxon legal and political tradition in order to ______.
A. illustrate the influence of individualist feminist thought on more general intellectual trends in English history
B. argue that feminism was already a part of the larger Anglo-Saxon intellectual tradition, even though this has often gone unnoticed by critics of women's emancipation
C. explain the decline in individualist thinking among feminists in non-English-speaking countries
D. help account for an increasing shift toward individualist feminism among feminists in English-speaking countries

32. The passage suggests that the author of the passage believes which of the following?
A. The predominance of individualist feminism in English-speaking countries is a historical phenomenon, the causes of which have not yet been investigated.
B. The individualist and relational feminist views are irreconcilable, given their theoretical differences concerning the foundations of society.
C. A consensus concerning the direction of future feminist politics will probably soon emerge, given the awareness among feminists of the need for cooperation among women.
D. Political adversaries of feminism often misuse arguments predicated on differences between the sexes to argue that the existing social system should be maintained.

33. It can be inferred from the passage that the individualist
Foreign-Language Literature: Original Text and Translation

Original:

Factors in locating a logistics distribution center:

(1) The distribution and quantity of goods. This concerns the objects the center will distribute: the sources of goods, their current and future destinations, and historical, current, and forecast volumes and growth. The distribution center should, as far as possible, be located so that routes between producers and the distribution area are short and optimized. The quantity of goods grows with the scale of distribution; the higher the growth rate, the more a well-chosen location matters for eliminating unnecessary waste in the conveying process.

(2) Transportation conditions. The logistics distribution center should be located close to a transportation hub, so that it forms a proper node in the logistics process. Where conditions allow, the distribution center should be as close as possible to railway stations, ports, and highways.

(3) Land conditions. With land increasingly expensive, the area a logistics distribution center occupies matters more and more. Will existing land be used, or must new land be acquired? What is the land price? Does the site conform to government planning requirements? All of these must be considered when building a distribution center.

(4) Commodity flows. The consumer goods an enterprise produces shift as the population shifts, so the distribution system should be positioned according to the enterprise's needs. Likewise, markets for industrial products shift over time; to determine how the flows of raw materials, semi-finished products, and other commodities will change, the specific conditions of the relevant goods flows should be considered when locating the logistics distribution center.

(5) Other factors.
Such as labor, transportation and service convenience degree, investment restrictions, etc.How to reduce logistics cost,enhance the adaptive capacity and strain capacity of distribution center is a key research question of agricultural product logistics distribution center.At present,most of the research on logistics cost concentrates off theoretical analysis of direct factors of logistics cost, and solves the problem of over-high logistics Cost mainly by direct channel solution.This research stresses on the view of how to loeate distribution center, analyzes the influence of locating distribution center on logistics cost.and finds one kind of simple and easy location method by carrying on the location analysis of distribution center through computer modeling and the application of Exeel.So the location of agricultural product logistics distribution center can be achieved scientifically and reasonably, which will attain the goal of reducing logistics cost, and have a decision.making support function to the logisties facilities and planning of agricultural product.The agricultural product logistics distribution center deals with dozens and even hundreds of clients every day, and transactions are made in high-frequency. If the distribution center is far away from other distribution points,the moving and transporting of materials and the collecting of operational data is inconvenient and costly. costly.The modernization of agricultural product logistics s distribution center is a complex engineering system,not only involves logistics technology, information technology, but also logistics management ideas and its methods,in particular the specifying of strategic location and business model is essential for the constructing of distribution center. How to reduce logistics cost,enhance the adaptive capacity and strain capacity of distribution center is a key research question of agricultural product logistics distribution center. 
The so-called logistics cost refers to the total expenditure of manpower, material, and financial resources in the moving process of goods, across segments such as loading and unloading, conveying, transport, storage, circulation processing, and information processing. In a word,
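The spreadsheet-based location analysis mentioned above is typically a center-of-gravity calculation: the candidate site is the demand-weighted centroid of the client points. A minimal Python sketch of that classic single-facility heuristic, using entirely hypothetical client coordinates and weights (not data from this study):

```python
def center_of_gravity(points):
    """Weighted centroid of (x, y, weight) demand points: the classic
    single-facility location heuristic that spreadsheet models implement.
    The weight is usually shipment volume times the transport rate."""
    wsum = sum(w for _, _, w in points)
    cx = sum(x * w for x, _, w in points) / wsum
    cy = sum(y * w for _, y, w in points) / wsum
    return cx, cy

# Hypothetical clients: coordinates [km] and weighted shipment volume
clients = [(10, 20, 300), (35, 5, 200), (25, 30, 500)]
cx, cy = center_of_gravity(clients)   # candidate site (22.5, 22.0)
```

In practice the centroid is only a starting point; real studies then adjust for road network, land price, and the other location factors listed above.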
Energy Distribution in Plants (TOEFL Reading)

Energy distribution in plants is a crucial aspect of their growth and development. Plants, like all living organisms, require energy to carry out various physiological processes, such as photosynthesis, respiration, and growth. In this article, we will explore the different aspects of energy distribution in plants, including the sources of energy, its utilization, and the importance of efficient energy distribution.

I. Introduction

Plants are autotrophic organisms that can produce their own food through photosynthesis. They capture sunlight and convert it into chemical energy in the form of glucose. This energy is then distributed throughout the plant to support various metabolic activities.

II. Sources of Energy

1.1 Sunlight
Sunlight is the primary source of energy for plants. Through the process of photosynthesis, plants convert solar energy into chemical energy, stored in the form of glucose. This energy is vital for the synthesis of organic compounds and the growth of the plant.

1.2 Soil Nutrients
Plants also obtain essential resources from the soil in the form of nutrients. Nutrients such as nitrogen, phosphorus, and potassium are essential for plant growth. They are absorbed by the roots and used in various metabolic processes, including the production of ATP (adenosine triphosphate), the energy currency of cells.

1.3 Water
Water is another crucial resource for plants. It is absorbed by the roots and transported throughout the plant via specialized tissues. Water provides the medium for various metabolic reactions and is involved in the transport of nutrients and in photosynthesis.

III. Utilization of Energy

2.1 Photosynthesis
Photosynthesis is the process by which plants convert sunlight, carbon dioxide, and water into glucose and oxygen. The energy captured from sunlight is used to power this process, providing the plant with a constant supply of energy-rich molecules.

2.2 Respiration
Respiration is the process by which plants break down glucose to release energy. It occurs in all living cells and is essential for the maintenance of cellular functions. The energy released during respiration is used for growth, reproduction, and other metabolic activities.

2.3 Growth and Development
Energy is crucial for the growth and development of plants. It is used to synthesize new cells, build tissues, and support the formation of reproductive structures. Efficient energy distribution ensures that all parts of the plant receive the energy they need for optimal growth and development.

IV. Importance of Efficient Energy Distribution

3.1 Maximizing Photosynthetic Efficiency
Efficient energy distribution allows plants to maximize their photosynthetic efficiency. By ensuring that energy is distributed to the chloroplasts in the leaves, plants can optimize the capture and conversion of sunlight, leading to increased glucose production.

3.2 Resource Allocation
Efficient energy distribution enables plants to allocate resources effectively. It ensures that energy is directed to the parts of the plant that need it most, such as growing tissues or reproductive structures. This allocation optimizes the plant's overall fitness and reproductive success.

3.3 Adaptation to Environmental Conditions
Efficient energy distribution allows plants to adapt to changing environmental conditions. It enables them to allocate energy to defense mechanisms, such as the production of secondary metabolites or the reinforcement of cell walls, to protect against herbivores or abiotic stresses.

V. Conclusion

In conclusion, energy distribution in plants is a complex and vital process that ensures the optimal functioning and growth of these organisms. The inputs of sunlight, soil nutrients, and water are utilized through photosynthesis and respiration to support various metabolic activities. Efficient energy distribution is crucial for maximizing photosynthetic efficiency, allocating resources, and adapting to environmental conditions. Understanding and studying energy distribution in plants can provide valuable insights into their physiology and ecology.
CHEMICAL INDUSTRY AND ENGINEERING PROGRESS (化工进展), 2018, Vol. 37, No. 3, p. 875

Impacts of heat load distribution ratio on energy consumption of extraction steam-high back pressure heating cogeneration unit

YANG Zhiping, SHI Bin, LI Xiao'en, WANG Ningling
(National Thermal Power Engineering & Technology Research Center, North China Electric Power University, Beijing 102206, China)

Abstract (translated from the Chinese): The extraction steam-high back pressure heating mode is an effective way to achieve energy cascade utilization and reduce the coal consumption of thermal power generation. Studying how the heat load distribution ratio between the heating condenser and the peak load heater affects unit energy consumption at different outdoor temperatures, and determining the optimal distribution ratio, is one of the core problems in saving energy in such units. Using an off-design heat network model and simulation in the Ebsilon software, this paper takes a 310 MW extraction steam-high back pressure heating unit as the research object and analyzes the unit's power output and coal consumption at different heat load distribution ratios between the heating condenser and the peak load heater across the temperatures of the heating period. The results show that, for an extraction steam-high back pressure cogeneration unit, a higher heating condenser heat load ratio does not necessarily mean higher power output; at different stages of the heating period, the power output follows different patterns as the heating condenser heat load changes. At the same outdoor temperature, the heat load distribution ratio strongly affects unit energy consumption: across different condenser heat load ratios, the minimum and maximum ranges of the coal consumption rate are 2.02 g/(kW·h) and 5.50 g/(kW·h), respectively.

CLC number: TM621; Document code: A; Article ID: 1000-6613(2018)03-0875-09; DOI: 10.16085/j.issn.1000-6613.2017-1158

Original English abstract: Using the extraction steam-high back pressure heating mode is an effective way to achieve energy cascade utilization and reduce the coal consumption rate in cogeneration units. In a cogeneration power plant, a key issue for energy saving is to determine the optimal value of the heat load distribution ratio between the heating condenser and the peak load heater, based on the variation in plant energy consumption resulting from that ratio. The output power was simulated using the Ebsilon platform, and coal consumption was calculated under conditions with different heat load distribution ratios. Results showed that the heat load distribution ratio greatly affects cogeneration unit energy consumption. The range of the coal rate varies from 2.02 g/(kW·h) to 5.50 g/(kW·h) with the change of the heat load distribution ratio between the heating condenser and the peak load heater. The relation between the output power and the heating condenser heat load distribution ratio varies with the heating stages.

Key words: extraction steam-high back pressure heating; energy cascade utilization; energy saving; coal consumption rate; heat load distribution

Under the economic "new normal", China's energy situation is becoming increasingly complex, and the pressure to save energy and reduce consumption keeps growing.
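The abstract's core comparison, coal consumption rate versus heat load distribution ratio at a fixed outdoor temperature, reduces to picking the ratio with the lowest coal rate and reporting the range (极差, max minus min) as the saving available from choosing well. A sketch with made-up simulation results, not the paper's data:

```python
def best_ratio_and_range(coal_rate_by_ratio):
    """coal_rate_by_ratio maps a heating-condenser heat-load ratio to the
    simulated coal consumption rate [g/(kW*h)]. Returns the ratio with the
    minimum coal rate and the range (max - min) across all ratios."""
    best = min(coal_rate_by_ratio, key=coal_rate_by_ratio.get)
    rates = coal_rate_by_ratio.values()
    return best, max(rates) - min(rates)

# Illustrative (hypothetical) results at one outdoor temperature
rates = {0.5: 228.1, 0.6: 226.9, 0.7: 226.3, 0.8: 227.4, 0.9: 229.6}
ratio, spread = best_ratio_and_range(rates)   # best ratio 0.7, range 3.3
```

Repeating this per outdoor temperature yields the range statistics the paper reports (a minimum of 2.02 g/(kW·h) and a maximum of 5.50 g/(kW·h) across the heating period).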
International Journal of Machine Tools & Manufacture 39 (1999) 1087–1101

Thermal error analysis for a CNC lathe feed drive system

Won Soo Yun a, Soo Kwang Kim b, Dong Woo Cho a,*
a Department of Mechanical Engineering, Pohang University of Science and Technology, San 31 Hyoja-dong, Nam-gu, Pohang, Kyungbuk 790-784, South Korea
b Department of Machine Industry, Pusan Information College, 48-6 Gupo-dong, Buk-gu, Pusan 616-737, South Korea

Received 31 March 1998; received in revised form 21 September 1998

Abstract

The development of high-speed feed drive systems has been a major issue in the machine tool industry for the past few decades. The resulting reduction in the time needed for tool changes and the rapid travel time can enhance productivity. However, a high-speed feed drive system naturally generates more heat and resultant thermal expansion, which adversely affects the accuracy of machined parts. This paper divides the feed drive system into two parts: the ball screw and the guide way. The thermal behavior model for each part is developed separately, in order to estimate the position errors of the feed drive system caused by thermal expansion. The modified lumped capacitance method (MLCM) and genius education algorithm (GEA) are used to analyse the linear positioning error of the ball screw. Thermal deformation of the guide way affects straightness and introduces angular errors, as well as affecting linear positioning. The finite element method is used to estimate the thermal behavior of the guide way. The effectiveness of the proposed models is verified through experiments using a laser interferometer. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Angular error; Ball screw; Feed drive system; Guide way; Linear positioning error; Thermal deformation

Nomenclature
Q_f(t): heat by friction
Q_cl(t), Q_cr(t), Q_cn(t): heat by conduction
Q_h(t): heat by convection
ρ: material density
T: temperature
t: time
V: volume
c: specific heat
K_q: torque coefficient
i: motor current
w̄: rotational velocity
T_mea: measured torque
T_ana: estimated torque
µ: friction coefficient
W: distributed load along the nut
r: radius of the screw shaft
q″: heat flux
k: conduction coefficient

* Corresponding author. Tel: +82-562-279-2171; fax: +82-562-279-5899; e-mail: dwcho@postech.ac.kr
0890-6955/99/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved. PII: S0890-6955(98)00073-X

1. Introduction

The demand for high-speed/high-precision machine tools is rapidly increasing in response to the development of production technology that requires high-precision parts and high productivity. Research on high-speed machine tools can be approached from two directions: the main spindle or the feed drive system. In this research, a high-speed feed drive system was investigated in order to achieve rapid travel with improved precision. A high-speed feed drive system reduces non-cutting time and tool replacement time, making production more economical. However, it also generates more heat through friction at contact areas such as the ball screw and the guide ways, causing thermal deformation that subsequently degrades the accuracy of the machine tool. With this in mind, the thermal behavior of a feed drive system was investigated in order to develop a systematic method of analysis.

It has been reported that thermal errors account for 40–70% of the total errors arising from various error sources [1]. Research has considered a number of ways of reducing thermal error, including the thermally symmetric design of a structure, separation of the heat sources from the main body of the machine tool, cooling of the structure, compensation for the thermal error and so forth [2–9]. However, accurate modelling of the heat source is quite difficult because of the constantly changing characteristics: the thermal inertia, the complexity of the machine tool structure, etc. The research so far reported has therefore been unable to provide a satisfactory result.
In the course of this research, the finite element method (FEM) and the finite difference method (FDM) have often been employed to estimate the thermal behavior of machine tools. However, they also show some significant differences between estimated and experimental results, which are attributed to the fact that it is difficult to establish the boundary conditions because of the complex shape of the structure and the varying heat generation rate [10,11]. In the present research, the thermal error of the feed drive system was estimated by separately modelling the thermal behaviors of the ball screw and the guide way, and then subsequently evaluating the eventual thermal deformation of the tool position. The ball screw was modelled by applying the modified lumped capacitance method (MLCM) [12,13] developed previously to estimate thermal deformation in real-time. The thermal deformation of the guide way was evaluated using FEM with boundary conditions obtained experimentally. The developed thermal model was applied to a CNC lathe to ascertain its effectiveness.

2. Thermal error analysis for the ball screw system

To meet the requirements of high accuracy and great rigidity, applying a preload between double nuts is used as a way to restrict the backlash of the ball screw [14]. The frictional resistance between the screw shaft and the nut is increased by this preload. This generates greater heat, leading to thermal deformation of the ball screw and causing low positioning accuracy. Consequently, the accuracy of the main system, such as a machine tool, is degraded. Therefore, the thermal deformation of the ball screw is one of the most important things to consider for high-accuracy, high-speed machine tools [1,15,16]. The present paper reports the development of a model that is able to estimate this thermal deformation in real-time. The deformation was assumed to occur only in the feed direction, causing positioning error.

2.1. Modified lumped capacitance method (MLCM)

The thermal deformation of a ball screw can be compensated if the position of the nut can be accurately measured. However, µm-order measurement requires expensive equipment such as a laser interferometer, the actual implementation of which is not easy because of the effects of the hostile environment, such as machine tool vibration and so on. Alternatively, if the temperature distribution can somehow be obtained, the corresponding thermal deformation can then be calculated. However, to do so requires that the temperature be measured at all points on the screw shaft, which is almost impossible because of the structure of ball screws. The following describes the derivation of a proposed real-time heat transfer model that makes real implementation easier. The model was named the modified lumped capacitance method (MLCM).

FEM analysis and experiments verified that both the nut and the screw have an almost uniform temperature distribution in the radial direction [12,13]. The real-time model estimates the temperature of the screw by measuring the temperature only at those locations where sensors can be mounted even during feed motion, such as at the nut surface, inside the nut and at the support bearings at both ends. Simplifying the system into a lumped model, as shown in Fig. 1 [Fig. 1. Schematic diagram for MLCM model], the following equation can be derived:

Q_f(t) + Q_cl(t) + Q_cr(t) − Q_h(t) + Q_cn(t) = ρcV dT/dt   (1)

Using the above equation can produce large errors when estimating temperature, because of the inherent error involved with a lumped model and the inexact heat transfer coefficients. Therefore, compensation coefficients were introduced and multiplied to each and every term:

α_s Q_f(t) + α_lb Q_flb(t) + α_rb Q_frb(t) + δ Q_cl(t) + δ Q_cr(t) − β_1b Q_hs(t) − β_2b Q_hm(t) − β_1 Q_hs(t) − β_2 Q_hm(t) + γ Q_cn(t) = ρcV dT/dt   (2)

where α_s compensates for the inexact conjecture of the ratio of the power consumed by the friction between the screw and nut to the total motor power, and for some unexpected heat generation. α_lb and α_rb are the compensation coefficients for the heat generated at the left and right support bearings, respectively. β_1 and β_2 were introduced to compensate for environmental/shape factors and the heat transfer coefficients at the boundary. β_1b and β_2b represent the compensation coefficients at the support bearings in the same manner. γ compensates for the conduction coefficient. Finally, to compensate for any error due to simplifying the heat transfer between the screw and the nut, δ was introduced. Once the frictional heat source (Q_f) and the MLCM compensation coefficients are determined in Eq. (2), the temperature distribution on the screw and the resulting thermal deformation can be obtained.

2.2. Estimation of frictional heat (Q_f) and the MLCM compensation coefficients

Motor torque must overcome basic motor torque for rotation, bearing friction and friction between the screw shaft and nut. Through some preliminary experiments, the portion of the total motor torque attributable to overcoming ball screw friction was assigned. Since the torque can be obtained by multiplying the measured motor current by the given torque coefficient, the frictional heat between the screw and the nut can be calculated by

Q_f = (K_q · i) w̄ = T_mea · w̄   (3)

Fig. 2 shows a single axis feed drive system that consists of a ball screw and a feed motor.
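Eqs. (1) and (3) can be sketched numerically: frictional heat from the measured motor current, fed into a forward-Euler integration of a simplified lumped heat balance (the conduction terms of Eq. (1) dropped, a single convection term kept). All numeric values below are illustrative assumptions for a steel screw shaft, not the paper's data or its full MLCM model:

```python
import math

def frictional_heat(K_q, current, rpm):
    """Frictional heat rate in the spirit of Eq. (3): motor torque from the
    torque coefficient K_q [N*m/A] and measured current i [A], multiplied
    by the rotational velocity w_bar [rad/s]."""
    w_bar = rpm * 2.0 * math.pi / 60.0   # rotational velocity [rad/s]
    torque = K_q * current               # T_mea = K_q * i  [N*m]
    return torque * w_bar                # heat rate [W]

def lumped_temperature(Q_f, h, A, T_inf, rho, c, V, t_end, dt=0.1):
    """Forward-Euler integration of a simplified lumped heat balance
    rho*c*V*dT/dt = Q_f - h*A*(T - T_inf); conduction terms omitted."""
    T = T_inf
    for _ in range(int(t_end / dt)):
        dTdt = (Q_f - h * A * (T - T_inf)) / (rho * c * V)
        T += dTdt * dt
    return T

# Hypothetical screw shaft: steel, 40 mm diameter, 1.06 m long
Q = frictional_heat(K_q=0.5, current=2.0, rpm=420)   # ~44 W
rho, c = 7850.0, 460.0                  # steel density, specific heat
r, L = 0.02, 1.06
V = math.pi * r**2 * L                  # shaft volume [m^3]
A = 2.0 * math.pi * r * L               # convective surface [m^2]
T30 = lumped_temperature(Q, h=20.0, A=A, T_inf=20.0,
                         rho=rho, c=c, V=V, t_end=1800.0)
```

With these assumed parameters the 30 min temperature rise comes out in the same order of magnitude as the paper reports; in the actual MLCM the compensation coefficients of Eq. (2) absorb exactly the uncertainty that such guessed heat transfer coefficients introduce.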
The ball screw was of grade C5, the backlash of which is less than 0.005 mm, and the maximum feed speed was 18 m/min [Fig. 2. Testbed for the thermal behavior analysis of a ball screw system]. Temperatures were measured during motion, with T-type thermocouples with 0.1 s response characteristics. After experimentally determining the frictional heat using Eq. (3), the MLCM compensation coefficients were obtained by fitting the model (Eq. (2)) to the measured temperature data. A new optimization technique was developed and applied to minimize the error between the model and the measured data. It was named the genius education algorithm (GEA) [13,17] and was shown to be able to solve various optimization problems with good speed and reliability. It also excludes the need for computing derivatives and lessens the difficulty of determining parameter values. In the present research, the MLCM compensation coefficients were obtained through conducting experiments on a test bed and applying the GEA. Fig. 3 shows comparisons between the temperatures estimated by the MLCM model and the measured temperatures [Fig. 3. Comparison with measurement and MLCM analysis (420 rpm, stop time: 1 s)]. The estimation was quite accurate, with the error bounded within 3%, which validates the proposed scheme: the MLCM model with compensation coefficients determined using the GEA.

2.3. Thermal deformation error of the ball screw

The developed algorithm is applied to a two-axis CNC lathe. Thermal deformation analysis is performed regarding the z-axis only. The fundamental specification is as in Table 1.

Table 1. The specification for a ball screw system of a CNC lathe
Screw shaft length: 1060 mm
Nut length (one side): 60 mm
Preload: 200 kgf
Stroke: 400 mm
Nut type: double nut

Since it is almost impossible to measure the temperatures at the suggested points, the MLCM compensation coefficients and the frictional heat cannot be estimated by measuring the temperature and motor current as described. However, the main need for the compensation coefficients arises from the inexactness of the heat transfer coefficients. Since these do not change drastically for the same type of nut and screw structure, the compensation coefficients, other than the heat generation term, can be experimentally determined on a test bed. The frictional heat generation term needs to be evaluated by an analytical model because it differs for each machine tool. The following equation can be used to obtain the torque and the frictional heat due to the friction between the screw and the nut:

T_ana = ∫ W r dx
Q_f = T_ana · w̄ = ∫ W r w̄ dx   (4)

The load distribution W varies with respect to the preload given to the nut as well as the specifications (nut length, ball diameter, screw shaft radius, lead angle, etc.) of the ball screw system concerned. Hence, the frictional torque and heat can be evaluated by the above equation after calculating the load distribution [13]. In this paper, α_s in Eq. (2) was corrected by comparing the calculated frictional heat with that measured on a test bed. The frictional heat of the machine tool concerned can subsequently be calculated from the corrected α_s and the load distribution W along the z-axis of the machine. Fig. 4 shows the load distribution along the z-axis [Fig. 4. Load distribution along the nut]. The preload imposed was 200 kgf. Each nut was 60 mm long and of double nut structure; the total length of the nut was thus 120 mm. Fig. 4 represents the load distribution of the left nut, and because of the symmetry the other nut has the same distribution. The distribution in Fig. 4 can be substituted in Eq. (4) to determine the heat generated.

Based on the preceding derivation, thermal analysis of a ball screw system was performed and the results are illustrated in Figs. 5 and 6. The feed conditions used for both the analysis and the experiment are shown in Table 2. The figures show the thermal errors within the stroke range, with the errors being represented relative to the machine origin. Fig. 5 shows the temperature distribution estimated by substituting the calculated frictional heat and the corrected compensation coefficients in Eq. (2) [Fig. 5. Temperature rise at z-axis screw shaft of the CNC lathe]. The temperature rises by 7.55 °C after 30 min operation for the z-axis stroke of 400 mm. Fig. 6 shows the errors due to thermal expansion within the stroke range [Fig. 6. Thermal deformation of z-axis screw shaft of the CNC lathe]. The error was measured with respect to a reference position, which was set to be −10 mm from the machine origin, both in the estimation and in the experiment. The maximum error, 34.2 µm, occurred at the end of the stroke. It was assumed that the thermal error of the ball screw system consists of only linear axial error.

Table 2. The feeding condition for the analysis and experiment of the CNC lathe
Feeding velocity: 12 m/min
Stop time: 2 s
Feeding time: 30 min

3. Thermal error analysis of the guide way

There has been little active research on the thermal behavior of the guide way, since its effect on a machine tool's accuracy has been considered less significant. However, in most cases where a CNC lathe takes sliding guide ways, the heat generated between the sliding surfaces can
significantly degrade positioning accuracy. Furthermore, the thermal error of the guide way causes angular errors such as roll, pitch and yaw, as well as linear positioning error, subsequently increasing the amount of error at the tool-tip. In this paper, FEM analysis is applied to estimate the thermal error of guide ways. The heat sources are estimated from experiments, to be used for the FEM [18].

3.1. Estimation of the heat source and the boundary condition for FEM

The guide ways of the object CNC lathe are modelled as shown in Fig. 7, in order to estimate the temperature rise and the subsequent thermal deformation generated by the friction of sliding [Fig. 7. Geometry model for the structure of the CNC lathe guide way]. The turret is placed on the x-axis carriage but is excluded from the FEM analysis due to its complex fixing condition and structure. Each element was a cube with eight nodes, a node on each corner of the cube. The number of nodes and elements was 3670 and 2376, respectively. The following assumptions were made to perform thermal analysis by FEM:

1. Machining is not performed; thus the chip effect is not considered.
2. Heat conduction from the motors is replaced by heat fluxes.
3. Heat generated between the sliding surfaces is replaced by heat fluxes directed to the surfaces of the sliding surface elements.
4. There is no thermal deformation in the x-, y-, or z-direction on the bottom of the machine.

According to the above assumptions, the heat sources of the guide ways were divided as follows:

1. Heat generated between the carriage and the guide block.
2. Heat conducted from the feed drive motors.
3. Heat conducted from the ball screw.

The first of the above is the biggest heat source for the guide way, and changes as the feed velocity changes. Exact estimation of the heat generated imposes many difficulties. Therefore, the boundary conditions for the FEM were replaced by heat fluxes, which were calculated from the measured temperatures. Temperatures were measured at points 1 and 2 in Fig. 7 while the machine was running under the conditions in Table 2. The difference in the temperature rise between the two points was almost constant at 0.173 °C, and the temperature rise at point 3 and at point 4 was measured to be 0.8 °C and 1.53 °C, respectively. For commercial machine tools, it is almost impossible to measure the temperatures at any other points on the guide way. The approximate heat flux can be estimated by using the following equation, along with the measured temperature values of points 1 and 2:

q″ = −k dT/dx   (5)

The heat flux estimated above was assumed to be conducted to the carriage, and the heat flux to the sliding surfaces of the guide block was determined according to the duration of contact.

3.2. Thermal deformation error of the guide way

The results of the FEM analysis are shown in Fig. 8 [Fig. 8. Temperature distribution of the CNC lathe guide way after 30 min heat up]. The figure describes the thermal distribution of the guide way structure after the z-axis was run for 30 min with the feeding conditions in Table 2. The maximum temperature rise was around 3.4 °C. Table 3 compares the FEM results with the values measured at the specified four locations. The thermal deformation at each node on the sliding surfaces A and B (Fig. 7) can be evaluated using the results in Fig. 8. The six error components (one linear positioning error, two straightness errors, three angular errors) associated with one axis of a machine tool can subsequently be identified.

Fig. 9 represents the temperature distribution of the two slide surfaces (A and B) of the guide block along the z-axis. As can be seen in the figure, the two distributions differ, causing angular errors while feeding along the z-axis. Fig. 10 shows the respective thermal deformations. In the machine tool concerned, the contact area of slide surface B is greater than that of A, and the machine tool
itself is not a symmetrical structure. Some difference therefore appears, as might be expected, between the thermal expansion of slide surface A and slide surface B [Fig. 9. Temperature rise of the two slide surfaces of the guide way system; Fig. 10. Z-direction thermal deformation of the two slide surfaces of the guide way system]. The deformation that is in the direction of feed can be interpreted as the linear positioning error.

Table 3. Comparison of temperature rise with measured and calculated values (after 30 min heat up)
Temperature difference of points 1 and 2: measured 0.173 °C, calculated 0.12 °C
Temperature rise of point 3: measured 0.8 °C, calculated 1.06 °C
Temperature rise of point 4: measured 1.53 °C, calculated 1.83 °C

Figs. 11 and 12 represent the thermal deformation errors on the slide surfaces A and B in the x- and y-axis, respectively [Fig. 11. X-direction thermal deformation of the two slide surfaces of the guide way system; Fig. 12. Y-direction thermal deformation of the two slide surfaces of the guide way system]. The results show that x- and y-axis straightness errors occur due to thermal deformation during motion in the z-axis. Pitch error is also induced by the thermal deformation, as shown in Fig. 13 [Fig. 13. X-axis angular error in the YZ plane (pitch angle error)]. The bottom surface of the guide way was assumed to be not deformable, so that it could be used as a reference for pitch error.

Fig. 14 illustrates the total amount of error, together with the individual error components, of a ball screw and guide way of a feed drive system [Fig. 14. Z-axis linear positioning error due to the thermal deformation of the feed drive system]. Fig. 14(a) shows the estimated tool-tip error induced by the pitch angle error of the guide way. It was viewed that a guide block simply guides the carriage block without constraints. Therefore, thermal expansion of the guide block does not give rise to linear positioning error of the tool-tip, whereas pitch angle error does, as in Fig. 14(a). The tool-tip is located 300 mm from the guide block along the y-axis, and is thus significantly affected by pitch angle error. Fig. 14 shows that the maximum tool-tip thermal error at the end of the stroke is 10.04 µm. The remaining angle errors, such as roll and yaw errors, appear to be negligibly small; those results are not shown in this paper.

4. Thermal deformation error of the feed drive system

To predict the linear positioning error of a tool (cutting point) in a CNC lathe, the thermal expansion of the ball screw and of the guide way should be added together. Fig. 14(c) represents the total tool-tip error caused by the thermal deformation of the feed drive system, which was obtained by summing the individual errors: the error due to ball screw expansion (Fig. 14(b)) and the pitch angle error due to guide way deformation (Fig. 14(a)). The resulting maximum linear positioning error due to thermal deformation was 44.24 µm.

The linear positioning error during z-axis feeding was measured in the CNC lathe concerned using a laser interferometer, for comparison with the calculated results. The measured values of the linear positioning error are shown in Fig. 15. The positioning error was measured immediately after the machine was turned on and again after 30 min movement, under the conditions in Table 2. The former was subtracted from the latter and this difference was taken as the error due to thermal deformation. As can be seen in the figure, the z-axis stroke was 400 mm and the error was measured every 20 mm. The maximum thermal error within a stroke turned out to be 40.45 µm.
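The error summation described above, ball-screw axial expansion plus the Abbe-type contribution of the pitch angle at a tool-tip offset, can be sketched numerically. The 300 mm offset and the two error magnitudes are taken from the text; the pitch angle itself is back-computed here for illustration and is an assumption, not a value reported in the paper:

```python
def tooltip_error(ball_screw_expansion, pitch_angle, tool_offset):
    """Total linear positioning error at the tool-tip: axial ball-screw
    expansion plus the Abbe-type error pitch_angle * tool_offset
    (small-angle approximation)."""
    return ball_screw_expansion + pitch_angle * tool_offset

ball_screw = 34.2e-6      # max ball-screw thermal error [m] (Fig. 6)
pitch_induced = 10.04e-6  # max pitch-induced tool-tip error [m] (Fig. 14(a))
offset = 0.3              # tool-tip offset from the guide block [m]

# Back-computed pitch angle (illustrative): ~33.5 microradians
pitch_angle = pitch_induced / offset

total = tooltip_error(ball_screw, pitch_angle, offset)   # ~44.24 µm
```

The sum reproduces the paper's 44.24 µm maximum, and the small back-computed angle illustrates why a seemingly negligible guide way deformation still contributes over 20% of the tool-tip error once multiplied by the 300 mm lever arm.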
Fig. 16 compares the experimental and the estimated thermal errors during z-axis feeding [Fig. 15. Experimental results using the laser interferometer; Fig. 16. Comparison between the estimated and experimental results]. The proposed scheme of analysis, which combines the individual errors of the ball screw and the guide way in the aforementioned manner, estimates the errors caused by the thermal deformation of the feed drive system with surprising accuracy.

Through estimation and experiment, it was shown that the thermal error due to the pitch angle error of the guide way accounted for 22.7% of the total thermal error. The guide way should therefore be taken into account to estimate thermal error accurately.

5. Conclusions

This paper proposes a method of estimating the tool-tip error caused by the thermal errors of a feed drive system. The approach first individually models and calculates the thermal errors of the ball screw and the guide way structures using MLCM/GEA and FEM, respectively, and then adds them together. The ball screw gives rise to only linear positioning errors, whereas the guide way causes angular errors as well as linear positioning and straightness errors. Of all the angular errors, it is the pitch angle error that most affects the linear positioning error at the tool-tip. For better thermal analysis of a feed drive system, the guide way as well as the ball screw should be taken into account. Using the proposed scheme, the thermal error of a ball screw system can be found with only its specifications, and the heat flux of the guide way system can easily be obtained by experiment and then used for FEM analysis. The effectiveness of the proposed models was proven through experiments, with an accuracy of 3.79 µm at the end of the stroke.

Acknowledgements

This research was funded by the Machine Tool Division of Daewoo Heavy Industries Ltd., and many of the experiments were conducted using their machine tools in their plant. The authors are sincerely grateful to the persons concerned.

References
[1] J.B. Bryan, Ann. CIRP 39 (1990) 645–656.
[2] G. Spur, E. Hoffman, E. Paluncic, K. Benzinger, H. Nymoen, Ann. CIRP 37 (1988) 401–405.
[3] J.S. Chen, J.X. Yuan, J. Ni, S.M. Wu, Trans. Am. Soc. Mech. Engrs., J. Engng. for Industry 115 (1993) 472–479.
[4] A. Kurtoglu, Ann. CIRP 39 (1990) 417–419.
[5] Y. Hatamura, T. Nagao, M. Mitsuishi, K. Kato, S. Taguchi, T. Okumura, Ann. CIRP 42 (1993) 549–552.
[6] J.S. Chen, Int. J. Mach. Tools Manufact. 35 (1995) 593–605.
[7] S.C. Veldhuis, M.A. Elbestawi, Ann. CIRP 44 (1995) 373–377.
[8] M. Weck, Ann. CIRP 44 (1995) 589–598.
[9] T. Moriwaki, C. Zhao, Proceedings of the 8th International IFIP WG5.3 Conference, PROLAMAT'92, Japan (1992).
[10] M. Weck, L. Zangs, Proceedings of the 16th MTDR Conference 16 (1975) 185–194.
[11] T. Moriwaki, Ann. CIRP 37 (1988) 393–396.
[12] S.K. Kim, D.W. Cho, Int. J. Mach. Tools Manufact. 37 (1997) 451–464.
[13] S.K. Kim, Real-time estimation of temperature distribution in a ball screw system. PhD Thesis, Pohang University of Science and Technology (POSTECH), 1997.
[14] K. Takafuji, K. Nagashima, Jap. Soc. Mech. Engrs. 33 (1990) 620–626.
[15] J. Otsuka, S. Fukada, N. Obuchi, Jap. Soc. Precision Engng. 50 (1984) 8–13.
[16] L.M. Kordysh, L.V. Margolin, Soviet Engng. Res. 54 (5) (1983) 22–24.
[17] S.K. Kim, D.W. Cho, Genius education algorithm: a new global optimization method. IEEE Trans. Sys., Man Cybernet. (in preparation).
[18] ABAQUS Version 5.3 Manual. Hibbitt, Karlsson and Sorensen, Inc.
Load Distribution in a CORBA Environment

T. Barth, G. Flender, B. Freisleben, F. Thilo
University of Siegen, Hölderlinstr. 3, D-57068 Siegen, Germany
barth,thilo@fb5.uni-siegen.de, freisleb,plgerd@informatik.uni-siegen.de

Abstract

The design and implementation of a CORBA load distribution service for distributed scientific computing applications running in a network of workstations is described. The proposed approach is based on integrating load distribution into the CORBA naming service, which in turn relies on information provided by the underlying WINNER resource management system developed for typical networked Unix workstation environments. The necessary extensions to the naming service, the WINNER features for collecting load information, and the placement decisions are described. A prototypical implementation of the complete system is presented, and performance results obtained for the parallel optimization of a mathematical test function are discussed.

1. Introduction

Distributed software architectures based on the Common Object Request Broker Architecture (CORBA) [15] have started to offer real-life production solutions to interoperability problems in various business applications, most notably in the banking and financial areas. In contrast, most of today's applications for distributed scientific computing traditionally use message passing as the means for communication between processes residing on the nodes of a dedicated parallel multiprocessor architecture. Message passing is strongly related to the way communication is realized in parallel hardware and is particularly adequate for applications where data is frequently exchanged between nodes. Examples are parallel algorithms for complex numerical computations, such as in computational fluid dynamics, where essentially algebraic operations on large matrices are performed.

The advent of networks of workstations (NOWs) as a cost-effective means for parallel computing and the advances of object-oriented software engineering methods have
fostered efforts to develop distributed object-oriented software infrastructures for performing scientific computing applications on NOWs and also over the WWW [1, 4, 11, 22]. Other computationally intensive engineering applications with different communication requirements, such as simulations and/or multidisciplinary optimization problems [7] typically arising in the automotive or aerospace industry, have even strengthened the need for a suitable infrastructure for distributed/parallel computing. The common properties of scientific computing applications are: (a) the code is mathematically rather sophisticated, has been developed over a long period of time, and includes many thousand man-years of expert knowledge which is almost impossible to transfer into a redesigned and reimplemented version of the software; and (b) the requirements on computation times and storage capacities are usually very high.

From the software engineering point of view, these properties lead to several design aspects that must be considered when developing an adequate software infrastructure for dealing with these problems:

- To enable the reuse of "legacy code", a software design has to provide abstractions to wrap these codes and treat them as an integral part of the object-oriented design and implementation.
- The enormous demand for computation time can be met with parallel or distributed implementations running either on dedicated supercomputers or networked high-performance workstations; using networked workstations in a shared environment with other (console) users raises the demand for an optimal management of the available resources.

In Fig. 1, a proposal for a software system architecture for a distributed problem solving environment for engineering applications is presented. The main feature of such an environment is the integration of simulation and optimization software and the distributed solution of the particular computational problems involved.

Figure 1. Software
architecture for an integrated problem solving environment for the solution of engineering problems.

The middleware layer provides (platform-independent) functionality to start distributed components of the system and for the communication between them. It can be realized using an object-oriented approach based on CORBA ([15], [13], [17]), but alternatively, an implementation of the non-object-oriented Message Passing Interface standard (MPI) [12] or the Parallel Virtual Machine (PVM) [18] is also possible. The layer above implements the interface management functions providing the basis for application-level communication between objects. This layer encapsulates the application-specific interface, whether it is file-based or a programming interface, and makes a common interface for data exchange available, e.g. by creation and/or transformation of files. Furthermore, synchronization between the components must be handled. The topmost layer provides a common interface for the complete system. Components for pre- and postprocessing have their own (graphical) user interfaces. Additionally, user interfaces, e.g. for the convenient formulation of an optimization problem (by a graphical selection of nodes in a finite element model for constraints or decision variables, or the selection of a predefined objective function), and sophisticated visualization techniques for the results of the optimization are made available. As a whole, this layer should present the components of a coupled optimization and simulation system in a consistent manner. It also initiates and controls the data flow between distributed components: from model generation in preprocessing, through simulation and optimization, to the visualization of optimization results in postprocessing.

As already mentioned, the communication patterns between the distributed components of a system are important for selecting the most suitable middleware. If the amount of communication in an application is much less than the amount of computation, then the overhead
introduced by CORBA compared to low-level message passing is reasonable. For example, quite often a single simulation during an optimization run may take several minutes or hours to complete, such that the communication costs for passing problem-specific data (decision variables, constraint values etc.) are negligible. Furthermore, trying to use a message passing library like PVM or MPI in conjunction with a distributed object system is inconvenient and error-prone. Using message passing implies the "simulation" of method calls on remote objects. The sender has to pass a token to identify a method, then pack the arguments and send them to a specified node where the method should be executed. On the receiving node, the token has to be mapped to an object's method and the parameters of the method call must be unpacked. Adding methods to an interface of an object or changing the prototype of an existing one is therefore associated with implementing the whole packing and unpacking functionality on both the sender and the receiver side. In the regular case of using CORBA's static invocation interface (SII), the complete procedure of packing and unpacking parameters for a method call is unnecessary and handled by the ORB. When using the dynamic invocation interface (DII), e.g. for asynchronous method calls without using multiple threads, the user is responsible for packing the parameters on the client side, almost as in PVM. But in contrast to PVM, at least the unpacking of parameters on the server side is unnecessary. Thus, using object-oriented middleware such as CORBA for the implementation of a complex object-oriented software system for distributed scientific computing has several advantages. Furthermore, in contrast to other middleware platforms such as Legion [10], CORBA as the middleware "standard" (a) is likely to be available for future hardware/operating system environments, (b) will be known to an ever growing number of programmers, and (c) is intended to be an integral part of the current
Internet-based metacomputing efforts such as DATORR ("desktop access to remote resources") [6]. This implies that services that are essential for distributed scientific computing must also be realized as CORBA services.

In this paper, the design and implementation of a load distribution service for CORBA in a NOW environment is presented. Load distribution is one of the most important features in NOWs, since the console users of workstations in a network typically do not fully utilize the processing capabilities of their machines (e.g. while editing text, reading mail, browsing the web, or being physically absent), and thus the idle times of workstations are frequently as high as 95% [9]. We discuss different approaches to integrating a load distribution mechanism into an object request broker (ORB). Our approach is based on integrating it into the CORBA naming service [14], which in turn relies on information provided by the underlying WINNER resource management system ([2], [3]) that we have developed for typical Unix NOW environments. The necessary extensions to the naming service and the WINNER features for the collection of load information from workstations and the placement decisions are described. A prototypical implementation of the complete system carried out within our research group is used to distribute the computations of a decomposed mathematical optimization problem in our NOW environment. Finally, performance results will be presented.

The paper is organized as follows. Section 2 discusses possible approaches to integrating load distribution into CORBA and presents our solution based on a modified naming service and WINNER. In Section 3, the relevant features of WINNER are described. Section 4 presents performance results. Section 5 concludes the paper and discusses areas for future research.

2. Integrating load distribution into CORBA

In general, CORBA applications consist of a set of clients (application objects) requesting a set of services.
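The core idea of the paper's approach, resolving a service name to the replica on the least-loaded host, can be simulated in a few lines. The following Python sketch is illustrative only: the class, host names, and load table are hypothetical stand-ins for the CORBA naming service and the load queries answered by the WINNER system manager described later in this section.

```python
# Illustrative sketch only (not real CORBA): a naming context that,
# unlike the standard naming service, accepts several bindings per
# name and resolves to the object on the least-loaded host. The load
# table stands in for the queries the WINNER system manager answers.

class LoadAwareNamingContext:
    def __init__(self, host_load):
        self.bindings = {}          # name -> list of (host, object_ref)
        self.host_load = host_load  # hypothetical host -> load mapping

    def bind(self, name, host, obj):
        # Multiple bindings under one name are allowed instead of
        # raising the AlreadyBound exception.
        self.bindings.setdefault(name, []).append((host, obj))

    def resolve(self, name):
        candidates = self.bindings.get(name)
        if not candidates:
            raise KeyError(name)    # stands in for the NotFound exception
        # Select the replica whose host currently reports minimum load.
        _, obj = min(candidates, key=lambda c: self.host_load[c[0]])
        return obj

loads = {"hostA": 0.9, "hostB": 0.1, "hostC": 0.5}
ns = LoadAwareNamingContext(loads)
for host in loads:
    ns.bind("Optimizer", host, "ref@" + host)
print(ns.resolve("Optimizer"))  # the replica on the least-loaded host
```

Because the selection happens inside resolve, a client written against the plain naming-service interface gets load distribution without any source change, which is exactly the transparency argument made below.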
These services can either be other application objects within a distributed application, or commonly available services (object services) providing name resolution (naming service) or object persistence (persistence service). There are different approaches to integrating load distribution functionality into a CORBA environment:

- Implementation of an explicit service (e.g. a "trader" [19]) which returns an object reference for the requested service on an available host (centralized load distribution strategy), or references for all available service objects. In the latter case, the client has to evaluate the load information for all of the returned references and make a selection by itself (decentralized load distribution strategy).
- Integration of the load distribution mechanism into the ORB itself, e.g. by replacing the default locator by a locator with an integrated load distribution strategy [8], or by using an IDL-level approach [21].

The drawbacks of these approaches are that either the source code of clients has to be changed (as in the first approach), or that load distribution depends on a specific ORB implementation or IDL compiler and can thus not be used with other ORBs (as in the second approach). To integrate load distribution transparently into a CORBA environment, our proposal is based on integrating it into the naming service. This ensures transparency for the client side and allows the reuse of the load distribution naming service in any other CORBA-compliant ORB implementation. Almost every CORBA-based implementation utilizes the naming service. For applications which make no use of the naming service, it would be useful to implement load distribution as an explicit service.

In Fig. 2, the proposed approach is illustrated graphically. The load information collected by the WINNER system manager (described in Section 3) from the node managers is available to the naming service. This load information represents system load, i.e. data like CPU utilization which is
collected by the host operating system. Thus, the load information reflects CORBA-induced activity as well as load caused by other processes running on the same host. The system manager has an interface to query the status of each node manager and functionality to determine the machine with the currently minimum load. Requests from application objects to the naming service are resolved using this load information for the selection of an appropriate server. The implementation of a naming service which provides transparent load distribution for any client application is described in the following.

Figure 2. Schema for the integration of load distribution in a naming service.

The naming service is not an integral part of a CORBA ORB but is always implemented as a CORBA service. The OMG specifies the interface of a naming service without making assumptions about implementation details of the service. Therefore, every ORB can interoperate with a new naming service as long as it complies with the OMG specification. Using the naming service, a client can get a reference to an object that is associated with a named service. This object needs to be bound to a name by the server which implements it. Normally, a server creates a binding of an object to a name by calling the bind method of the naming service. As described in the CORBA specification [15], via a naming service normally only one object can be bound to a particular name in a context at the same time. If the name is already bound in a given context, the operation raises the AlreadyBound exception.

In our approach, the naming service implementation was altered. The bind operation no longer raises an exception, but allows binding of several objects to the same name. A call to resolve checks the load status of all machines with an object bound to the given name. It then selects the one with the minimum load and returns it to the client. The following two excerpts from the source code of the altered naming service show the
different ways of handling multiple services registered under the same name. The original naming service checks for a name collision and throws the mentioned exception:

    ob = resolve_simple(n);
    if (!rebind)
      throw AlreadyBound();

The check for a name collision is now restricted to the registration of a new context and is not applied when registering a service. Therefore, the type t of the service or context to be registered is checked:

    if (t == ncontext) {
      ob = resolve_simple(n);
      if (!rebind)
        throw AlreadyBound();
    }

If such an altered naming service is used to obtain the reference to an object, the application is independent of the actual ORB implementation. To distribute the load, the load information provided by WINNER needs to be evaluated. The following pseudo code depicts the selection mechanism of the best host for the requested service inside the naming service:

    requested_name_found = false
    for all objects in context
        if <name of object> == requested_name
            <store object in objectList>
            requested_name_found = true
    if requested_name_found
        object_found = false
        repeat
            for all objects in objectList
                <store hostname of object in hostList>
            best_host = <request from WINNER best host in hostList>
            if <object exists on best_host>
                object = <object for requested_name on best_host>
                object_found = true
            else
                <delete object from objectList>
        until object_found or objectList is empty
        if object_found
            return object
        else
            return <first object with requested_name in context>
    else
        throw exception NotFound

The first loop collects all objects registered under the requested name. [...] works almost like the standard Unix command rsh, except that the job is automatically started on the currently best suited workstation in a network.

In the WINNER terminology, the CORBA naming service acts as a special kind of job manager. Unlike other job managers, the naming service uses WINNER only for querying the fastest available host. It does not contact the node managers itself to actually start the CORBA services on the corresponding workstation. Instead, this is achieved via the normal CORBA
infrastructure.

In order to perform suitable task placement decisions, the system manager must have an accurate global view of the processor speed and current utilization of the workstations. To achieve this, each node manager provides the system manager with the information related to its own workstation.

At startup, each node manager performs a simple benchmark loop, evaluating the machine's speed (of a single processor) in integer operations, floating point calculations, and memory access. This benchmark's result is a single number proportional to the host's sequential performance, relative to every other workstation in a network. This value (called the base speed) is reported to the system manager along with the node name, IP address, and other static data such as the amount of main memory and the number of CPUs.

Afterwards, the node manager regularly queries several load characteristics of its local host and reports them to the system manager either if they differ significantly from the last set of data sent or after a certain time interval, indicating that the node manager is still "alive".

To measure the current workload of a machine, Unix kernels provide so-called load average values, averaging the number of processes in the ready queue over certain time intervals; the fastest of these is typically averaged over the last 60 seconds (depending on which operating system is used). Due to this averaging procedure, these load values follow the real load situation only very slowly. To get more recent data, WINNER computes the current run queue length from two consecutively measured load-average values. This results in a more up-to-date load value which is comparable between different operating systems.

Unfortunately, the load average values as reported by the Unix kernels may be misleadingly high. This can happen when many short-running processes are in the run queue. In that case, the CPU utilization can be observed more accurately using the fraction of time
the processor(s) spent in the idle CPU state. In the single-processor case, the fraction of time i spent in this state is reported by the operating system to be within the interval [0, 1]. In the case of a machine with p processors, an interval of [0, p] is used instead. However, once the CPU idle time is close to zero, it is impossible to distinguish whether only one process is fully utilizing the CPU or whether several processes (per processor) are present in the CPU run queue. Hence, both types of information have to be used by WINNER to determine processor utilization exactly.

Whenever a job manager requests a new node for its job, the system manager has to select the most appropriate machine. Besides checking for the presence of a console user and verifying static properties such as requested memory sizes, the system manager basically takes the currently available speed into account. Based on the workstation's base speed b, its current speed is calculated as s = b * a, where a denotes the fraction of available processing power in the presence of the current workload. Assuming a constant load, a could be calculated as a = 1 / (l + 1) (where l is the load average as computed by the node manager). This reflects the fact that after starting a new process, there are l + 1 active processes sharing the CPU.

As explained above, computing the available processor speed based on the load average may be inaccurate. Hence, for achieving more precise values, WINNER's system manager instead calculates a by using i (the percentage of time the processor was idle): a = i, yielding values of a close to 1 for high percentages of idle time and small values for higher CPU usage. In the case of small values of i, it can be assumed that the workload consists of more than one process. Then, the load average value should be used for calculating a in order to reflect the higher load. An empirically determined threshold e is used for switching between the two cases. a is hence computed as follows:

    a = i             if i >= e
    a = 1 / (l + 1)   if i < e

For seamlessly integrating symmetric multiprocessor machines with shared memory (SMP) into WINNER networks, the system manager
has to adapt its computation of a accordingly. There are two basic differences that have to be taken into account. First, on a machine with p processors, the fraction of CPU idle time is reported in the interval [0, p]. Second, the number of running processes (constituting the load average l) will be serviced by all p processors. Assuming an ideal scheduler (in the operating system), their load will be equally distributed across all processors. Nevertheless, a single process can only be served by a single processor. Hence, whenever there are fewer processes than processors, the available speed must be derived from the capacity of only one processor.

This situation changes in the case of multithreading. But since it is impossible to predict whether a given program binary will use multiple threads of control, a resource "multiprocessor machine" has to be requested by the user explicitly in this case. Although the scheme presented here does not help to automatically select multiprocessor workstations for multithreaded applications, the workload generated by multiple threads will still be observed correctly.

The computation of a is performed analogously to the single-processor case, with the exception that the load average enters via an intermediate per-processor value l'. For l >= p, l' is computed as l' = l / p; for l < p, l' = l as with a single processor. Finally, a is computed from i / p and l' as in the single-processor case, while the idle-time-based values are excluded whenever there are fewer processes than processors. It is easy to see that this computation coincides with its single-processor counterpart for p = 1. Consequently, WINNER always uses this enhanced scheme for computing a.

A problem arises when the system manager receives several requests for available hosts within a short time interval. Without further measures, the system manager would choose the same node for all these requests, resulting in the fastest host being swamped with new tasks. This is due to the fact that it takes some seconds for the selected workstation's load to change and some additional time for this
information to be reported back to the system manager. In order to avoid this problem, a bias procedure was introduced into the system manager's calculations which assigns a temporary penalty to recently allocated nodes. This penalty is withdrawn when the system manager receives new load information from the respective node.

Figure 4 shows the output of WINNER's status tool, which illustrates the meaning of the maximum speed and the current speed. On the left side of the status window, the names of the machines are listed. Available machines are printed in black, whereas unavailable workstations (for example, those with active console users) are printed in light grey. The relative speed of the given machines is indicated by the size of coloured bars. The total size of a bar corresponds to the statically measured machine speed. The black fraction of the bar indicates the currently available fraction of the processor speed (a). The bar's blue fraction (printed in grey) is currently in use by other processes. In this snapshot, almost all available CPU power is usable for WINNER jobs.

Figure 4. WINNER status tool, displaying relative speeds of workstations.

4. Experimental results

To investigate the benefits of an integrated load distribution mechanism in CORBA, a test case from mathematical optimization was taken. The well-known Rosenbrock test function is widely used for benchmarking optimization algorithms because of its special mathematical properties.
For the general n-dimensional case it is defined as follows [20]:

    f(x) = sum_{i=1}^{n-1} [ 100 * (x_{i+1} - x_i^2)^2 + (1 - x_i)^2 ]

with x = (x_1, ..., x_n).

In our experiments, the function is used to demonstrate the benefits of an adequate placement of computationally expensive processes on nodes of a NOW. To compute the function in parallel, a decomposed formulation of the Rosenbrock function has been taken. In the decomposed formulation, several (sub-)problems with a smaller dimension than the original n-dimensional problem are solved by workers, and the subproblems are then combined for the solution of the original problem in a manager. In the case of the Rosenbrock function, the expression (x_{i+1} - x_i^2)^2 prevents the independent solution of a subset of the sum, because the indices i and i+1 occur in every term of the sum. Therefore, a manager/worker scheme can be applied, in which the worker processes compute a solution for decision variables which are non-decision variables in the manager process and vice versa. These worker processes can be computed in parallel, alternately with the computation of the manager process.

In Fig. 5 the results of the different test scenarios are compared. All test cases were computed using an implementation of the Complex Box algorithm [5]. Both manager and worker problems were solved using the same algorithm with different parameters: the maximal number of iterations for worker problems was set to 10000, for the manager problem it was set to 2000. This setting reduces the sequential part of the solution strategy by decreasing the maximal number of iterations and thereby the time for the computation of the manager problem. All other parameters of the Complex Box algorithm were identical.

Different scenarios were used to show the benefit of load distribution. The computation times for the test problems using the original naming service were compared with the computation times using WINNER when computing the manager/worker schemes on a network of up to 10 workstations (Pentium II/300 MHz under Linux 2.1). The naming service was implemented in C++. The ORB
implementation used was omniORB 2.7.1 [16].

For the comparison of the different implementations of the naming service, a background load (a long-running optimization of a 500-dimensional Rosenbrock function) was generated on 2, 4, 6 or 8 hosts. Additionally, the computations were performed with no background load.

Figure 5. Different test cases of a decomposed 30- and 100-dimensional Rosenbrock function with 3 and 7 worker problems under different load situations. (The plot shows runtime in seconds versus the number of hosts with background load, for the curves CORBA 100/7, CORBA/WINNER 100/7, CORBA 30/3 and CORBA/WINNER 30/3.)

The two lower curves (CORBA 30/3, CORBA/WINNER 30/3) show the computation times for a 30-dimensional Rosenbrock function with 3 worker problems (with problem dimensions 10, 9 and 9) and a 2-dimensional manager problem. In this scenario, 6 workstations were available for the 4 processes. The effect of load distribution is obvious when 2 hosts had background load: the selection of hosts with the new naming service avoided these hosts, and the computation time was the same as in the scenario with no background load. When 4 workstations were busy, WINNER had to select 2 of these hosts, and the total computation time is only marginally less than the time achieved by the original naming service.

Differences in computation times between the two naming services are caused by the selection of hosts for manager and worker processes. Firstly, the worker processes compute different optimization problems and hence need different computation times. Placing a longer-running worker process on a host with background load causes a slightly longer total computation time. If the client application can predict the computation times for different workers, it should allocate a host for the longer-running worker first. Secondly, it makes almost no difference in terms of total computation time whether one, two or even three workers are placed on hosts with background load. The total time conforms to the longest-running
worker process because of the necessary synchronization with the manager process. These effects of process placement depend on the properties of the distributed application. The test cases show the benefit of load distribution even for numerically expensive processes typical for scientific computing.

The two upper curves (CORBA 100/7, CORBA/WINNER 100/7) compare the computation times for a 100-dimensional Rosenbrock function with 7 worker problems (3 workers with dimension 14, 4 workers with dimension 13) and a 6-dimensional manager problem. The resulting 8 processes had to be distributed among 10 workstations. The benefit of load distribution is again most obvious when WINNER had the possibility to select idle workstations. With increasing background load the advantage diminishes, because both implementations of the naming service are forced to select services on hosts with background load.

To summarize, the benefit of load distribution for the test cases mentioned above can be estimated at ca. 40% in the best case. Even in the worst case it yields at least the same results as the unmodified naming service. This is the case if all available hosts are already in use and the load distribution mechanism has no possibility to explicitly select idle hosts. The mathematical properties of the test cases mentioned above result in an average reduction of computation time of about 15%.

5. Conclusions

In this paper, the design and implementation of a load distribution service for CORBA in a NOW environment suitable for distributed scientific computing was presented. The proposed approach was based on integrating load distribution into the CORBA naming service, which in turn relied on information provided by the underlying WINNER resource management system developed for typical Unix NOW environments. The necessary extensions to the naming service, the WINNER features for the collection of load information, and the placement decisions were described. A prototypical implementation of
the complete system was described, and performance results obtained for the parallel test cases were discussed.