数据库_外文翻译__毕业设计
外文文献翻译格式范例
本科毕业设计(外文翻译)
外文参考文献译文及原文
学院:信息工程学院
专业:信息工程(电子信息工程方向)
年级班别:2006级(4)班
学号:3206003186
学生姓名:柯思怡
指导教师:田妮莉
2010年6月

目录
熟悉微软SQL Server
1 Section A 引言
2 Section B 再谈数据库可伸缩性
3 Section C 数据库开发的特点
Get Your Arms around Microsoft SQL Server
1 Section A Introduction to SQL Server 2005
2 Section B Database Scalability Revisited
3 Section C Features for Database Development

熟悉微软SQL Server
1 Section A 引言
SQL Server 2005 是微软SQL产品线上最值得期待的产品。
在经过了上百万封邮件、成百上千份规范说明以及数十次修订之后,微软承诺SQL Server 2005将成为最新的基于Windows的数据库应用开发平台。
这一节的内容将指出SQL Server 2005产品的一些重要特征。
SQL Server 2005几乎覆盖了OLTP及OLAP技术的所有内容。
微软公司的这个旗舰数据库产品几乎能覆盖所有的东西。
这个软件在经过五年多的制作后,成为一个与它任何一个前辈产品都完全不同的产品。
本节将介绍整个产品的大部分功能。
当人们去寻求其想要的一些功能和技术时,可以从中提取出重要的和最感兴趣的内容,包括SQL Server Engine的一些演变历史,以及各式各样的SQL Server 2005版本、可伸缩性、可用性、大型数据库的维护以及商业智能等,如下:
● 数据库引擎增强技术。
SQL Server 2005 对数据库引擎进行了许多改进,并引入了新的功能。
金融体制、融资约束与投资——来自OECD的实证分析
R. Semenov
Department of Economics, University of Nijmegen, Nijmegen(荷兰内梅亨大学经济学院)
这篇论文考查了OECD的11个国家中现金流量对企业投资的影响。我们发现不同国家之间投资对企业内部可获取资金的敏感性具有显著差异,并且银企之间具有明显紧密关系的国家的敏感性比银企之间保持公平关系的国家的低。同时,我们发现融资约束与整体金融发展指标不存在关系。我们的结论与资本市场信息和激励问题对企业投资具有重要作用这种观点一致,并且紧密的银企关系会减少这些问题从而增加企业获取外部融资的渠道。
一、引言
各个国家的企业在显著不同的金融体制下运行。
金融发展水平(例如,相对于GDP的信贷规模和相应的股票市场资本化程度)、所有者与管理者之间以及企业与债权人之间的关系模式、公司控制权市场的活跃程度等方面的差别,都有充分的文献记录。在完美资本市场中,具有正净现值投资机会的企业总能获得资金。
然而,经济理论表明,诸如信息不对称和激励问题等市场摩擦会使获得外部资本的成本更高,并且具有盈利投资机会的企业不一定能够获得所需资本。这表明融资因素,例如内部产生的资金数量、新增债务和权益的可得性,共同决定了企业的投资决策。现今已有大量考查外部资金可得性对投资决策影响的实证研究(可参考,例如Fazzari(1998)、Hoshi(1991)、Chapman(1996)、Samuel(1998))。大多数研究结果表明,现金流量等金融变量有助于解释企业的投资水平。
这一研究结果通常被解释为企业投资受限于外部资金的可得性。
很多模型强调,运行正常的金融中介和金融市场有助于缓解信息不对称和交易成本问题,从而促使储蓄资金投向长期和高回报的项目,并提高资源的配置效率(参看Levine(1997)的评论文章)。
因而我们预期,处于更发达金融体制中的国家的企业将更容易获得外部融资。几位学者已经指出,企业与金融中介机构之间的密切关系可进一步缓解金融市场摩擦。
附件1:外文资料翻译译文
数据库简介
1. 数据库管理系统(DBMS)
众所周知,数据库是逻辑上相关的数据元的集合。
这些数据元可以按不同的结构组织起来,以满足单位和个人的多种处理和检索的需要。
数据库本身不是什么新鲜事——早期的数据库记录在石头上或写在名册上,以及写入索引卡中。
而现在,数据库普遍记录在可磁化的介质上,并且需要用计算机程序来执行必需的存储和检索操作。
在后文中你将看到除了简单的以外,所有数据库中都有复杂的数据关系及其连接。
处理与创建、访问以及维护数据库记录有关的复杂任务的系统软件包叫做数据库管理系统(DBMS)。
DBMS软件包中的程序在数据库及其用户间建立了接口(这些用户可以是应用程序员、管理员和其他需要信息的人员,以及各种操作系统程序)。
DBMS可组织、处理和显示从数据库中选择的数据元。
该功能使决策者可以搜索、探查和查询数据库的内容,从而对那些常规报告无法回答的、非经常性且无计划的问题作出解答。
这些问题最初可能是模糊的并且是定义不清的,但是人们可以浏览数据库直到获得问题的答案。
也就是说DBMS将“管理”存储的数据项,并从公共数据库中汇集所需的数据项以回答那些非程序员的询问。
在面向文件的系统中,需要特定信息的用户可以将他们的要求传送给程序员。
该程序员在时间允许时,将编写一个或多个程序以提取数据和准备信息。
但是,使用DBMS可为用户提供一种更快的、用户可以选择的通信方式。
顺序的、直接的以及其它的文件处理方式常用于单个文件中数据的组织和构造,而DBMS能够访问和检索非关键记录字段的数据,即DBMS能够将几个大文件中逻辑相关的数据组织并连接在一起。
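作为上述思想的一个极简示意(并非原文示例;代码假设类路径上有嵌入式数据库 H2 的 JDBC 驱动,表名与数据均为虚构),下面的 Java 代码演示 DBMS 如何根据一条声明式查询,把两个逻辑相关的表按帐号连接起来,并按非键字段进行筛选:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AdHocQueryDemo {
    public static void main(String[] args) throws Exception {
        // 连接内存数据库(假设类路径上有 H2 驱动);表名与数据均为虚构示例
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo", "sa", "");
             Statement st = conn.createStatement()) {

            // 两个逻辑相关的"文件":顾客主文件和订单文件,通过帐号(account_no)关联
            st.execute("CREATE TABLE customers(account_no VARCHAR(10) PRIMARY KEY, " +
                       "name VARCHAR(50), address VARCHAR(100))");
            st.execute("CREATE TABLE orders(order_id INT PRIMARY KEY, " +
                       "account_no VARCHAR(10), amount DECIMAL(10,2))");
            st.execute("INSERT INTO customers VALUES('A001','张三','北京路1号')," +
                       "('B002','李四','上海路2号')");
            st.execute("INSERT INTO orders VALUES(1,'A001',1500.00),(2,'B002',800.00)");

            // 即席查询:由 DBMS 负责连接两个表,并按非键字段(amount)筛选
            try (ResultSet rs = st.executeQuery(
                    "SELECT c.name, c.address, o.amount " +
                    "FROM customers c JOIN orders o ON o.account_no = c.account_no " +
                    "WHERE o.amount > 1000")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " / " + rs.getString(2)
                            + " / " + rs.getBigDecimal(3));
                }
            }
        }
    }
}
```

应用程序只描述"要什么数据",如何连接以及物理存储结构完全由 DBMS 决定,这正是上文所说的与面向文件系统的差别。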
逻辑结构。
确定这些逻辑关系是数据管理者的任务,由数据定义语言完成。
DBMS 在存储、访问和检索操作过程中可选用以下逻辑结构技术:(1)表结构。
在该逻辑方式中,记录通过指针链接在一起。
指针是记录中的一个数据项,它指出另一个逻辑相关的记录的存储位置,例如,顾客主文件的记录将包含每个顾客的姓名和地址,而且该文件中的每个记录都由一个帐号标识。
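下面用一段假设性的 Java 代码示意这一"指针"概念(仅为说明,不代表任何具体 DBMS 的实现):每条记录带有一个指针字段,保存另一条逻辑相关记录的存储位置,沿指针即可遍历整条记录链。

```java
import java.util.ArrayList;
import java.util.List;

// 简化示意:每条"记录"带一个指针字段,指向下一条逻辑相关记录的存储位置
public class LinkedRecordDemo {

    static class CustomerRecord {
        String accountNo;   // 帐号,用作记录标识
        String name;
        String address;
        int nextRelated;    // "指针":相关记录在存储区中的下标,-1 表示链尾

        CustomerRecord(String accountNo, String name, String address, int nextRelated) {
            this.accountNo = accountNo;
            this.name = name;
            this.address = address;
            this.nextRelated = nextRelated;
        }
    }

    public static void main(String[] args) {
        // 用一个列表模拟"顾客主文件"的存储区(数据为虚构)
        List<CustomerRecord> storage = new ArrayList<>();
        storage.add(new CustomerRecord("A001", "张三", "北京路1号", 2));   // 指向下标2
        storage.add(new CustomerRecord("B002", "李四", "上海路2号", -1));
        storage.add(new CustomerRecord("A003", "张三", "北京路1号", 1));   // 指向下标1

        // 沿指针链遍历逻辑相关的记录
        int pos = 0;
        while (pos != -1) {
            CustomerRecord r = storage.get(pos);
            System.out.println(r.accountNo + " " + r.name + " " + r.address);
            pos = r.nextRelated;
        }
    }
}
```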
附录附录A: 外文资料翻译-原文部分:CUSTOMER TARGETTINGThe earliest determinant of success in the development of a profitable card scheme will lie in the quality of applicants that are attracted by the marketing effort. Not only must there be sufficient creditworthy applicants to avoid fruitless and expensive application processing, but it is critical that the overall mix of new accounts meets the standard necessary to ensure ultimate profitability. For example, the marketing initiatives may attract sufficient volume of applicants that are assessed as above the scorecard cut-off, but the proportion of acceptances in the upper bands may be insufficient to deliver the level of profit and lesser bad debt required to achieve the financial objectives of the scheme.This chapter considers the range of data sources available to support the development of a credit card scheme and the tools that can be applied to maximize the flow of applications from the required categories.Data availabilityThe data that makes up the ingredients from which marketing campaigns can be constructed can come from many diverse sources. Typically, it will fall into four categories:1 the national or regional register of voters;2 the national or regional register of court judgments that records the outcomeof creditor-debtor legislation;3 any national or regional pooled information showing the credit history of clients of the participating lenders; and4 commercially compiled data including and culled from name and address lists, survey results and other market analysis data, e.g. neighborhoods and lifestyle categorization through geo-demographic information systems.The availability and quality of this data will vary from country to country and bureau to bureau.Availability is not only governed by the extent to which the responsible agency has undertaken to record it, but also by the feasibility of accessing the data and the extent (if any) to which local consumer legislation or other considerations (e.g. religious principles) will allow it to be used. Other limitations on the use of available data may lie in the simple impossibility or expense of accessing the information sources, perhaps because necessary consumer consent for divulgence has been withheld or because the records are not yet stored electronically.The local credit information bureaux will be able to provide guidance on all of these matters, as will many local trade or professional associations or the relevant government departments.Data segmentation and AnalysesThe following remarks deal with the ways in which lawfully obtained data may then be processed and analyzed in order to maximize its value as the basis of a marketing prospect list. Examples of the types and uses of data that will play a role in the credit decision area are discussed later in the chapter, within the context of application processing.The key categories into which prospects may be segmented include lifestyle, propensity to purchase specific products (financial or otherwise) and levels of risk. The leading international information bureaux will be able to provide segmentation systems that are able to correlate each of these data categories to provide meaningful prospect lists in rank order. Additionally, many bureaux will have the capability to further enhance the strength and value of the data. 
Through the selective purchasing of data from bona fide market sources, and by overlaying generic factors deduced from the analysis of the broad mass of industry information that routinely passes through their systems, the best international operators are now able to offer marketing and credit information support that can add significantly to the quality of new applicants.The importance of the role and standard of this data in influencing the quality of the target population for mailings, etc. should not be underestimated. Information that is dated or inaccurate may not only lead a marketer and the organization into embarrassment and damage their reputations, but it will also open the credit card scheme to applicants from outside either the target sector or ,worse still, applicants outside the lender’s view of an acceptable credit risk.From this, it follows that you should seek to use an information bureau whose business principles and operating practices comply with the highest levels of both competence and integrity.Developing the prospect databaseThis is the process by which the raw data streams are brought together and subjected to progressive refinement, with the output representing the refined base from which prospecting can begin in earnest. A wide experience-often across many different markets and countries-in the sourcing, handling and analysis of data inevitably improves the quality of the ideas and systems that a bureau can offer for the development of the prospect database.In summary, the typical shape of the service available from the very best bureaux will support a process that runs as follows:1.collect and consolidate all data to be screened for inclusion;2.merge the various streams;3.sort and classify the data by market and credit categories;4.screen the date using predetermined marketing and credit criteria; and5.consolidate and output the refined list.Bureaux will charge for the use of their expertise and systems.Therefore, consideration should be given to the volumes of data that are to be processed and the costs involved at each stage. The most cost-effective approach to constructing prospect databases only undertakes the lowest-cost screening process within the earlier stages. The more expensive screening processes are not employed until the mass of the data has been reduced by earlier filtering.It is impossible to be prescriptive about the range and levels of service that are available, but reference to one of the major bureaux operating in the region could certainly be a good starting point.Campaign Management and AnalysisAgain, this is an area where excellent support is available from the best-of-breed bureaux. They will provide both the operational support and software capabilities to mount, monitor and analyse your marketing campaign, should you so wish. Their depth of experience and capabilities in the credit sector will often open up income: cost possibilities from the solicitation exercise that would not otherwise be available to the new entrant.The First Important Applications of DBMS’sData items include names and addresses of customers, accounts, loans and their balance, and the connection between customers and their accounts and loans, e.g., who has signature authority over which accounts. 
Queries for account balances are common, but far more common are modifications representing a single payment from or deposit to an account.As with the airline reservation system, we expect that many tellers and customers (through ATM machines) will be querying and modifying the bank’s data at once. It is vital that simultaneous accesses to an account not cause the effect of an ATM transaction to be lost. Failures cannot be tolerated. For example, once the money has been ejected from an ATM machine ,the bank must record the debit, even if the power immediately fails. On the other hand, it is not permissible for the bank to record the debit and then not deliver the money because the power fails. The proper way to handle this operation is far from obvious and can be regarded as one of the significant achievements in DBMS architecture.Database system changed significantly. Codd proposed that database system should present the user with a view of data organized as tables called relations. Behindthe scenes, there might be a complex data structure that allowed rapid response to a variety of queries. But unlike the user of earlier database systems, the user of a relational system would not be concerned with storage structure. Queries could be expressed in a very high level language, which greatly increased the efficiency of database programmers. Relations are tables. Their columns are headed by attributes.Client –Server ArchitectureMany varieties of modern software use a client-server architecture, in which requests by one process (the client ) are sent to another process (the server) for execution. Database systems are no exception, and it is common to divide the work of the components shown into a server process and one or more client processes.In the simplest client/server architecture, the entire DBMS is a server, except for the query interfaces that the user and send queries or other commands across to the server. For example, relational systems generally use the SQL language for representing requests from the client to the server. The database server then sends the answer, in the form of a table or relation, back to client. The relationship between client and server can get more complex, especially when answers are extremely large. We shall have more to say about this matter in section 1.3.3. there is also a trend to put more work in the client, since the server will be a bottleneck if there are many simultaneous database users.附录B: 外文资料翻译-译文部分:客户目标:最早判断发展可收益卡的成功性是在于受市场影响的被吸引的申请人的质量。
本科毕业设计外文文献翻译
学校代码:10128
学号:
题目:Shear wall structural design of high-level framework
学生姓名:
学院:土木工程学院
系别:建筑工程系
专业:土木工程专业(建筑工程方向)
班级:土木08-(5)班
指导教师:(副教授)

Shear wall structural design of high-level framework
Wu Jicheng
Abstract: In this paper the basic concepts of manpower from the frame shear wall structure, analysis of the structural design of the content of the frame shear wall, including the seismic wall, shear span ratio design, and a concrete structure in the most commonly used frame shear wall structure the design of points to note.
Keywords: concrete; frame shear wall structure; high-rise buildings
The wall is a modern high-rise buildings is an important building content, the size of the frame shear wall must comply with building regulations. The principle is that the larger size but the thickness must be smaller geometric features should be presented to the plate, the force is close to cylindrical. The wall shear wall structure is a flat component. Its exposure to the force along the plane level of the role of shear and moment, must also take into account the vertical pressure. Operate under the combined action of bending moments and axial force and shear force by the cantilever deep beam under the action of the force level to look into the bottom mounted on the basis of. Shear wall is divided into a whole wall and the associated shear wall in the actual project, a whole wall for example, such as general housing construction in the gable or fish bone structure film walls and small openings wall. Coupled shear walls are connected by the coupling beam shear wall. But because the general coupling beam stiffness is less than the wall stiffness of the limbs, so. Wall limb alone is obvious. The central beam of the inflection point to pay attention to the wall pressure than the limits of the limb axis. Will form a short wide beams, wide column wall limb shear wall openings too large component at both ends with just the domain of variable cross-section rod in the internal forces under the action of many wall limb inflection point. Therefore, the calculations and construction should according to approximate the frame structure to consider. The design of shear walls should be based on the characteristics of a variety of wall itself, and different mechanical characteristics and requirements, wall of the internal force distribution and failure modes of specific and comprehensive consideration of the design reinforcement and structural measures. Frame shear wall structure design is to consider the structure of the overall analysis for both directions of the horizontal and vertical effects. Obtain the internal force is required in accordance with the bias or partial pull normal section force calculation. The wall structure of the frame shear wall structural design of the content frame high-rise buildings, in the actual project in the use of the most seismic walls have sufficient quantities to meet the limits of the layer displacement, the location is relatively flexible. Seismic wall for continuous layout, full-length through. Should be designed to avoid the wall mutations in limb length and alignment is not up and down the hole. The same time. The inside of the hole margins column should not be less than 300mm in order to guarantee the length of the column as the edge of the component and constraint edge components. The bi-directional lateral force resisting structural form of vertical and horizontal wall connected. Each other as the affinity of the shear wall.
For one, two seismic frame shear walls, even beam high ratio should not greater than 5 and a height of not less than 400mm. Midline column and beams, wall midline should not be greater than the column width of 1/4, in order to reduce the torsional effect of the seismic action on the column. Otherwise can be taken to strengthen the stirrup ratio in the column to make up. If the shear wall shear span than the big two. Even the beam cross-height ratio greater than 2.5, then the design pressure of the cut should not make a big 0.2. However, if the shear wall shear span ratio of less than two coupling beams span of less than 2.5, then the shear compression ratio is not greater than 0.15. The other hand, the bottom of the frame shear wall structure to enhance the design should not be less than 200mm and not less than storey 1/16, other parts should not be less than 160mm and not less than storey 1/20. Around the wall of the frame shear wall structure should be set to the beam or dark beam and the side column to form a border. Horizontal distribution of shear walls can from the shear effect, this design when building higher longer or frame structure reinforcement should be appropriately increased, especially in the sensitive parts of the beam position or temperature, stiffness change is best appropriately increased, then consideration should be given to the wall vertical reinforcement, because it is mainly from the bending effect, and take in some multi-storey shear wall structure reinforced reinforcement rate - like less constrained edge of the component or components reinforcement of the edge component.
References: [1] sad Hayashi, He Yaming. On the short shear wall high-rise building design [J]. Keyuan, 2008, (02).
高层框架剪力墙结构设计
吴继成
摘要:本文从框架剪力墙结构设计的基本概念入手,分析了框架剪力墙的构造设计内容,包括抗震墙、剪跨比等的设计,并给出混凝土结构中最常用的框架剪力墙结构设计的注意要点。
毕业设计外文资料翻译
学院:信息科学与工程学院
专业:软件工程
姓名:XXXXX
学号:XXXXXXXXX
外文出处:Think In Java(用外文写)
附件:1. 外文资料翻译译文;2. 外文原文。
附件1:外文资料翻译译文
网络编程
历史上的网络编程都倾向于困难、复杂,而且极易出错。
程序员必须掌握与网络有关的大量细节,有时甚至要对硬件有深刻的认识。
一般地,我们需要理解连网协议中不同的“层”(Layer)。
而且对于每个连网库,一般都包含了数量众多的函数,分别涉及信息块的连接、打包和拆包;这些块的来回运输;以及握手等等。
这是一项令人痛苦的工作。
但是,连网本身的概念并不是很难。
我们想获得位于其他地方某台机器上的信息,并把它们移到这儿;或者相反。
这与读写文件非常相似,只是文件存在于远程机器上,而且远程机器有权决定如何处理我们请求或者发送的数据。
Java最出色的一个地方就是它的“无痛苦连网”概念。
有关连网的基层细节已被尽可能地提取出去,并隐藏在JVM以及Java的本机安装系统里进行控制。
我们使用的编程模型是一个文件的模型;事实上,网络连接(一个“套接字”)已被封装到系统对象里,所以可象对其他数据流那样采用同样的方法调用。
除此以外,在我们处理另一个连网问题——同时控制多个网络连接——的时候,Java内建的多线程机制也是十分方便的。
本章将用一系列易懂的例子解释Java的连网支持。
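作为"像读写文件一样读写网络连接"这一思想的一个简化示意(并非原书中的示例;主机名与端口均为假设,运行时需要对方有服务程序在该端口监听),下面的 Java 代码把套接字的输入输出流当作普通数据流来使用:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SimpleClient {
    public static void main(String[] args) throws Exception {
        // 主机名与端口仅为示意;对方需要有一个在该端口监听的服务程序
        try (Socket socket = new Socket("example.com", 7);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            // 像写文件一样向网络连接写数据
            out.println("hello");
            // 像读文件一样从网络连接读数据
            System.out.println("服务器返回: " + in.readLine());
        }
    }
}
```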
15.1 机器的标识
当然,为了分辨来自别处的一台机器,以及为了保证自己连接的是希望的那台机器,必须有一种机制能独一无二地标识出网络内的每台机器。
早期网络只解决了如何在本地网络环境中为机器提供唯一的名字。
但Java面向的是整个因特网,这要求用一种机制对来自世界各地的机器进行标识。
为达到这个目的,我们采用了IP(互联网地址)的概念。
IP以两种形式存在着:(1) 大家最熟悉的DNS(域名服务)形式。
我自己的域名是。
所以假定我在自己的域内有一台名为Opus的计算机,它的域名就可以是。
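下面给出一个简短的 Java 示意(域名 example.com 仅为占位,并非原文作者的域名),演示如何由 DNS 形式的名字得到对应的 IP 地址,以及如何取得本机的标识:

```java
import java.net.InetAddress;

public class WhoAmIDemo {
    public static void main(String[] args) throws Exception {
        // 通过 DNS 名字查找 IP 地址;"example.com" 仅作占位示例
        InetAddress addr = InetAddress.getByName("example.com");
        System.out.println("名字: " + addr.getHostName());
        System.out.println("IP地址: " + addr.getHostAddress());

        // 也可以取得本机的标识
        InetAddress local = InetAddress.getLocalHost();
        System.out.println("本机: " + local);
    }
}
```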
Database Management Systems( 3th Edition ),Wiley ,2004, 5-12A introduction to Database Management SystemRaghu RamakrishnanA database (sometimes spelled data base) is also called an electronic database , referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval , modification, and deletion of data in conjunction with various data-processing operations .Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage , and each field typically contains information pertaining to one aspect or attribute of the entity described by the database . Using keywords and various sorting commands, users can rapidly search , rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregate of data.Complex data relationships and linkages may be found in all but the simplest databases .The system software package that handles the difficult tasks associated with creating ,accessing, and maintaining database records is called a database management system(DBMS).The programs in a DBMS package establish an interface between the database itself and the users of the database.. (These users may be applications programmers, managers and others with information needs, and various OS programs.)A DBMS can organize, process, and present selected data elements form the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren’t available in regular reports. These questions might initially be vague and/or poorly defined ,but people can “browse” through the database until they have the needed information. In short, the DBMS will “manage”the stored data items and assemble the needed items from the common database in response to the queries ofthose who aren’t programmers.A database management system (DBMS) is composed of three major parts:(1)a storage subsystem that stores and retrieves data in files;(2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add , delete, maintain, and update the data;(3)and an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems;Managers: who require more up-to-data information to make effective decision Customers: who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.Users: who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.Organizations : that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.The Database ModelA data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented(such as tree, tables, and so on ).The manipulative part of the model specifies the operation with which to add, delete, display, maintain, print, search, select, sort and update the data. 
Hierarchical ModelThe first database management systems used a hierarchical model-that is-they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used that is ,the record at the root of a tree will be accessed first, then records one level below the root ,and so on.The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you have known, an organization char often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have manyemployees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.Relational ModelA major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management called relational model ,which uses a table as its data structure.The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows called and columns called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person’s last name ,first name ,and street address.Structured query language(SQL)is a query language for manipulating data in a relational database .It is nonprocedural or declarative, in which the user need only specify an English-like description that specifies the operation and the described record or combination of records. A query optimizer translates the description into a procedure to perform the database manipulation.Network ModelThe network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network mode historically has had a performance advantage over other database models. Today , such performance characteristics are only important in high-volume ,high-speed transaction processing such as automatic teller machine networks or airline reservation system.Both hierarchical and network databases are application specific. If a new application is developed ,maintaining the consistency of databases in different applications can be very difficult. 
For example, suppose a new pension application is developed .The data are the same, but a new database must be created.Object ModelThe newest approach to database management uses an object model , in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks.The query language used for the object model is the same object-oriented programming language used to develop the database application .This can create problems because there is no simple , uniform query language such as SQL . The object model is relatively new, and only a few examples of object-oriented database exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model. Distributed DatabaseSimilarly , a distributed database is one in which different parts of the database reside on physically separated computers . One goal of distributed databases is the access of information without regard to where the data might be stored. Keeping in mind that once the users and their data are separated , the communication and networking concepts come into play .Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.A drawback to some distributed systems is that they are often based on what is called a mainframe-entire model , in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach . With databases under centralized control , many of the problems of data integrity that we mentioned earlier are solved . But today’s personal computers, departmental computers, and distributed processing require computers andtheir applications to communicate with each other on a more equal or peer-to-peer basis. In a database, the client/server model provides the framework for distributing databases.One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one anther. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network . When the resources are data in a database ,the client/server model provides the framework for distributing database.A file serve is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server. This is useful ,for example ,if the files are large and require fast access .In such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer.Advantages of the latter server include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage , however, is that individual read/write requests are being moved across the network and problems can arise when updating files. 
Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problems called record locking, which means that the first request makes others requests wait until the first request is satisfied . Other users may be able to read the record, but they will not be able to change it .A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer . If the application is designed with the client/server model in mind ,the query language part on the personal computer simple sends the query across the network to the database server and requests to be notified when the data are found.Examples of distributed database systems can be found in the engineering world. Sun’s Network Filing System(NFS),for example, is used in computer-aidedengineering applications to distribute data among the hard disks in a network of Sun workstation.Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used . Departmental computers within a large corporation ,for example, should have data reside locally , yet those data should be accessible by authorized corporate management when they want to consolidate departmental data . DBMS software will protect the security and integrity of the database , and the distributed database will appear to its users as no different from the non-distributed database .Database Management Systems( 3th Edition ),Wiley ,2004, 5-12数据库管理系统的介绍Raghu Ramakrishnan数据库(database,有时拼作data base)又称为电子数据库,是专门组织起来的一组数据或信息,其目的是为了便于计算机快速查询及检索。
南京理工大学紫金学院毕业设计(论文)外文资料翻译
系:机械系
专业:车辆工程专业
姓名:宋磊春
学号:070102234
外文出处:EDU_E_CAT_VBA_FF_V5R9(用外文写)
附件:1. 外文资料翻译译文;2. 外文原文。
附件1:外文资料翻译译文
CATIA V5 的自动化
CATIA V5 的自动化和脚本:
在NT和Unix上:脚本允许你用宏以非常简单的方式对CATIA进行编程。
CATIA 使用 MS-VBScript 的公共部分(V5.x 中,在 NT 和 UNIX 上均为 3.0 版),使得相同的宏可以在两个平台上运行。
在NT平台上:自动化(Automation)允许CATIA像Word/Excel或者Visual Basic程序那样与其他应用程序共享对象。
CATIA能使用Word/Excel的对象,就像Word/Excel能使用CATIA的对象一样。
在Unix 平台上:CATIA将来的版本将允许从Java分享它的对象。
这将提供在Unix 和NT 之间的一个完美兼容。
CATIA V5 自动化:介绍(仅限NT)
自动化允许几个进程之间进行通信:CATIA V5 在NT上通过COM接口与下列环境交互:Visual Basic 脚本(用于宏)、VBA(Visual Basic for Applications,用于Word/Excel)、Visual Basic。
COM(组件对象模型)是微软的标准,用于在几个应用程序之间共享对象。
Automation 是一种微软技术,它使COM对象能够在解释性环境中使用。
ActiveX 组件是微软的标准,用于在几个应用程序之间共享对象,即使在解释性环境中也可以使用。
OLE(对象链接与嵌入)是指一种方法,使数据可以链接到另一个支持OLE的应用程序的文档中,并且可以被就地编辑(在适当的位置编辑)。
VBScript、VBA和Visual Basic之间的差别:
Visual Basic(VB)是完整的版本。它能生成独立的程序,也能创建ActiveX组件和服务器。它可以被编译。
VB中提供了一个补充文件名为“在线丛书“(VB的5。
附录1 外文原文COLOR SYSTEM OVERVIEWIn the age of office automation and electronic imaging, office documents are being processed, transported, and displayed in a variety of ways. The scope of document processing is enormous; it encompasses page layout, document length, collation, simplex/duplex, color, image quality, finishing, and binding. If the office system is networked, then another dimension of network-related issues-protocol, file format, page description language, compression/decompression, job management, error handling, user interface, and device driver-has to be addressed. Digital color-imaging systems process electronic information from various sources; images may come from a local-area network, a remote-sensing device, different color workstations, or a local scanner. After processing, a document is usually compressed and transmitted to several places via a computer network for viewing, editing, or printing. Moreover, the trend in the industry is moving toward an open environment. This means that various devices such as scanners, computers, workstations, modems, and printers from multiple vendors are assembled into one system. Implementations should be based on public-domain technology rather than proprietary standards. This will allow vendors equal access to the market for system components and give users the widest choice in selecting components. It is a vastly large task to enable the communication of all system components regardless of differences in the operating system, file format, page description language, and information content. Ideally, the exchange should not cause information loss or alteration. A closer look at a document may reveal that it consists of different types of images, primarily text, graphs, and pictorial images. These all have different image characteristics and representations such as ASCII (American Standard Code for Information Interchange) for text, vector for graphs, and raster for pictorial images. Each type of image and its associated attributes like the font, font size, halftone, gray level, resolution, and color have to be dealt with differently. In such a complex environment, there is no doubt that many compatibility problems occur when an image is acquired, transmitted, displayed, and rendered. ?With the fast development of Internet technology, large volumes of data in the form of electronic documents from the Web. For the purposes of data integration and data exchange, more and moreexisting sources, such as relational databases, support public XML export, and increasing amount of public and private data is described in a semi-structured way. A number of issues need to be addressed when we integrate data from different sources, including heterogeneous and duplicate data, multiple divisions and partners, and changes.? Data heterogeneity results from the use of different information management systems to store data and each system has its own data structure and access methods. Relational database management systems benefit from the universal acceptance of Structured Query Language (SQL) as the primary means of getting answers whilst document and email repositories are generally accessed using text search engines with varying interfaces and capabilities. Because these systems were not designed with interoperability in mind, each must generally be accessed using source-specific applications or application programming interfaces (APIs).? 
Another difficulty in data integration is data duplication-different systems represent the same piece of data in different ways. For example, customers may be identified by name in one database, but by account number in a second repository, may identify the same customer by email address. Frequently a required piece of information is derived from multiple data points. Data integration is further complicated when customers do business with multiple divisions within a large company, or with other partners. Similarly, answering questions about the state of a company's supply chain requires access to vendor and distributor information sources. Doing business electronically across the firewall gives rise to security and data ownership issues. Finally, data integration has to deal with different types of changes; change in business requirements and strategies, in IT systems, mergers and acquisitions, and new product launches. This demands that a data integration solution be sufficiently flexible and adaptable.One possible solution for the data integration problems mentioned above is to provide an XML Web services break down the barriers between different computing platforms, development environments and communications networks, allowing organizations to work together electronically without the expense and delay of agreeing on semantics, schema, interfaces, and other application integration. XML provides the flexibility for handling data with differing structures. As XML is becoming the principal medium for data exchange over the Web and for information integration in general,increasing amounts of public and private data are described in XML. XML data is usually defined in a tree or graph based, self-describing object instance model (Boncz and Kersten, 1999). However, semi-structured data is incompatible with the flat structure of relational database tables, and so the growth of XML data requires new and complex query optimization techniques.Creating XML files with a text editor would be a lot easier if you didn't have to close all those HTML tags. First you have to add the XML declaration and the root opening and closing HTML tags. Next, you start adding element opening and closing tags one at a time. Of course, once you have the initial sequence completed you can just copy and paste to repeat the required elements. After doing this hundreds of times you'll be looking for a faster way to create XML files.Some XML editors will automatically add the closing tag after you have finished typing the opening tag but, you still have to type the brackets around the opening tag. I kept thinking this process should be easier. So, I came up with a solution that allows you to create XML files without using HTML tags.This console application will create an XML file based on user input. Just enter the file name, how many element fields you want, and the name of each field. Optionally, you can include a data type separated by a comma after the field name. You can just enter the field name because the data type is not required. The structure of the XML file that is created will be compatible with the .NET Dataset and can be easily added to a database.In addition to creating the XML file, an XSL file and HTML file are also created. The HTML file uses client side JavaScript to transform the XML file using the XSL file. This provides an easy way to view the new XML file by displaying it in a table layout.The download includes both the source code and the already compiled application. 
You can start using the executable right away or customize it to meet your needs. All you will need is the .NET Framework and a text editor, like Notepad, to build this application.Improving ASP Performance with Data CachingOne of the nicest features of is the ability to cache page content. This can be used to substantially reduce load on a website's database - which is an obvious attraction if the site uses Microsoft's Access to store data rather than SQL Server. Unfortunately there is no built in cachingsystem in classic ASP, but it is easy to build one by using the Application object to store data.When to use ASP Caching. Caching is most useful for data that changes - but not too often. For example an e-commerce store could display a list of popular products, or an information site could display a list of press releases.Don't forget that it is also possible to build functionality into the admin part of the site so that the cache would be flushed if new content is added to the database. That way the website administrator would not have to wait until the cache timed out in order for new content to appear on the website. Remember that data stored in Application variables is visible by all the users of the website。
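As a rough sketch of the XML-generation idea described above (this is not the downloadable application itself; the file name and field names below are made up for illustration), the following Java program writes a small XML file from a list of field names and values:

```java
import java.io.FileWriter;
import java.io.Writer;
import java.util.LinkedHashMap;
import java.util.Map;

public class XmlFileSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical field names and values; a real tool would read these from user input
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("Name", "Sample Customer");
        fields.put("Email", "customer@example.com");
        fields.put("Balance", "100.00");

        try (Writer w = new FileWriter("customers.xml")) {
            w.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
            w.write("<customers>\n  <customer>\n");
            for (Map.Entry<String, String> e : fields.entrySet()) {
                // Each field becomes one element; values should be escaped in real code
                w.write("    <" + e.getKey() + ">" + e.getValue() + "</" + e.getKey() + ">\n");
            }
            w.write("  </customer>\n</customers>\n");
        }
        System.out.println("Wrote customers.xml");
    }
}
```

A real tool would take the file name and field list from user input and escape the element values; the point here is only that the XML element structure mirrors the field list.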
本科毕业设计(论文)外文翻译基本规范
一、要求
1、与毕业论文分开单独成文。
2、两篇文献。
二、基本格式
1、文献应以英、美等国家公开发表的文献为主(Journals from English speaking countries)。
2、毕业论文翻译是相对独立的,其中应该包括题目、作者(可以不翻译)、译文的出处(杂志的名称)(5号宋体、写在文稿左上角)、关键词、摘要、前言、正文、总结等几个部分。
3、文献翻译的字体、字号、序号等应与毕业论文格式要求完全一致。
4、文中所有的图表、致谢及参考文献均可以略去,但在文献翻译的末页标注:图表、致谢及参考文献已略去(见原文)。
(空一行,字体同正文)5、原文中出现的专用名词及人名、地名、参考文献可不翻译,并同原文一样在正文中标明出处。
二、毕业论文(设计)外文翻译
(一)毕业论文(设计)外文翻译的内容要求
外文翻译内容必须与所选课题相关,外文原文不少于6000个印刷符号。
译文末尾要用外文注明外文原文出处。
原文出处:
期刊类文献书写方法:[序号]作者(不超过3人,多者用等或et al表示).题(篇)名[J].刊名(版本),出版年,卷次(期次):起止页次.
图书类文献书写方法:[序号]作者.书名[M].版本.出版地:出版者,出版年.起止页次.
论文集类文献书写方法:[序号]作者.篇名[A].编著者.论文集名[C].出版地:出版者,出版年.起止页次。
要求有外文原文复印件。
(二)毕业论文(设计)外文翻译的撰写与装订的格式规范
第一部分:封面
1.封面格式:见"毕业论文(设计)外文翻译封面"。
普通A4纸打印即可。
第二部分:外文翻译主题
1.标题
一级标题:三号字,宋体,顶格,加粗
二级标题:四号字,宋体,顶格,加粗
三级标题:小四号字,宋体,顶格,加粗
2.正文
小四号字,宋体。
第三部分:版面要求
论文开本大小:210mm×297mm(A4纸)
版芯要求:左边距:25mm,右边距:25mm,上边距:30mm,下边距:25mm,页眉边距:23mm,页脚边距:18mm
字符间距:标准
行距:1.25倍
页眉页脚:页眉的奇数页书写—浙江师范大学学士学位论文外文翻译。
华南理工大学广州学院本科生毕业设计(论文)翻译
外文原文名:Agency Cost under the Restriction of Free Cash Flow
中文译名:自由现金流量的限制下的代理成本
学院:管理学院
专业班级:会计学3班
学生姓名:陈洁玉
学生学号:200930191100
指导教师:余勍 讲师
填写日期:2015年5月11日
外文原文版出处:
译文成绩:
指导教师(导师组长)签名:
译文:
自由现金流量的限制下的代理成本
摘要
代理成本理论是资本结构理论的一个重要分支。
自由现金流对代理成本有显著的影响。将这两个领域相结合的研究,将有助于建立和扩展理论体系。在代理成本理论的基础上,本研究首先对自由现金流的特点及其统计方法进行了归类。此外,通过模型证明了将自由现金流用于投资所产生的代理成本的存在。随后将自由现金流作为约束条件引入代理成本理论,分析表明它会改变代理成本,进而影响代理成本与资本结构之间的关系,最终会影响最优资本结构点以保持均衡。具体地说,随着自由现金流的增加,相应的债务比例会降低。
关键词:资本结构,自由现金流,代理成本,非金钱利益
1、介绍
代理成本理论、金融契约理论、信号模型和新的啄食顺序理论是新资本结构理论的主要分支。
金融契约理论侧重于通过契约限制股东的行为,解决股东和债权人之间的冲突。信号模型和新的啄食顺序理论着重于解决投资者和管理者之间的冲突。
这两种类型的冲突是在商业组织中的主要冲突。
代理成本理论研究这两类冲突如何达到均衡以及资本结构如何形成,因此在一定程度上比前两类理论更为全面。
……Agency Cost under the Restriction of Free Cash FlowAbstractAgency cost theory is an important branch of capital structural theory. Free cash flow has significant impact on agency cost. The combination of research on these two fields would help to build and extend the theoretical system. Based on agency cost theory, the present study firstly categorized the characteristics of free cash flow as well as the statistical methodologies. Furthermore, the existence of investing free cash flow in agency cost was proved by a model. Then free cash flow was introduced into agency cost theory as restriction, the analysis shows that it will change agency cost, in turn, will have an impact on the relationship between agency cost and capital structure, finally, will influence the optimal capital structure point to maintain the equilibrium. Concretely, with the increasing free cash flow, correspondingly, debt proportion will decrease.Keywords:Capital Structure,Free Cash Flow,Agency Cost,Non-Pecuniary Benefit1. IntroductionAgency cost theory, financial contract theory, signaling model and new pecking order theory are the main branches of new capital structure theory. Financial con-tract theory focuses on restricting stockholders’ behavior by contract and solving the conflict between stockholders and creditors. Signaling model and new pecking order theory center on solving the conflict between investors and managers. These two types of conflict are the main conflict in business organizations. Agency cost theory considers how equilibrium is reached in both types of conflict and how capital structure is formed, which is more theory is more comprehensive than the previous two to some degree.……。
中英文翻译Selecting the Right Data Acquisition SystemEngineers often must monitor a handful of signals over extended periods of time, and then graph and analyze the resulting data. The need to monitor, record and analyze data arises in a wide range of applications, including the design-verification stage of product development, environmental chamber monitoring, component inspection, benchtop testing and process trouble-shooting.This application note describes the various methods and devices you can use to acquire, record and analyze data, from the simple pen-and-paper method to using today's sophisticated data acquisition systems. It discusses the advantages and disadvantages of each method and provides a list of questions that will guide you in selecting the approach that best suits your needs.IntroductionIn geotechnical engineering, we sometime encounter some difficulties such as monitoring instruments distributed in a large area, dangerous environment of working site that cause some difficulty for easy access. In this case, operators may adopt remote control, by which a large amount of measured data will be transmitted to a observation room where the data are to be collected, stored and processed.The automatic data acquisition control system is able to complete the tasks as regular automatic data monitoring, acquisition and store, featuring high automation, large data store capacity and reliable performance.The system is composed of acquisition control system and display system, with the following features:1. No. of Channels: 32 ( can be increased or decreased according to user's real needs.)2. Scanning duration: decided by user, fastest 32 points/second3. Store capacity: 20G( may be increased or decreased)4. Display: (a) Table of parameter (b) History tendency (c) Column graphics.5. Function: real time monitoring control, warning6. Overall dimension: 50cm×50cm×72cmData acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquistion terms are shown below:Data acquisition technology has taken giant leaps forward over the last 30 to 40 years. For example, 40 years ago, in a typical college lab, apparatus for tracking the temperature rise in a crucible of sodiumtungsten- bronze consisted of a thermocouple, a bridge, a lookup table, a pad of paper and a pencil.Today's college students are much more likely to use an automated process and analyze the data on a PC Today, numerous options are available for gathering data. The optimal choice depends on several factors, including the complexity of the task, the speed and accuracy you require, and the documentation you want. Data acquisition systems range from the simple to the complex, with a range of performance and functionality.Pencil and paperThe old pencil and paper approach is still viable for some situations, and it is inexpensive, readily available, quick and easy to get started. 
All you need to do is hook up a digital multimeter (DMM) and begin recording data by hand. Unfortunately, this method is error-prone, tends to be slow and requires extensive manual analysis. In addition, it works only for a single channel of data; while you can use multiple DMMs, the system will quickly becomes bulky and awkward. Accuracy is dependent on the transcriber's level of fastidiousness and you may need to scaleinput manually. For example, if the DMM is not set up to handle temperature sensors, manual scaling will be required. Taking these limitations into account, this is often an acceptablemethod when you need to perform a quick experiment.Strip chart recorderModern versions of the venerable strip chart recorder allow you to capture data from several inputs. They provide a permanent paper record of the data, and because this data is in graphical format, they allow you to easily spot trends. Once set up, most recorders have sufficient internal intelligence to run unattended — without the aid of either an operator or a computer. Drawbacks include a lack of flexibility and relatively low accuracy, which is often constrained to a few percentage points. You can typically perceive only small changes in the pen plots. While recorders perform well when monitoring a few channels over a long period of time, their value can be limited. For example, they are unable to turn another device on or off. Other concerns include pen and paper maintenance, paper supply and data storage, all of which translate into paper overuse and waste. Still, recorders are fairly easy to set up and operate, and offer a permanent record of the data for quick and simple analysis.Scanning digital multimeterSomebenchtop DMMs offer an optional scanning capability. A slot in the rear of the instrument accepts a scanner card that can multiplex between multiple inputs, with 8 to 10 channels of mux being fairly common. DMM accuracy and the functionality inherent in the instrument's front panel are retained. Flexibility is limited in that it is not possible to expand beyond the number of channels available in the expansion slot. An external PC usually handles data acquisition and analysis.PC plug-in cardsPC plug-in cards are single-board measurement systems that take advantage of the ISA or PCI-bus expansion slots in a PC. They often have reading rates as high as 100,000 readings per second. Counts of 8 to 16 channels are common, and acquired data is stored directly into the computer, where it can then be analyzed. Because the card is essentially part of the computer, it is easy to set up tests. PC cards also arerelatively inexpensive, in part, because they rely on the host PC to provide power, the mechanical enclosure and the user interface.Data acquisition optionsIn the downside, PC plug-in cards often have only 12 bits of resolution, so you can't perceive small variations with the input signal. Furthermore, the electrical environment inside a PC tends to be noisy, with high-speed clocks and bus noise radiated throughout. Often, this electrical interference limits the accuracy of the PC plug-in card to that of a handheld DMM .These cards also measure a fairly limited range of dc voltage. To measure other input signals, such as ac voltage, temperature or resistance, you may need some sort of external signal conditioning. Additional concerns include problematic calibration and overall system cost, especially if you need to purchase additional signal conditioning accessories or a PC to accommodate the cards. 
Taking that into consideration, PC plug-in cards offer an attractive approach to data acquisition if your requirements fall within the capabilities and limitations of the card.Data loggersData loggers are typically stand-alone instruments that, once they are setup, can measure, record and display data without operator or computer intervention. They can handle multiple inputs, in some instances up to 120 channels. Accuracy rivals that found in standalone bench DMMs, with performance in the 22-bit, 0.004-percent accuracy range. Some data loggers have the ability to scale measurements, check results against user-defined limits, and output signals for control.One advantage of using data loggers is their built-in signal conditioning. Most are able to directly measure a number of different inputs without the need for additional signal conditioning accessories. One channel could be monitoring a thermocouple, another a resistive temperature device (RTD) and still another could be looking at voltage.Thermocouple reference compensation for accurate temperature measurement is typically built into the multiplexer cards. A data logger's built-in intelligence helpsyou set up the test routine and specify the parameters of each channel. Once you have completed the setup, data loggers can run as standalone devices, much like a recorder. They store data locally in internal memory, which can accommodate 50,000 readings or more.PC connectivity makes it easy to transfer data to your computer for in-depth analysis. Most data loggers are designed for flexibility and simple configuration and operation, and many provide the option of remote site operation via battery packs or other methods. Depending on the A/D converter technique used, certain data loggers take readings at a relatively slow rate, especially compared to many PC plug-in cards. Still, reading speeds of 250 readings/second are not uncommon. Keep in mind that many of the phenomena being monitored are physical in nature — such as temperature, pressure and flow — and change at a fairly slow rate. Additionally, because of a data logger's superior measurement accuracy, multiple readings and averaging are not necessary, as they often are in PC plug-in solutions.Data acquisition front endsData acquisition front ends are often modular and are typically connected to a PC or controller. They are used in automated test applications for gathering data and for controlling and routing signals in other parts of the test setup. Front end performance can be very high, with speed and accuracy rivaling the best standalone instruments. Data acquisition front ends are implemented in a number of formats, including VXI versions, such as the Agilent E1419A multifunction measurement and control VXI module, and proprietary card cages.. Although front-end cost has been decreasing, these systems can be fairly expensive, and unless you require the high performance they provide, you may find their price to be prohibitive. On the plus side, they do offer considerable flexibility and measurement capability.Data Logger ApplicationsA good, low-cost data logger with moderate channel count (20 - 60 channels) and a relatively slow scan rate is more than sufficient for many of the applications engineers commonly face. 
Some key applications include:• Product characterization• Thermal profiling of electronic products• Environmental testing; environmental monitoring• Component characteriza tion• Battery testing• Building and computer room monitoring• Process monitoring, evaluation and troubleshooting No single data acquisition system works for all applications. Answering the following questions may help you decide which will best meet your needs:1. Does the system match my application?What is the measurement resolution, accuracy and noise performance? How fast does it scan? What transducers and measurement functions are supported? Is it upgradeable or expandable to meet future needs? How portable is it? Can it operate as a standalone instrument?2. How much does it cost?Is software included, or is it extra? Does it require signal conditioning add-ons? What is the warranty period? How easy and inexpensive is it to calibrate?3. How easy is it to use?Can the specifications be understood? What is the user interface like? How difficult is it to reconfigure for new applications? Can data be transferred easily to new applications? Which application packages are supported?ConclusionData acquisition can range from pencil, paper and a measuring device, to a highly sophisticated system of hardware instrumentation and software analysis tools. The first step for users contemplating the purchase of a data acquisition device or system is to determine the tasks at hand and the desired output, and then select the type and scope of equipment that meets their criteria. All of the sophisticated equipment and analysis tools that are available are designed to help users understand the phenomena they are monitoring. The tools are merely a means to an end.正确选择数据采集系统工程师经常要对很长时间内的很多信号进行监测、画图和分析产生的数据。
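下面是一个示意性的记录循环(通道数、采样间隔均为假设,readChannel 只是模拟读数的占位函数;真实系统应调用采集卡或仪器驱动的 API),用来说明数据记录器"定时扫描多通道、打时间戳、追加存储"的基本工作方式:

```java
import java.io.FileWriter;
import java.io.PrintWriter;
import java.time.LocalDateTime;

public class DataLoggerSketch {
    // 假设的读数函数:真实系统中应调用仪器驱动或采集卡的 API
    static double readChannel(int channel) {
        return 20.0 + channel + Math.random(); // 模拟一个缓慢变化的物理量
    }

    public static void main(String[] args) throws Exception {
        int channels = 4;          // 通道数(假设)
        long intervalMs = 1000;    // 扫描间隔(假设)

        try (PrintWriter log = new PrintWriter(new FileWriter("readings.csv", true))) {
            for (int scan = 0; scan < 10; scan++) {          // 演示只扫描10轮
                StringBuilder line = new StringBuilder(LocalDateTime.now().toString());
                for (int ch = 0; ch < channels; ch++) {
                    line.append(',').append(String.format("%.3f", readChannel(ch)));
                }
                log.println(line);   // 每轮扫描写一行带时间戳的记录
                log.flush();
                Thread.sleep(intervalMs);
            }
        }
    }
}
```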
本科毕业设计(论文)外文翻译译文
学生姓名:
院(系):油气资源学院
专业班级:物探0502
指导教师:
完成日期:年月日

地震驱动评价与发展:以玻利维亚冲积盆地的研究为例
起止页码:1099—1108
出版日期:NOVEMBER 2005
出版单位:THE LEADING EDGE
作者及单位:Pan American Energy, Buenos Aires, Argentina;J.P. BLANGY, BP Exploration, Houston, USA;J.C. CORDOVA and E. MARTINEZ, Chaco S.A., Santa Cruz, Bolivia
通过整合多种地球物理地质技术,在玻利维亚冲积盆地,我们可以减少许多与白垩纪储集层勘探有关的地质技术风险。
通过对这些远景区进行成功钻探我们可以验证我们的解释。
这些方法包括盆地模拟、联井及地震叠前同时反演、岩石性质及地震属性解释、AVO/AVA、水平地震同相轴以及光谱分解。
联合解释能够对构造和沉积模式作出微小校正。
迄今为止,在新区有七口井已经进行了成功钻探。
基质和区域地质。
Tarija/Chaco盆地的subandean褶皱冲断带山麓的中部和南部(部分延伸到玻利维亚的Boomerange地区)经历了密集而成功的勘探开发。
许多深大的泥盆纪气田已经被发现,目前正在生产。
另外,在山麓发现的规模较小、埋藏较浅的天然气和凝析气田,如果能够较快投产且成本较低,也可以与大油田进行价格竞争。最近发现的气田就是这种情况。接下来,我们用Aguja这个化名来讲述这些油田中的一个成功实例。
图1 Aguja油田位于玻利维亚中部Chaco盆地的西北角。
基底构造图显示了Isarzama背斜的相对位置。
地层柱状图显示了主要的储集层和源岩。
该油田位于Tarija冲积盆地附近的背斜基底上,该背斜将油田和Beni盆地分开(图1)。圈闭类型是上盘背斜,它发育于连续冲断层之上。Aguja有两个主要构造:Aguja中部和Aguja Norte,通过一条重要的压扭断层与较早开发的"Sur"油田分开。Yantata Centro构造是一个依附于低角度逆冲断层的三面闭合构造,并伴随有小的起伏。
编号:毕业设计(论文)外文翻译(原文)院(系):桂林电子科技大学专业:电子信息工程学生姓名: xx学号: xxxxxxxxxxxxx 指导教师单位:桂林电子科技大学姓名: xxxx职称: xx2014年x月xx日Timing on and off power supplyusesThe switching power supply products are widely used in industrial automation and control, military equipment, scientific equipment, LED lighting, industrial equipment,communications equipment,electrical equipment,instrumentation, medical equipment, semiconductor cooling and heating, air purifiers, electronic refrigerator, LCD monitor, LED lighting, communications equipment, audio-visual products, security, computer chassis, digital products and equipment and other fields.IntroductionWith the rapid development of power electronics technology, power electronics equipment and people's work, the relationship of life become increasingly close, and electronic equipment without reliable power, into the 1980s, computer power and the full realization of the switching power supply, the first to complete the computer Power new generation to enter the switching power supply in the 1990s have entered into a variety of electronic, electrical devices, program-controlled switchboards, communications, electronic testing equipment power control equipment, power supply, etc. have been widely used in switching power supply, but also to promote the rapid development of the switching power supply technology .Switching power supply is the use of modern power electronics technology to control the ratio of the switching transistor to turn on and off to maintain a stable output voltage power supply, switching power supply is generally controlled by pulse width modulation (PWM) ICs and switching devices (MOSFET, BJT) composition. Switching power supply and linear power compared to both the cost and growth with the increase of output power, but the two different growth rates. A power point, linear power supply costs, but higher than the switching power supply. With the development of power electronics technology and innovation, making the switching power supply technology to continue to innovate, the turning points of this cost is increasingly move to the low output power side, the switching power supply provides a broad space for development.The direction of its development is the high-frequency switching power supply, high frequency switching power supply miniaturization, and switching power supply into a wider range of application areas, especially in high-tech fields, and promote the miniaturization of high-tech products, light of. In addition, the development and application of the switching power supply in terms of energy conservation, resource conservation and environmental protection are of great significance.classificationModern switching power supply, there are two: one is the DC switching power supply; the other is the AC switching power supply. Introduces only DC switching power supply and its function is poor power quality of the original eco-power (coarse) - such as mains power or battery power, converted to meet the equipment requirements of high-quality DC voltage (Varitronix) . The core of the DC switching power supply DC / DC converter. DC switching power supply classification is dependent on the classification of DC / DC converter. 
In other words, the classification of the classification of the DC switching power supply and DC/DC converter is the classification of essentially the same, the DC / DC converter is basically a classification of the DC switching power supply.DC /DC converter between the input and output electrical isolation can be divided into two categories: one is isolated called isolated DC/DC converter; the other is not isolated as non-isolated DC / DC converter.Isolated DC / DC converter can also be classified by the number of active power devices. The single tube of DC / DC converter Forward (Forward), Feedback (Feedback) two. The double-barreled double-barreled DC/ DC converter Forward (Double Transistor Forward Converter), twin-tube feedback (Double Transistor Feedback Converter), Push-Pull (Push the Pull Converter) and half-bridge (Half-Bridge Converter) four. Four DC / DC converter is the full-bridge DC / DC converter (Full-Bridge Converter).Non-isolated DC / DC converter, according to the number of active power devices can be divided into single-tube, double pipe, and four three categories. Single tube to a total of six of the DC / DC converter, step-down (Buck) DC / DC converter, step-up (Boost) DC / DC converters, DC / DC converter, boost buck (Buck Boost) device of Cuk the DC / DC converter, the Zeta DC / DC converter and SEPIC, the DC / DC converter. DC / DC converters, the Buck and Boost type DC / DC converter is the basic buck-boost of Cuk, Zeta, SEPIC, type DC / DC converter is derived from a single tube in this six. The twin-tube cascaded double-barreled boost (buck-boost) DC / DC converter DC / DC converter. Four DC / DC converter is used, the full-bridge DC / DC converter (Full-Bridge Converter).Isolated DC / DC converter input and output electrical isolation is usually transformer to achieve the function of the transformer has a transformer, so conducive to the expansion of the converter output range of applications, but also easy to achieve different voltage output , or a variety of the same voltage output.Power switch voltage and current rating, the converter's output power is usually proportional to the number of switch. The more the number of switch, the greater the output power of the DC / DC converter, four type than the two output power is twice as large,single-tube output power of only four 1/4.A combination of non-isolated converters and isolated converters can be a single converter does not have their own characteristics. Energy transmission points, one-way transmission and two-way transmission of two DC / DC converter. DC / DC converter with bi-directional transmission function, either side of the transmission power from the power of lateral load power from the load-lateral side of the transmission power.DC / DC converter can be divided into self-excited and separately controlled. With the positive feedback signal converter to switch to self-sustaining periodic switching converter, called self-excited converter, such as the the Luo Yeer (Royer,) converter is a typical push-pull self-oscillating converter. 
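As a quick illustration of the basic non-isolated topologies listed above, the following sketch (not part of the original text) computes the ideal, lossless output voltage in continuous conduction mode from the duty cycle d = Ton/Ts. Losses, component values and dynamics are ignored, and the 12 V input and 0.4 duty cycle are arbitrary assumed figures.

public class DcDcIdealRatios {
    // Ideal continuous-conduction-mode conversion ratios; d is the duty cycle Ton/Ts, 0 < d < 1.
    static double buck(double vin, double d)      { return vin * d; }              // step-down
    static double boost(double vin, double d)     { return vin / (1.0 - d); }      // step-up
    static double buckBoost(double vin, double d) { return -vin * d / (1.0 - d); } // step-up/down, inverted polarity

    public static void main(String[] args) {
        double vin = 12.0, d = 0.4;                                   // assumed example values
        System.out.printf("Buck:       %.2f V%n", buck(vin, d));      // 4.80 V
        System.out.printf("Boost:      %.2f V%n", boost(vin, d));     // 20.00 V
        System.out.printf("Buck-Boost: %.2f V%n", buckBoost(vin, d)); // -8.00 V
    }
}

The same duty-cycle variable is what the control loop of a real converter adjusts; the derived topologies (Cuk, Zeta, SEPIC) follow related but slightly different ratios.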
Controlled DC / DC converter switching device control signal is generated by specialized external control circuit.the switching power supply.People in the field of switching power supply technology side of the development of power electronic devices, while the development of the switching inverter technology, the two promote each other to promote the switching power supply annual growth rate of more than two digits toward the light, small, thin, low-noise, high reliability, the direction of development of anti-jamming. Switching power supply can be divided into AC / DC and DC / DC two categories, AC / AC DC / AC, such as inverters, DC / DC converter is now modular design technology and production processes at home and abroad have already matured and standardization, and has been recognized by the user, but AC / DC modular, its own characteristics make the modular process, encounter more complex technology and manufacturing process. Hereinafter to illustrate the structure and characteristics of the two types of switching power supply.Self-excited: no external signal source can be self-oscillation, completely self-excited to see it as feedback oscillation circuit of a transformer.Separate excitation: entirely dependent on external sustain oscillations, excited used widely in practical applications. According to the excitation signal structure classification; can be divided into pulse-width-modulated and pulse amplitude modulated two pulse width modulated control the width of the signal is frequency, pulse amplitude modulation control signal amplitude between the same effect are the oscillation frequency to maintain within a certain range to achieve the effect of voltage stability. The winding of the transformer can generally be divided into three types, one group is involved in the oscillation of the primary winding, a group of sustained oscillations in the feedback winding, there is a group of load winding. Such as Shanghai is used in household appliances art technological production of switching power supply, 220V AC bridge rectifier, changing to about 300V DC filter added tothe collector of the switch into the transformer for high frequency oscillation, the feedback winding feedback to the base to maintain the circuit oscillating load winding induction signal, the DC voltage by the rectifier, filter, regulator to provide power to the load. Load winding to provide power at the same time, take up the ability to voltage stability, the principle is the voltage output circuit connected to a voltage sampling device to monitor the output voltage changes, and timely feedback to the oscillator circuit to adjust the oscillation frequency, so as to achieve stable voltage purposes, in order to avoid the interference of the circuit, the feedback voltage back to the oscillator circuit with optocoupler isolation.technology developmentsThe high-frequency switching power supply is the direction of its development, high-frequency switching power supply miniaturization, and switching power supply into the broader field of application, especially in high-tech fields, and promote the development and advancement of the switching power supply, an annual more than two-digit growth rate toward the light, small, thin, low noise, high reliability, the direction of the anti-jamming. 
Switching power supply can be divided into AC / DC and DC / DC two categories, the DC / DC converter is now modular design technology and production processes at home and abroad have already matured and standardized, and has been recognized by the user, but modular AC / DC, because of its own characteristics makes the modular process, encounter more complex technology and manufacturing process. In addition, the development and application of the switching power supply in terms of energy conservation, resource conservation and environmental protection are of great significance.The switching power supply applications in power electronic devices as diodes, IGBT and MOSFET.SCR switching power supply input rectifier circuit and soft start circuit, a small amount of applications, the GTR drive difficult, low switching frequency, gradually replace the IGBT and MOSFET.Direction of development of the switching power supply is a high-frequency, high reliability, low power, low noise, jamming and modular. Small, thin, and the key technology is the high frequency switching power supply light, so foreign major switching power supply manufacturers have committed to synchronize the development of new intelligent components, in particular, is to improve the secondary rectifier loss, and the power of iron Oxygen materials to increase scientific and technological innovation in order to improve the magnetic properties of high frequency and large magnetic flux density (Bs), and capacitor miniaturization is a key technology. SMT technology allows the switching power supply has made considerable progress, the arrangement of the components in the circuit board on bothsides, to ensure that the light of the switching power supply, a small, thin. High-frequency switching power supply is bound to the traditional PWM switching technology innovation, realization of ZVS, ZCS soft-switching technology has become the mainstream technology of the switching power supply, and a substantial increase in the efficiency of the switching power supply. Indicators for high reliability, switching power supply manufacturers in the United States by reducing the operating current, reducing the junction temperature and other measures to reduce the stress of the device, greatly improve the reliability of products.Modularity is the overall trend of switching power supply, distributed power systems can be composed of modular power supply, can be designed to N +1 redundant power system, and the parallel capacity expansion. For this shortcoming of the switching power supply running noise, separate the pursuit of high frequency noise will also increase, while the use of part of the resonant converter circuit technology to achieve high frequency, in theory, but also reduce noise, but some The practical application of the resonant converter technology, there are still technical problems, it is still a lot of work in this field, so that the technology to be practical.Power electronics technology innovation, switching power supply industry has broad prospects for development. To accelerate the pace of development of the switching power supply industry in China, it must take the road of technological innovation, out of joint production and research development path with Chinese characteristics and contribute to the rapid development of China's national economy.Developments and trends of the switching power supply1955 U.S. 
Royer (Roger) invented the self-oscillating push-pull transistor single-transformer DC-DC converter is the beginning of the high-frequency conversion control circuit 1957 check race Jen, Sen, invented a self-oscillating push-pull dual transformers, 1964, U.S. scientists canceled frequency transformer in series the idea of switching power supply, the power supply to the size and weight of the decline in a fundamental way. 1969 increased due to the pressure of the high-power silicon transistor, diode reverse recovery time shortened and other components to improve, and finally made a 25-kHz switching power supply.At present, the switching power supply to the small, lightweight and high efficiency characteristics are widely used in a variety of computer-oriented terminal equipment, communications equipment, etc. Almost all electronic equipment is indispensable for a rapid development of today's electronic information industry power mode. Bipolar transistor made of 100kHz, 500kHz power MOS-FET made, though already the practical switching power supply is currently available on the market, but its frequency to be further improved. Toimprove the switching frequency, it is necessary to reduce the switching losses, and to reduce the switching losses, the need for high-speed switch components. However, the switching speed will be affected by the distribution of the charge stored in the inductance and capacitance, or diode circuit to produce a surge or noise. This will not only affect the surrounding electronic equipment, but also greatly reduce the reliability of the power supply itself. Which, in order to prevent the switching Kai - closed the voltage surge, RC or LC buffers can be used, and the current surge can be caused by the diode stored charge of amorphous and other core made of magnetic buffer . However, the high frequency more than 1MHz, the resonant circuit to make the switch on the voltage or current through the switch was a sine wave, which can reduce switching losses, but also to control the occurrence of surges. This switch is called the resonant switch. Of this switching power supply is active, you can, in theory, because in this way do not need to greatly improve the switching speed of the switching losses reduced to zero, and the noise is expected to become one of the high-frequency switching power supply The main ways. At present, many countries in the world are committed to several trillion Hz converter utility.the principle of IntroductionThe switching power supply of the process is quite easy to understand, linear power supplies, power transistors operating in the linear mode and linear power, the PWM switching power supply to the power transistor turns on and off state, in both states, on the power transistor V - security product is very small (conduction, low voltage, large current; shutdown, voltage, current) V oltammetric product / power device is power semiconductor devices on the loss.Compared with the linear power supply, the PWM switching power supply more efficient process is achieved by "chopping", that is cut into the amplitude of the input DC voltage equal to the input voltage amplitude of the pulse voltage. The pulse duty cycle is adjusted by the switching power supply controller. Once the input voltage is cut into the AC square wave, its amplitude through the transformer to raise or lower. Number of groups of output voltage can be increased by increasing the number of primary and secondary windings of the transformer. 
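To make the "chopping" description above concrete, here is a hedged sketch of the ideal output of a forward-type isolated converter: the duty cycle sets the pulse width and the transformer turns ratio Ns/Np scales the amplitude. Diode drops, losses and output-filter dynamics are ignored, and the 300 V bus, 35% duty cycle and 20:1 turns ratio are illustrative assumptions only, not values from the text.

public class ForwardConverterEstimate {
    // Ideal forward-type output: rectified bus voltage x duty cycle x turns ratio.
    static double outputVolts(double vBus, double duty, double nSecondary, double nPrimary) {
        return vBus * duty * (nSecondary / nPrimary);
    }

    public static void main(String[] args) {
        // Assumed figures: 300 V rectified bus, 35% duty cycle, 20:1 step-down transformer.
        System.out.printf("Vout ~ %.2f V%n", outputVolts(300.0, 0.35, 1.0, 20.0)); // ~5.25 V
    }
}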
After the last AC waveform after the rectifier filter the DC output voltage.The main purpose of the controller is to maintain the stability of the output voltage, the course of their work is very similar to the linear form of the controller. That is the function blocks of the controller, the voltage reference and error amplifier can be designed the same as the linear regulator. Their difference lies in the error amplifier output (error voltage) in the drive before the power tube to go through a voltage / pulse-width conversion unit.Switching power supply There are two main ways of working: Forward transformand boost transformation. Although they are all part of the layout difference is small, but the course of their work vary greatly, have advantages in specific applications.the circuit schematicThe so-called switching power supply, as the name implies, is a door, a door power through a closed power to stop by, then what is the door, the switching power supply using SCR, some switch, these two component performance is similar, are relying on the base switch control pole (SCR), coupled with the pulse signal to complete the on and off, the pulse signal is half attentive to control the pole voltage increases, the switch or transistor conduction, the filter output voltage of 300V, 220V rectifier conduction, transmitted through the switching transformer secondary through the transformer to the voltage increase or decrease for each circuit work. Oscillation pulse of negative semi-attentive to the power regulator, base, or SCR control voltage lower than the original set voltage power regulator cut-off, 300V power is off, switch the transformer secondary no voltage, then each circuit The required operating voltage, depends on this secondary road rectifier filter capacitor discharge to maintain. Repeat the process until the next pulse cycle is a half weeks when the signal arrival. This switch transformer is called the high-frequency transformer, because the operating frequency is higher than the 50HZ low frequency. Then promote the pulse of the switch or SCR, which requires the oscillator circuit, we know, the transistor has a characteristic, is the base-emitter voltage is 0.65-0.7V is the zoom state, 0.7V These are the saturated hydraulic conductivity state-0.1V-0.3V in the oscillatory state, then the operating point after a good tune, to rely on the deep negative feedback to generate a negative pressure, so that the oscillating tube onset, the frequency of the oscillating tube capacitor charging and discharging of the length of time from the base to determine the oscillation frequency of the output pulse amplitude, and vice versa on the small, which determines the size of the output voltage of the power regulator. 
Transformer secondary output voltage regulator, usually switching transformer, single around a set of coils, the voltage at its upper end, as the reference voltage after the rectifier filter, then through the optocoupler, this benchmark voltage return to the base of the oscillating tube pole to adjust the level of the oscillation frequency, if the transformer secondary voltage is increased, the sampling coil output voltage increases, the positive feedback voltage obtained through the optocoupler is also increased, this voltage is applied oscillating tube base, so that oscillation frequency is reduced, played a stable secondary output voltage stability, too small do not have to go into detail, nor it is necessary to understand the fine, such a high-power voltage transformer by switching transmission, separated and after the class returned by sampling the voltage from the opto-coupler pass separated after class, so before the mains voltage, and after the classseparation, which is called cold plate, it is safe, transformers before power is independent, which is called switching power supply.the DC / DC conversionDC / DC converter is a fixed DC voltage transformation into a variable DC voltage, also known as the DC chopper. There are two ways of working chopper, one Ts constant pulse width modulation mode, change the ton (General), the second is the frequency modulation, the same ton to change the Ts, (easy to produce interference). Circuit by the following categories:Buck circuit - the step-down chopper, the average output voltage U0 is less than the input voltage Ui, the same polarity.Boost Circuit - step-up chopper, the average output voltage switching power supply schematic U0 is greater than the input voltage Ui, the same polarity.Buck-Boost circuit - buck or boost chopper, the output average voltage U0 is greater than or less than the input voltage Ui, the opposite polarity, the inductance transmission.Cuk circuit - a buck or boost chopper, the output average voltage U0 is greater than or less than the input voltage Ui, the opposite polarity, capacitance transmission.The above-mentioned non-isolated circuit, the isolation circuit forward circuits, feedback circuit, the half-bridge circuit, the full bridge circuit, push-pull circuit. Today's soft-switching technology makes a qualitative leap in the DC / DC the U.S. VICOR company design and manufacture a variety of ECI soft-switching DC / DC converter, the maximum output power 300W, 600W, 800W, etc., the corresponding power density (6.2 , 10,17) W/cm3 efficiency (80-90)%. A the Japanese Nemic Lambda latest using soft-switching technology, high frequency switching power supply module RM Series, its switching frequency (200 to 300) kHz, power density has reached 27W/cm3 with synchronous rectifier (MOSFETs instead of Schottky diodes ), so that the whole circuit efficiency by up to 90%.AC / DC conversionAC / DC conversion will transform AC to DC, the power flow can be bi-directional power flow by the power flow to load known as the "rectification", referred to as "active inverter power flow returned by the load power. AC / DC converter input 50/60Hz AC due must be rectified, filtered, so the volume is relatively large filter capacitor is essential, while experiencing safety standards (such as UL, CCEE, etc.) 
and EMC Directive restrictions (such as IEC, FCC, CSA) in the AC input side must be added to the EMC filter and use meets the safety standards of the components, thus limiting the miniaturization of the volume of AC / DC power, In addition, due to internal frequency, high voltage, current switching, making the problem difficult to solve EMC also high demands on the internal high-density mountingcircuit design, for the same reason, the high voltage, high current switch makes power supply loss increases, limiting the AC / DC converter modular process, and therefore must be used to power system optimal design method to make it work efficiency to reach a certain level of satisfaction.AC / DC conversion circuit wiring can be divided into half-wave circuit, full-wave circuit. Press the power phase can be divided into single-phase three-phase, multiphase. Can be divided into a quadrant, two quadrant, three quadrants, four-quadrant circuit work quadrant.he selection of the switching power supplySwitching power supply input on the anti-jamming performance, compared to its circuit structure characteristics (multi-level series), the input disturbances, such as surge voltage is difficult to pass on the stability of the output voltage of the technical indicators and linear power have greater advantages, the output voltage stability up to (0.5)%. Switching power supply module as an integrated power electronic devices should be selected。
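The regulation loop described in the schematic section above (sample the output, feed the error back, and lengthen or shorten the on-time within a fixed switching period) can be summarized in a toy simulation. This is a minimal sketch under idealized assumptions: the "plant" is the lossless relation Vout = Vin * Ton / Ts and the gain value is arbitrary; it is not a description of any real controller IC or optocoupler feedback circuit.

public class PwmRegulationSketch {
    public static void main(String[] args) {
        double vin = 300.0;                      // rectified input bus (assumed)
        double ts = 1.0 / 100_000;               // fixed 100 kHz switching period
        double ton = 0.10 * ts;                  // initial on-time
        double setpoint = 12.0;                  // desired output voltage
        double kp = 2e-8;                        // illustrative proportional gain

        for (int i = 0; i < 50; i++) {
            double vout = vin * ton / ts;        // idealized "plant": lossless buck relation
            double error = setpoint - vout;
            ton += kp * error;                   // lengthen or shorten the on-time
            ton = Math.max(0.0, Math.min(ton, 0.9 * ts)); // clamp duty cycle to 0..90 %
        }
        System.out.printf("duty = %.3f, Vout ~ %.2f V%n", ton / ts, vin * ton / ts);
    }
}

With these assumed numbers the loop settles near the 12 V setpoint at a duty cycle of about 0.04, which is the behaviour the error-amplifier/pulse-width stage described earlier is meant to produce.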
本科毕业设计外文文献及译文文献、资料题目:Transit Route Network Design Problem:Review文献、资料来源:网络文献、资料发表(出版)日期:2007.1院(部):xxx专业:xxx班级:xxx姓名:xxx学号:xxx指导教师:xxx翻译日期:xxx外文文献:Transit Route Network Design Problem:Review Abstract:Efficient design of public transportation networks has attracted much interest in the transport literature and practice,with manymodels and approaches for formulating the associated transit route network design problem _TRNDP_having been developed.The presentpaper systematically presents and reviews research on the TRNDP based on the three distinctive parts of the TRNDP setup:designobjectives,operating environment parameters and solution approach.IntroductionPublic transportation is largely considered as a viable option for sustainable transportation in urban areas,offering advantages such as mobility enhancement,traffic congestion and air pollution reduction,and energy conservation while still preserving social equity considerations. Nevertheless,in the past decades,factors such as socioeconomic growth,the need for personalized mobility,the increase in private vehicle ownership and urban sprawl have led to a shift towards private vehicles and a decrease in public transportation’s share in daily commuting (Sinha2003;TRB2001;EMTA2004;ECMT2002;Pucher et al.2007).Efforts for encouraging public transportation use focuses on improving provided services such as line capacity,service frequency,coverage,reliability,comfort and service quality which are among the most important parameters for an efficient public transportation system(Sinha2003;Vuchic2004.) In this context,planning and designing a cost and service efficientpublic transportation network is necessary for improving its competitiveness and market share. The problem that formally describes the design of such a public transportation network is referred to as the transit route network design problem(TRNDP);it focuses on the optimization of a number of objectives representing the efficiency of public transportation networks under operational and resource constraints such as the number and length of public transportation routes, allowable service frequencies,and number of available buses(Chakroborty2003;Fan and Machemehl2006a,b).The practical importance of designing public transportation networks has attractedconsiderable interest in the research community which has developed a variety of approaches and modelsfor the TRNDP including different levels of design detail and complexity as well as interesting algorithmic innovations.In thispaper we offer a structured review of approaches for the TRNDP;researchers will obtain a basis for evaluating existing research and identifying future research paths for further improving TRNDP models.Moreover,practitioners will acquire a detailed presentation of both the process and potential tools for automating the design of public transportation networks,their characteristics,capabilities,and strengths.Design of Public Transportation NetworksNetwork design is an important part of the public transportation operational planning process_Ceder2001_.It includes the design of route layouts and the determination of associated operational characteristics such as frequencies,rolling stock types,and so on As noted by Ceder and Wilson_1986_,network design elements are part of the overall operational planning process for public transportation networks;the process includes five steps:_1_design of routes;_2_ setting frequencies;_3_developing timetables;_4_scheduling buses;and_5_scheduling drivers. 
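As a concrete illustration of the frequency-setting step (step 2 in the list above), the sketch below applies a common rule of thumb: the required frequency follows from the peak-hour passenger volume at the maximum-load section, the desired load factor and the vehicle capacity, subject to a policy minimum. All figures are assumptions for illustration, not data from the paper.

public class FrequencySetting {
    // Buses per hour from peak demand at the maximum-load section, desired load
    // factor and seats per bus, never below the policy minimum frequency.
    static double busesPerHour(double peakPassengers, double loadFactor,
                               double seatsPerBus, double policyMinimum) {
        double required = peakPassengers / (loadFactor * seatsPerBus);
        return Math.max(required, policyMinimum);
    }

    public static void main(String[] args) {
        // Assumed: 900 passengers/h, load factor 1.25, 60-seat buses, minimum 4 buses/h.
        System.out.printf("%.1f buses per hour%n", busesPerHour(900, 1.25, 60, 4)); // 12.0
    }
}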
Route layout design is guided by passenger flows:routes are established to provide direct or indirect connection between locations and areas that generate and attract demand for transit travel, such as residential and activity related centers_Levinson1992_.For example,passenger flows between a central business district_CBD_and suburbs dictate the design of radial routes while demand for trips between different neighborhoods may lead to the selection of a circular route connecting them.Anticipated service coverage,transfers,desirable route shapes,and available resources usually determine the structure of the route network.Route shapes areusually constrained by their length and directness_route directness implies that route shapes are as straight as possible between connected points_,the usage of given roads,and the overlapping with other transit routes.The desirable outcome is a set of routesconnecting locations within a service area,conforming to given design criteria.For each route, frequencies and bus types are the operational characteristics typically determined through design. Calculations are based on expected passenger volumes along routes that are estimated empirically or by applying transit assignmenttechniques,under frequency requirement constraints_minimum and maximum allowedfrequencies guaranteeing safety and tolerable waiting times,respectively_,desired load factors, fleet size,and availability.These steps as well as the overall design.process have been largely based upon practical guidelines,the expert judgment of transit planners,and operators experience_Baaj and Mahmassani1991_.Two handbooks by Black _1995_and Vuchic_2004_outline frameworks to be followed by planners when designing a public transportation network that include:_1_establishing the objectives for the network;_2_ defining the operational environment of the network_road structure,demand patterns,and characteristics_;_3_developing;and_4_evaluating alternative public transportation networks.Despite the extensive use of practical guidelines and experience for designing transit networks,researchers have argued that empirical rules may not be sufficient for designing an efficient transit network and improvements may lead to better quality and more efficient services. 
For example,Fan and Machemehl_2004_noted that researchers and practitioners have been realizing that systematic and integrated approaches are essential for designing economically and operationally efficient transit networks.A systematic design process implies clear and consistent steps and associated techniques for designing a public transportation network,which is the scope of the TRNDP.TRNDP:OverviewResearch has extensively examined the TRNDP since the late1960s.In1979,Newell discussed previous research on the optimal design of bus routes and Hasselström_1981_ analyzed relevant studies and identified the major features of the TRNDP as demand characteristics,objective functions,constraints,passengerbehavior,solution techniques,and computational time for solving the problem.An extensive review of existing work on transit network design was provided by Chua_1984_who reported five types of transit system planning:_1_manual;_2_marketanalysis;_3_systems analysis;_4_systems analysis with interactive graphics;and_5_ mathematical optimization approach.Axhausemm and Smith_1984_analyzed existing heuristic algorithms for formulating the TRNDP in Europe,tested them,anddiscussed their potential implementation in the United States.Ceder and Wilson_1986_reportedprior work on the TRNDP and distinguished studies into those that deal with idealized networks and to those that focus on actual routes,suggesting that the main features of the TRNDP include demand characteristics,objectivesand constraints,and solution methods.At the same period,Van Nes et al._1988_grouped TRNDP models into six categories:_1_ analytical models for relating parameters of the public transportation system;_2_models determining the links to be used for public transportation route construction;_3_models determining routes only;_4_models assigning frequencies to a set of routes;_5_two-stage models for constructing routes and then assigning frequencies;and_6_models for simultaneously determining routes and frequencies.Spacovic et al._1994_and Spacovic and Schonfeld_1994_proposed a matrix organization and classified each study according to design parameters examined,objectives anticipated,network geometry,and demand characteristics. 
Ceder and Israeli_1997_suggested broad categorizations for TRNDP models into passenger flow simulation and mathematical programming models.Russo_1998_adopted the same categorization and noted that mathematical programming models guarantee optimal transit network design but sacrifice the level of detail in passenger representation and design parameters, while simulation models address passenger behavior but use heuristic procedures obtaining a TRNDP solution.Ceder_2001_enhanced his earlier categorization by classifying TRNDP models into simulation,ideal network,and mathematical programming models.Finally,in a recent series of studies,Fan and Machemehl_2004,2006a,b_divided TRNDP approaches into practical approaches,analytical optimization models for idealized conditions,and metaheuristic procedures for practical problems.The TRNDP is an optimization problem where objectives are defined,its constraints are determined,and a methodology is selected and validated for obtaining an optimal solution.The TRNDP is described by the objectives of the public transportation network service to be achieved, the operational characteristics and environment under which the network will operate,and the methodological approach for obtaining the optimal network design.Based on this description of the TRNDP,we propose a three-layer structure for organizing TRNDP approaches_Objectives, Parameters,and Methodology_.Each layer includes one or more items that characterize each study.The“Objectives”layer incorporates the goals set when designing a public transportation system such as the minimization of the costs of the system or the maximization of the quality of services provided.The“Parameters”layer describes the operating environment and includes both the design variables expected to be derived for the transit network_route layouts,frequencies_as well as environmental and operational parameters affecting and constraining that network_for example,allowable frequencies,desired load factors,fleet availability,demand characteristics and patterns,and so on_.Finally,the“Methodology”layer covers the logical–mathematical framework and algorithmic tools necessary to formulate and solve the TRNDP.The proposed structure follows the basic concepts toward setting up a TRNDP:deciding upon the objectives, selecting the transit network items and characteristics to be designed,setting the necessary constraints for the operating environment,and formulating and solving the problem. TRNDP:ObjectivesPublic transportation serves a very important social role while attempting to do this at the lowest possible operating cost.Objectives for designing daily operations of a public transportation system should encompass both angles.The literature suggests that most studies actually focus on both the service and economic efficiency when designing such a system. Practical goals for the TRNDP can be briefly summarized as follows_Fielding1987;van Oudheudsen et al.1987;Black1995_:_1_user benefit maximization;_2_operator cost minimization;_3_total welfare maximization;_4_capacity maximization;_5_energy conservation—protection of the environment;and_6_individual parameter optimization.Mandl_1980_indicated that public transportation systems have different objectives to meet. 
He commented,“even a single objective problem is difficult to attack”_p.401_.Often,these objectives are controversial since cutbacks in operating costs may require reductions in the quality of services.Van Nes and Bovy_2000_pointed out that selected objectives influence the attractiveness and performance of a public transportation network.According to Ceder and Wilson_1986_,minimization of generalized cost or time or maximization of consumer surplus were the most common objectives selected when developing transit network design models. Berechman_1993_agreed that maximization of total welfare is the most suitable objective for designing a public transportation system while Van Nes and Bovy_2000_argued that the minimization of total user and system costs seem the most suit able and less complicatedobjective_compared to total welfare_,while profit maximization leads to nonattractive public transportation networks.As can be seen in Table1,most studies seek to optimize total welfare,which incorporates benefits to the user and to the er benefits may include travel,access and waiting cost minimization,minimization of transfers,and maximization of coverage,while benefits for the system are maximum utilization and quality of service,minimization of operating costs, maximization of profits,and minimization of the fleet size used.Most commonly,total welfare is represented by the minimization of user and system costs.Some studies address specific objectives from the user,theoperator,or the environmental perspective.Passenger convenience,the number of transfers, profit and capacity maximization,travel time minimization,and fuel consumption minimization are such objectives.These studies either attempt to simplify the complex objective functions needed to setup the TRNDP_Newell1979;Baaj and Mahmassani1991;Chakroborty and Dwivedi2002_,or investigate specific aspects of the problem,such as objectives_Delle Site and Fillipi2001_,and the solution methodology_Zhao and Zeng2006;Yu and Yang2006_.Total welfare is,in a sense,a compromise between objectives.Moreover,as reported by some researchers such as Baaj and Mahmassani_1991_,Bielli et al._2002_,Chackroborty and Dwivedi_2002_,and Chakroborty_2003_,transit network design is inherently a multiobjective problem.Multiobjective models for solving the TRNDP have been based on the calculation of indicators representing different objectives for the problem at hand,both from the user and operator perspectives,such as travel and waiting times_user_,and capacity and operating costs _operator_.In their multiobjective model for the TRNDP,Baaj and Majmassani_1991_relied on the planner’s judgment and experience for selecting the optimal public transportation network,based on a set of indicators.In contrast,Bielli et al._2002_and Chakroborty and Dwivedi_2002_,combined indicators into an overall,weighted sum value, which served as the criterion for determining the optimaltransit network.TRNDP:ParametersThere are multiple characteristics and design attributes to consider for a realistic representation of a public transportation network.These form the parameters for the TRNDP.Part of these parameters is the problem set of decision variables that define its layout and operational characteristics_frequencies,vehicle size,etc._.Another set of design parameters represent the operating environment_network structure,demand characters,and patterns_, operational strategies and rules,and available resources for the public transportation network. 
These form the constraints needed to formulate the TRNDP and are,a-priori fixed,decided upon or assumed.Decision VariablesMost common decision variables for the TRNDP are the routes and frequencies of the public transportation network_Table1_.Simplified early studies derived optimal route spacing between predetermined parallel or radial routes,along with optimal frequencies per route_Holroyd1967; Byrne and Vuchic1972;Byrne1975,1976;Kocur and Hendrickson1982;Vaughan1986_,while later models dealt with the development of optimal route layouts and frequency determination. Other studies,additionally,considered fares_Kocur and Hendrickson1982;Morlok and Viton 1984;Chang and Schonfeld1991;Chien and Spacovic2001_,zones_Tsao and Schonfeld1983; Chang and Schonfeld1993a_,stop locations_Black1979;Spacovic and Schonfeld1994; Spacovic et al.1994;Van Nes2003;Yu and Yang2006_and bus types_Delle Site and Filippi 2001_.Network StructureSome early studies focused on the design of systems in simplified radial_Byrne1975;Black 1979;Vaughan1986_,or rectangular grid road networks_Hurdle1973;Byrne and Vuchic1972; Tsao and Schonfeld1984_.However,most approaches since the1980s were either applied to realistic,irregular grid networks or the network structure was of no importance for the proposed model and therefore not specified at all.Demand PatternsDemand patterns describe the nature of the flows of passengers expected to be accommodated by the public transportation network and therefore dictate its structure.For example,transit trips from a number of origins_for example,stops in a neighborhood_to a single destination_such as a bus terminal in the CBD of a city_and vice-versa,are characterized as many-to-one_or one-tomany_transit demand patterns.These patterns are typically encountered in public transportation systems connecting CBDs with suburbs and imply a structure of radial orparallel routes ending at a single point;models for patterns of that type have been proposed by Byrne and Vuchic_1972_,Salzborn_1972_,Byrne_1975,1976_,Kocur and Hendrickson _1982_,Morlok and Viton_1984_,Chang and Schonfeld_1991,1993a_,Spacovic and Schonfeld_1994_,Spacovic et al._1994_,Van Nes_2003_,and Chien et al._2003_.On the other hand,many-to-many demand patterns correspond to flows between multiple origins and destinations within an urban area,suggesting that the public transportation network is expected to connect various points in an area.Demand CharacteristicsDemand can be characterized either as“fixed”_or“inelastic”_or“elastic”;the later meaning that demand is affected by the performance and services provided by the public transportation network.Lee and Vuchic_2005_distinguished between two types of elastic demand:_1_demand per mode affected by transportation services,with total demand for travel kept constant;and_2_total demand for travel varying as a result of the performance of the transportation system and its modes.Fan and Machemehl_2006b_noted that the complexity of the TRNDP has led researchers intoassuming fixed demand,despite its inherent elastic nature.However,since the early1980s, studies included aspects of elastic demand in modeling the TRNDP_Hasselstrom1981;Kocur and Hendrickson1982_.Van Nes et al._1988_applied a simultaneous distribution-modal split model based on transit deterrence for estimatingdemand for public transportation.In a series of studies,Chang and Schonfeld_1991,1993a,b_ and Spacovic et al._1994_estimated demand as a direct function of travel times and fares with respect to their elasticities,while 
Chien and Spacovic (2001) followed the same approach, assuming that demand is additionally affected by headways, route spacing, and fares. Finally, studies by Leblanc (1988), Imam (1998), Cipriani et al. (2005), Lee and Vuchic (2005), and Fan and Machemehl (2006a) based demand estimation on mode choice models, estimating transit demand as a function of total demand for travel.

中文译文:公交路线网络设计问题:回顾

摘要:公共交通网络的有效设计在交通理论与实践中都备受关注,由此发展出了许多表述相关公交路线网络设计问题(TRNDP)的模型与方法。
本科生毕业设计(论文)外文资料译文( 2011 届)译文题目Java开发2.0:使用Hibernate Shards 进行切分外文资料译文规范说明一、译文文本要求1.外文译文不少于2000汉字;2.外文译文本文格式参照论文正文规范(标题、字体、字号、图表、原文信息等);3.外文原文资料信息列文末,对应于论文正文的参考文献部分,标题用“外文原文资料信息”,内容包括:1)外文原文作者;2)书名或论文题目;3)外文原文来源:□出版社或刊物名称、出版时间或刊号、译文部分所在页码□网页地址二、外文原文资料(电子文本或数字化后的图片):1.外文原文不少于10000印刷字符(图表等除外);2.外文原文若是纸质的请数字化(图片)后粘贴于译文后的原文资料处,但装订时请用纸质原文复印件附于译文后。
指导教师意见:
指导教师签名:        年    月    日

一、外文资料译文:
Java开发2.0:使用Hibernate Shards 进行切分——横向扩展关系数据库
Andrew Glover,作者兼开发人员,Beacon50
摘要:Sharding(切分)并不适合所有网站,但它是一种能够满足大数据需求的方法。
对于一些开发团队来说,切分意味着既可以继续使用可信赖的 RDBMS,又不必牺牲数据的可伸缩性和系统性能。
在Java 开发 2.0系列的这一部分中,您可以了解到切分何时起作用,以及何时不起作用,然后开始着手对一个可以处理数 TB 数据的简单应用程序进行切分。
日期:2010年8月31日    级别:中级

当关系数据库试图在单一表中存储数TB的数据时,整体性能通常会下降。
对如此大量的数据建立索引显然非常耗时,无论是读取还是写入操作都会受到影响。
NoSQL 数据存储尤其适合保存这类大型数据集,但 NoSQL 采用的是非关系型的数据库方法。
对于习惯了关系数据库的 ACID 特性和实体结构的开发人员,以及确实需要这种结构的项目来说,切分是一个令人振奋的可选方法。
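下面是一段与具体框架无关的示意代码(并非 Hibernate Shards 的真实 API,仅用于说明思路):通过对业务键做哈希取模来决定一条记录落在哪个分片上,这正是上文所说“切分”的基本做法;其中的 JDBC 地址等均为虚构的假设值。

import java.util.List;

public class ShardRouter {
    private final List<String> shardUrls;

    public ShardRouter(List<String> shardUrls) {
        this.shardUrls = shardUrls;
    }

    // 对业务键做哈希取模:同一个键总是被路由到同一个分片。
    public String shardFor(Object key) {
        int index = Math.floorMod(key.hashCode(), shardUrls.size());
        return shardUrls.get(index);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(List.of(
                "jdbc:mysql://shard0.example.com/app",   // 虚构的分片地址
                "jdbc:mysql://shard1.example.com/app",
                "jdbc:mysql://shard2.example.com/app"));
        System.out.println(router.shardFor("customer-42"));
    }
}

实际的切分框架还需要处理跨分片查询、再平衡等问题,这里只展示路由这一核心思想。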
DATA WAREHOUSEData warehousing provides architectures and tools for business executives to systematically organize, understand, and use their data to make strategic decisions. A large number of organizations have found that data warehouse systems are valuable tools in today's competitive, fast evolving world. In the last several years, many firms have spent millions of dollars in building enterprise-wide data warehouses. Many people feel that with competition mounting in every industry, data warehousing is the latest must-have marketing weapon —— a way to keep customers by learning more about their needs.“So", you may ask, full of intrigue, “what exactly is a data warehouse?"Data warehouses have been defined in many ways, making it difficult to formulate a rigorous definition. Loosely speaking, a data warehouse refers to a database that is maintained separately from an organization's operational databases. Data warehouse systems allow for the integration of a variety of application systems. They support information processing by providing a solid platform of consolidated, historical data for analysis.According to W. H. Inmon, a leading architect in the construction of data warehouse systems, “a data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision making process." This short, but comprehensive definition presents the major features of a data warehouse. The four keywords, subject-oriented, integrated, time-variant, and nonvolatile, distinguish data warehouses from other data repository systems, such as relational database systems, transaction processing systems, and file systems. Let's take a closer look at each of these key features.(1)Subject-oriented: A data warehouse is organized around major subjects, such as customer, vendor, product, and sales. Rather than concentrating on the day-to-day operations and transaction processing of an organization, a data warehouse focuses on the modeling and analysis of data for decision makers. Hence, data warehouses typically provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.(2)Integrated: A data warehouse is usually constructed by integrating multiple heterogeneous sources, such as relational databases, flat files, and on-line transaction records. Data cleaning and data integration techniques are applied to ensure consistency in naming conventions, encoding structures, attribute measures, and so on..(3)Time-variant: Data are stored to provide information from a historical perspective (e.g., the past 5-10 years). Every key structure in the data warehouse contains, either implicitly or explicitly, an element of time.(4)Nonvolatile: A data warehouse is always a physically separate store of data transformed from the application data found in the operational environment. Due to this separation, a data warehouse does not require transaction processing, recovery, and concurrency control mechanisms. It usually requires only two operations in data accessing: initial loading of data and access of data..In sum, a data warehouse is a semantically consistent data store that serves as a physical implementation of a decision support data model and stores the information on which an enterprise needs to make strategic decisions. 
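To picture the kind of subject-oriented, time-variant, summarized view described above, here is a small stand-in example with assumed data, using plain Java collections rather than any warehouse product: historical sales facts are rolled up to a coarser region-by-quarter granularity, the sort of consolidated view a warehouse keeps for analysis.

import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.summingDouble;

public class WarehouseStyleRollup {
    record Sale(String region, int quarter, double amount) {}   // a historical "fact"

    public static void main(String[] args) {
        List<Sale> facts = List.of(                              // assumed sample data
                new Sale("North", 1, 120.0), new Sale("North", 1, 80.0),
                new Sale("North", 2, 200.0), new Sale("South", 1, 150.0));

        // Summarize at a coarser granularity: total sales per region and quarter.
        Map<String, Double> rollup = facts.stream()
                .collect(groupingBy(s -> s.region() + "/Q" + s.quarter(),
                                    TreeMap::new, summingDouble(Sale::amount)));
        System.out.println(rollup);   // {North/Q1=200.0, North/Q2=200.0, South/Q1=150.0}
    }
}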
A data warehouse is also often viewed as an architecture, constructed by integrating data from multiple heterogeneous sources to support structured and/or ad hoc queries, analytical reporting, and decision making.“OK", you now ask, “what, then, is data warehousing?"Based on the above, we view data warehousing as the process of constructing and using data warehouses. The construction of a data warehouse requires data integration, data cleaning, and data consolidation. The utilization of a data warehouse often necessitates a collection of decision support technologies. This allows “knowledge workers" (e.g., managers, analysts, and executives) to use the warehouse to quickly and conveniently obtain an overview of the data, and to make sound decisionsbased on information in the warehouse. Some authors use the term “data warehousing" to refer only to the process of data warehouse construction, while the term warehouse DBMS is used to refer to the management and utilization of data warehouses. We will not make this distinction here.“How are organizations using the information from data warehouses?" Many organizations are using this information to support business decision making activities, including:(1) increasing customer focus, which includes the analysis of customer buying patterns (such as buying preference, buying time, budget cycles, and appetites for spending).(2) repositioning products and managing product portfolios by comparing the performance of sales by quarter, by year, and by geographic regions, in order to fine-tune production strategies.(3) analyzing operations and looking for sources of profit.(4) managing the customer relationships, making environmental corrections, and managing the cost of corporate assets.Data warehousing is also very useful from the point of view of heterogeneous database integration. Many organizations typically collect diverse kinds of data and maintain large databases from multiple, heterogeneous, autonomous, and distributed information sources. To integrate such data, and provide easy and efficient access to it is highly desirable, yet challenging. Much effort has been spent in the database industry and research community towards achieving this goal.The traditional database approach to heterogeneous database integration is to build wrappers and integrators (or mediators) on top of multiple, heterogeneous databases. A variety of data joiner and data blade products belong to this category. When a query is posed to a client site, a metadata dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved. These queries are then mapped and sent to local query processors. The results returned from the different sites are integrated into a global answer set. This query-driven approach requires complex information filtering and integration processes, and competes for resources with processing at local sources. It is inefficient and potentially expensive for frequent queries, especially for queries requiring aggregations.Data warehousing provides an interesting alternative to the traditional approach of heterogeneous database integration described above. Rather than using a query-driven approach, data warehousing employs an update-driven approach in which information from multiple, heterogeneous sources is integrated in advance and stored in a warehouse for direct querying and analysis. Unlike on-line transaction processing databases, data warehouses do not contain the most current information. 
However, a data warehouse brings high performance to the integrated heterogeneous database system since data are copied, preprocessed, integrated, annotated, summarized, and restructured into one semantic data store. Furthermore, query processing in data warehouses does not interfere with the processing at local sources. Moreover, data warehouses can store and integrate historical information and support complex multidimensional queries. As a result, data warehousing has become very popular in industry.1.Differences between operational database systems and data warehousesSince most people are familiar with commercial relational database systems, it is easy to understand what a data warehouse is by comparing these two kinds of systems.The major task of on-line operational database systems is to perform on-line transaction and query processing. These systems are called on-line transaction processing (OLTP) systems. They cover most of the day-to-day operations of an organization, such as, purchasing, inventory, manufacturing, banking, payroll, registration, and accounting. Data warehouse systems, on the other hand, serve users or “knowledge workers" in the role of data analysis and decision making. Such systems can organize and present data in various formats in order to accommodate the diverse needs of the different users. These systems are known as on-line analytical processing (OLAP) systems.The major distinguishing features between OLTP and OLAP are summarized as follows.(1)Users and system orientation: An OLTP system is customer-oriented and is used for transaction and query processing by clerks, clients, and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives, and analysts.(2)Data contents: An OLTP system manages current data that, typically, are too detailed to be easily used for decision making. An OLAP system manages large amounts of historical data, provides facilities for summarization and aggregation, and stores and manages information at different levels of granularity. These features make the data easier for use in informed decision making.(3)Database design: An OLTP system usually adopts an entity-relationship (ER) data model and an application -oriented database design. An OLAP system typically adopts either a star or snowflake model, and a subject-oriented database design.(4)View: An OLTP system focuses mainly on the current data within an enterprise or department, without referring to historical data or data in different organizations. In contrast, an OLAP system often spans multiple versions of a database schema, due to the evolutionary process of an organization. OLAP systems also deal with information that originates from different organizations, integrating information from many data stores. Because of their huge volume, OLAP data are stored on multiple storage media.(5). Access patterns: The access patterns of an OLTP system consist mainly of short, atomic transactions. Such a system requires concurrency control and recovery mechanisms. 
However, accesses to OLAP systems are mostly read-only operations (since most data warehouses store historical rather than up-to-date information), although many could be complex queries.Other features which distinguish between OLTP and OLAP systems include database size, frequency of operations, and performance metrics and so on.2.But, why have a separate data warehouse?“Since operational databases store huge amounts of data", you observe, “why not perform on-line analytical processing directly on such databases instead of spending additional time and resources to construct a separate data warehouse?"A major reason for such a separation is to help promote the high performance of both systems. An operational database is designed and tuned from known tasks and workloads, such as indexing and hashing using primary keys, searching for particular records, and optimizing “canned" queries. On the other hand, data warehouse queries are often complex. They involve the computation of large groups of data at summarized levels, and may require the use of special data organization, access, and implementation methods based on multidimensional views. Processing OLAP queries in operational databases would substantially degrade the performance of operational tasks.Moreover, an operational database supports the concurrent processing of several transactions. Concurrency control and recovery mechanisms, such as locking and logging, are required to ensure the consistency and robustness of transactions. An OLAP query often needs read-only access of data records for summarization and aggregation. Concurrency control and recovery mechanisms, if applied for such OLAP operations, may jeopardize the execution of concurrent transactions and thus substantially reduce the throughput of an OLTP system.Finally, the separation of operational databases from data warehouses is based on the different structures, contents, and uses of the data in these two systems. Decision support requires historical data, whereas operational databases do not typically maintain historical data. In this context, the data in operational databases, though abundant, is usually far from complete for decision making. Decision support requires consolidation (such as aggregation and summarization) of data from heterogeneous sources, resulting in high quality, cleansed and integrated data. In contrast, operational databases contain only detailed raw data, such as transactions, which need to be consolidated before analysis. Since the two systems provide quite different functionalities and require different kinds of data, it is necessary to maintain separate databases.数据仓库数据仓库为商务运作提供了组织结构和工具,以便系统地组织、理解和使用数据进行决策。
Database Management Systems( 3th Edition ),Wiley ,2004, 5-12A introduction to Database Management SystemRaghu RamakrishnanA database (sometimes spelled data base) is also called an electronic database , referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval , modification, and deletion of data in conjunction with various data-processing operations .Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage , and each field typically contains information pertaining to one aspect or attribute of the entity described by the database . Using keywords and various sorting commands, users can rapidly search , rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregate of data.Complex data relationships and linkages may be found in all but the simplest databases .The system software package that handles the difficult tasks associated with creating ,accessing, and maintaining database records is called a database management system(DBMS).The programs in a DBMS package establish an interface between the database itself and the users of the database.. (These users may be applications programmers, managers and others with information needs, and various OS programs.)A DBMS can organize, process, and present selected data elements form the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren’t available in regular reports. These questions might initially be vague and/or poorly defined ,but people can “browse” through the database until they have the needed information. In short, the DBMS will “manage” the stored data items andassemble the needed items from the common database in response to the queries of those who aren’t programmers.A database management system (DBMS) is composed of three major parts:(1)a storage subsystem that stores and retrieves data in files;(2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add , delete, maintain, and update the data;(3)and an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems;Managers: who require more up-to-data information to make effective decision Customers: who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.Users: who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.Organizations : that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.The Database ModelA data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented(such as tree, tables, and so on ).The manipulative part of the model specifies the operation with which to add, delete, display, maintain, print, search, select, sort and update the data. 
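The split between a storage subsystem and a modeling-and-manipulation subsystem can be pictured with a deliberately tiny in-memory stand-in. This is an illustrative sketch only, not how a production DBMS is built: records are maps of field names to values, and the add, delete and search operations play the role of the manipulative part of the model.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ToyDbms {
    private final List<Map<String, String>> records = new ArrayList<>();   // "storage subsystem"

    public void insert(Map<String, String> record) {                       // add
        records.add(record);
    }
    public void deleteWhere(String field, String value) {                  // delete
        records.removeIf(r -> value.equals(r.get(field)));
    }
    public List<Map<String, String>> selectWhere(String field, String value) {  // search / select
        return records.stream().filter(r -> value.equals(r.get(field))).toList();
    }

    public static void main(String[] args) {
        ToyDbms db = new ToyDbms();                 // the "interface" here is just method calls
        db.insert(Map.of("lastName", "Smith", "city", "Leeds"));
        db.insert(Map.of("lastName", "Jones", "city", "York"));
        System.out.println(db.selectWhere("city", "York"));
    }
}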
Hierarchical ModelThe first database management systems used a hierarchical model-that is-they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used that is ,the record at the root of a tree will be accessed first, then records one level below the root ,and so on.The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you have known, an organization char often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels.Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.Relational ModelA major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management called relational model ,which uses a table as its data structure.The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows called and columns called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person’s last name ,first name ,and street address.Structured query language(SQL)is a query language for manipulating data in a relational database .It is nonprocedural or declarative, in which the user need only specify an English-like description that specifies the operation and the described record or combination of records. A query optimizer translates the description into a procedure to perform the database manipulation.Network ModelThe network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network mode historically has had a performance advantage over other database models. Today , such performance characteristics are only important in high-volume ,high-speed transaction processingsuch as automatic teller machine networks or airline reservation system.Both hierarchical and network databases are application specific. If a new application is developed ,maintaining the consistency of databases in different applications can be very difficult. 
Object Model

The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks.

The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database

Similarly, a distributed database is one in which different parts of the database reside on physically separated computers. One goal of distributed databases is access to information without regard to where the data might be stored. Keep in mind that once the users and their data are separated, communication and networking concepts come into play.

Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback to some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, or peer-to-peer, basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing the database.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server. This is useful, for example, if the files are large and require fast access; in such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer.

Advantages of the latter include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are being moved across the network, and problems can arise when updating files. Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.
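As a rough sketch of the record-locking idea, and assuming a single shared in-memory record rather than a real file, the example below uses Python's threading module so that the first update request finishes before any later update proceeds, while readers are not blocked. The record contents and function names are invented for the illustration.

```python
# A minimal sketch of record locking on one shared in-memory record.
# The lock makes later update requests wait until the first is satisfied;
# reading does not take the exclusive lock. All names are invented.
import threading

record = {"id": 42, "balance": 100}
record_lock = threading.Lock()

def update_balance(amount):
    """Change the record; other writers block here until this request is done."""
    with record_lock:
        current = record["balance"]
        record["balance"] = current + amount

def read_balance():
    """Reading does not change the record, so no exclusive lock is needed."""
    return record["balance"]

threads = [threading.Thread(target=update_balance, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(read_balance())   # 150: each of the five updates was applied in turn
```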
A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query language part on the personal computer simply sends the query across the network to the database server and requests to be notified when the data are found. (A minimal sketch of this division of labor closes this section.)

Examples of distributed database systems can be found in the engineering world. Sun's Network File System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data. DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.
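To close, a hedged, minimal sketch of the client/server division of labor described above: the client merely sends a query string over the network, and the database server runs it against its own store and returns only the resulting rows. The port number, table and query are invented for the illustration, and a real product would add authentication, concurrency control and a proper wire protocol.

```python
# A minimal client/server sketch: the client sends SQL text, the server executes
# it against its own database and ships back only the results as JSON.
import json
import socket
import sqlite3
import threading

HOST, PORT = "127.0.0.1", 54321
ready = threading.Event()

def run_server():
    # The server owns the database; clients never touch the stored data directly.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (order_id INTEGER, status TEXT)")
    db.execute("INSERT INTO orders VALUES (1, 'shipped'), (2, 'open')")
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                                   # tell the client we are listening
        conn, _ = srv.accept()
        with conn:
            query = conn.recv(4096).decode()          # the client's SQL text
            rows = db.execute(query).fetchall()       # the server does the work
            conn.sendall(json.dumps(rows).encode())   # only results cross the network

server = threading.Thread(target=run_server, daemon=True)
server.start()
ready.wait()

# The "query language part" on the personal computer: send the query, await the answer.
with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"SELECT * FROM orders WHERE status = 'open'")
    print(json.loads(client.recv(4096).decode()))     # [[2, 'open']]
server.join()
```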