Foreign Literature Translation for the Computer Science Major (Literature Translation)
Name: Liu Junlin    Class: Communications 143    Student ID: 2014101108

Computer Language and Programming

I. Introduction
Programming languages, in computer science, are the artificial languages used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise.
Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address a particular kind of computing problem or for use on a particular model of computer system. For instance, programming languages such as FORTRAN and COBOL were written to solve certain general types of programming problems—FORTRAN for scientific applications, and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable, meaning that they may be used to program many types of computers. Other languages, such as machine languages, are designed to be used by one specific model of computer system, or even by one specific computer in certain research applications. The most commonly used programming languages are highly portable and can be used to effectively solve diverse types of computing problems. Languages like C, PASCAL, and BASIC fall into this category.

II. Language Types
Programming languages can be classified as either low-level languages or high-level languages. Low-level programming languages, or machine languages, are the most basic type of programming languages and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of computer. High-level languages are programming languages that must first be translated into a machine language before they can be understood and processed by a computer. Examples of high-level languages are C, C++, PASCAL, and FORTRAN. Assembly languages are intermediate languages that are very close to machine languages and do not have the level of linguistic sophistication exhibited by other high-level languages, but they must still be translated into machine language.

1. Machine Languages
In machine languages, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the main computer memory (Random Access Memory, or RAM); (2) a simple operation to perform, such as adding the two numbers together; (3) where in the main memory to put the result of this simple operation; and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all programmed in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.
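To make the bit-pattern idea concrete, here is a minimal sketch, not taken from the text, of how a toy machine instruction like the one above could be split into fields; the 5-bit operation code and 4-bit register numbers are invented purely for illustration:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical 13-bit instruction layout, for illustration only:
    // [ opcode:5 | source register:4 | destination register:4 ]
    int main() {
        uint16_t instruction = 0b1001011001011;      // "10010 1100 1011" from the text
        unsigned opcode = (instruction >> 8) & 0x1F; // top 5 bits: the operation
        unsigned src    = (instruction >> 4) & 0xF;  // next 4 bits: register A
        unsigned dst    = instruction & 0xF;         // last 4 bits: register B
        std::printf("opcode=%u src=%u dst=%u\n", opcode, src, dst);
        // A real CPU would now perform the operation, e.g. add the contents
        // of register src into register dst, exactly as the text describes.
        return 0;
    }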
2. High-Level Languages
High-level languages are relatively sophisticated sets of statements utilizing words and syntax from human language. They are more similar to normal human languages than assembly or machine languages and are therefore easier to use for writing complicated programs. These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in an assembly language.

3. Assembly Languages
Computer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine-language instruction. An assembly language statement is composed with the aid of easy-to-remember commands. The command to add the contents of storage register A to the contents of storage register B might be written ADD B, A in a typical assembly language statement. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assembly languages when it is important to minimize the time it takes to run a program, because the translation from assembly language to machine language is relatively simple. Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.

III. Classification of High-Level Languages
High-level languages are commonly classified as procedure-oriented, functional, object-oriented, or logic languages. The most common high-level languages today are procedure-oriented languages. In these languages, one or more related blocks of statements that perform some complete function are grouped together into a program module, or procedure, and given a name such as “procedure A.” If the same sequence of operations is needed elsewhere in the program, a simple statement can be used to refer back to the procedure. In essence, a procedure is just a mini-program. A large program can be constructed by grouping together procedures that perform different tasks. Procedural languages allow programs to be shorter and easier for the computer to read, but they require the programmer to design each procedure to be general enough to be used in different situations; a simple sketch of such a procedure appears below.
Functional languages treat procedures like mathematical functions and allow them to be processed like any other data in a program. This allows a much higher and more rigorous level of program construction. Functional languages also allow variables—symbols for data that can be specified and changed by the user as the program is running—to be given values only once. This simplifies programming by reducing the need to be concerned with the exact order of statement execution, since a variable does not have to be redeclared, or restated, each time it is used in a program statement. Many of the ideas from functional languages have become key parts of many modern procedural languages.
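As a minimal illustration of the procedure idea above (the names here are invented for this sketch), a block of statements that computes an average is grouped into a named procedure and then reused from two places:

    #include <cstdio>

    // A procedure: a named mini-program that can be referred to
    // from anywhere else in the program.
    double average(double a, double b) {
        return (a + b) / 2.0;
    }

    int main() {
        // The same sequence of operations is reused by simply
        // referring back to the procedure by name.
        std::printf("%f\n", average(3.0, 5.0));
        std::printf("%f\n", average(10.0, 20.0));
        return 0;
    }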
Object-oriented languages are outgrowths of functional languages. In object-oriented languages, the code used to write the program and the data processed by the program are grouped together into units called objects. Objects are further grouped into classes, which define the attributes objects must have. A simple example of a class is the class Book. Objects within this class might be Novel and Short Story. Objects also have certain functions associated with them, called methods. The computer accesses an object through the use of one of the object’s methods. The method performs some action to the data in the object and returns this value to the computer. Classes of objects can also be further grouped into hierarchies, in which objects of one class can inherit methods from another class. The structure provided in object-oriented languages makes them very useful for complicated programming tasks.
Logic languages use logic as their mathematical base. A logic program consists of sets of facts and if-then rules, which specify how one set of facts may be deduced from others, for example: If the statement X is true, then the statement Y is false. In the execution of such a program, an input statement can be logically deduced from other statements in the program. Many artificial intelligence programs are written in such languages.

IV. Language Structure and Components
Programming languages use specific types of statements, or instructions, to provide functional structure to the program. A statement in a program is a basic sentence that expresses a simple idea—its purpose is to give the computer a basic instruction. Statements define the types of data allowed, how data are to be manipulated, and the ways that procedures and functions work. Programmers use statements to manipulate common components of programming languages, such as variables and macros (mini-programs within a program).
Statements known as data declarations give names and properties to elements of a program called variables. Variables can be assigned different values within the program. The properties variables can have are called types, and they include such things as what possible values might be saved in the variables, how much numerical accuracy is to be used in the values, and how one variable may represent a collection of simpler values in an organized fashion, such as a table or array. In many programming languages, a key data type is a pointer. Variables that are pointers do not themselves have values; instead, they have information that the computer can use to locate some other variable—that is, they point to another variable.
An expression is a piece of a statement that describes a series of computations to be performed on some of the program’s variables, such as X+Y/Z, in which the variables are X, Y, and Z and the computations are addition and division. An assignment statement assigns a variable a value derived from some expression, while conditional statements specify expressions to be tested and then used to select which other statements should be executed next.
Procedure and function statements define certain blocks of code as procedures or functions that can then be returned to later in the program. These statements also define the kinds of variables and parameters the programmer can choose and the type of value that the code will return when an expression accesses the procedure or function. Many programming languages also permit mini translation programs called macros. Macros translate segments of code that have been written in a language structure defined by the programmer into statements that the programming language understands. A short sketch following this section ties several of these components together.
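Here is a minimal C++ sketch (all names invented for illustration) of the components just described: the Book/Novel class hierarchy from Section III, together with data declarations, a pointer, an expression, an assignment, and a conditional statement from Section IV:

    #include <iostream>
    #include <string>

    // A class defines the attributes and methods its objects must have.
    class Book {
    public:
        std::string title;                 // an attribute
        virtual std::string kind() const { // a method
            return "book";
        }
        virtual ~Book() = default;
    };

    // Novel inherits the attributes and methods of Book.
    class Novel : public Book {
    public:
        std::string kind() const override { return "novel"; }
    };

    int main() {
        double X = 6.0, Y = 4.0, Z = 2.0;  // data declarations with a type
        double result = X + Y / Z;         // an assignment from the expression X+Y/Z

        Novel n;
        n.title = "An Example Novel";
        Book* p = &n;                      // a pointer: it locates another variable

        if (result > 7.0) {                // a conditional selects what runs next
            std::cout << p->kind() << ": " << n.title << "\n";
        }
        return 0;
    }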
V. History
Programming languages date back almost to the invention of the digital computer in the 1940s. The first assembly languages emerged in the late 1950s with the introduction of commercial computers. The first procedural languages were developed in the late 1950s to early 1960s: FORTRAN, created by John Backus, and then COBOL, created by Grace Hopper. The first functional language was LISP, written by John McCarthy in the late 1950s. Although heavily updated, all three languages are still widely used today. In the late 1960s, the first object-oriented languages, such as SIMULA, emerged. Logic languages became well known in the mid-1970s with the introduction of PROLOG, a language used to program artificial intelligence software. During the 1970s, procedural languages continued to develop with ALGOL, BASIC, PASCAL, C, and Ada. SMALLTALK was a highly influential object-oriented language that led to the merging of object-oriented and procedural languages in C++ and, more recently, in JAVA. Although pure logic languages have declined in popularity, variations have become vitally important in the form of relational languages for modern databases, such as SQL.
Foreign Literature: Original Text and Translation

Original Text: DATABASE
A database may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve one or more applications in an optimal fashion. The data are stored so that they are independent of the programs which use the data. A common and controlled approach is used in adding new data and in modifying and retrieving existing data within the database. One system is said to contain a collection of databases if they are entirely separate in structure. A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, the DBMS, and the database.

THE INTRODUCTION TO DATABASE MANAGEMENT SYSTEMS
The term database is often used to describe a collection of related files that is organized into an integrated structure that provides different people varied access to the same data. In many cases this resource is located in different files in different departments throughout the organization, often known only to the individuals who work with their specific portion of the total information. In these cases, the potential value of the information goes unrealized because a person in other departments who may need it does not know of it or it cannot be accessed efficiently. In an attempt to organize their information resources and provide for timely and efficient access, many companies have implemented databases.
A database is a collection of related data. By data, we mean known facts that can be recorded and that have implicit meaning. For example, consider the names, telephone numbers, and addresses of all the people you know. You may have recorded this data in an indexed address book, or you may have stored it on a diskette using a personal computer and software such as DBASE III or Lotus 1-2-3. This is a collection of related data with an implicit meaning and hence is a database.
The above definition of database is quite general. For example, we may consider the collection of words that make up this page of text to be related data; however, the common use of the term database is usually more restricted. A database has the following implicit properties:
● A database is a logically coherent collection of data with some inherent meaning. A random assortment of data cannot be referred to as a database.
● A database is designed, built, and populated with data for a specific purpose. It has an intended group of users and some preconceived applications in which these users are interested.
● A database represents some aspect of the real world, sometimes called the miniworld. Changes to the miniworld are reflected in the database.
In other words, a database has some source from which data are derived, some degree of interaction with events in the real world, and an audience that is actively interested in the contents of the database.
A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users.
Several major trends are emerging that enhance the value and usefulness of database management systems:
● Managers who require more up-to-date information to make effective decisions.
● Customers who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
● Users who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
● Organizations that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.
A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren’t available in regular reports. These questions might initially be vague and/or poorly defined, but people can “browse” through the database until they have the needed information. In short, the DBMS will “manage” the stored data items and assemble the needed items from the common database in response to the queries of those who aren’t programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information. The availability of a DBMS, however, offers users a much faster alternative communications path.

DATABASE QUERY
If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs is customized for each data entry and updating function.
Software for personal computers that performs some of the DBMS functions has been very popular. Personal computers were intended for use by individuals for personal information storage and processing. Small enterprises and professionals like doctors, architects, engineers, lawyers, and so on have also used these machines extensively. By the nature of their intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the only aspect of a database system that is important here is data independence. Data independence, as stated earlier, means that application programs and user queries need not recognize the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage.
The user can store, access, and manipulate data at a high level (close to the application) and be totally shielded from the low-level (close to the machine) details of data organization.

DBMS STRUCTURING TECHNIQUES
Spatial data management has been an active area of research in the database field for two decades, with much of the research being focused on developing data structures for storing and indexing spatial data. However, no commercial database system provides facilities for directly defining and storing spatial data, and for formulating queries based on search conditions on spatial data.
There are two components to temporal data management: history data management and version management. Both have been the subjects of research for over a decade. The troublesome aspect of temporal data management is that the boundary between applications and database systems has not been clearly drawn. Specifically, it is not clear how much of the typical semantics and facilities of temporal data management can and should be directly incorporated in a database system, and how much should be left to applications and users. In this section, we will provide a list of short-term research issues that should be examined to shed light on this fundamental question.
The focus of research into history data management has been on defining the semantics of time and time intervals, and on issues related to understanding the semantics of queries and updates against history data stored in an attribute of a record. Typically, in the context of relational databases, a temporal attribute is defined to hold a sequence of history data for the attribute. A history datum consists of a data item and a time interval for which the data item is valid. A query may then be issued to retrieve history data for a specified time interval for the temporal attribute. The mechanism for supporting temporal attributes is similar to that for supporting set-valued attributes in a database system, such as UniSQL.
In the absence of support for temporal attributes, application developers who need to model and store history data have simply simulated temporal attributes by creating an attribute for the time interval, along with the “temporal” attribute. This of course may result in duplication of records in a table, and in more complicated search predicates in queries. The one necessary topic of research in history data management is to quantitatively establish the performance (and even productivity) differences between using a database system that directly supports temporal attributes and using a conventional database system that supports neither set-valued attributes nor temporal attributes.
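A minimal sketch (with invented names) of the simulation just described: each row carries the attribute value plus an explicit validity interval, so history is kept as duplicated records rather than as a true temporal attribute.

    #include <string>
    #include <vector>
    #include <iostream>

    // Simulating a temporal attribute: each record repeats the key and
    // pairs the attribute value with the time interval for which it is valid.
    struct SalaryHistoryRow {
        std::string employee;  // record key, duplicated for every interval
        double      salary;    // the "temporal" attribute
        int         validFrom; // start of validity interval (e.g., year)
        int         validTo;   // end of validity interval
    };

    // A query for a specific time needs an explicit predicate on the
    // interval columns, which a temporal DBMS would otherwise supply itself.
    double salaryIn(const std::vector<SalaryHistoryRow>& table,
                    const std::string& who, int year) {
        for (const auto& r : table)
            if (r.employee == who && r.validFrom <= year && year < r.validTo)
                return r.salary;
        return 0.0; // not employed in that interval
    }

    int main() {
        std::vector<SalaryHistoryRow> history = {
            {"fred", 30000.0, 1990, 1993},
            {"fred", 34000.0, 1993, 1996},
        };
        std::cout << salaryIn(history, "fred", 1994) << "\n"; // prints 34000
        return 0;
    }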
Data security, integrity, and independence
Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to subsets of the database, called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data.
Data integrity refers to the accuracy, correctness, or validity of the data in the database. In a database system, data integrity means safeguarding the data against invalid alteration or destruction. In large on-line database systems, data integrity becomes a more severe problem, and two additional complications arise. The first has to do with many users accessing the database concurrently. For example, if two travel agents book the same seat on the same flight at nearly the same time, the first agent’s booking may be lost. In such cases the technique of locking the record or field provides the means for preventing one user from accessing a record while another user is updating the same record.
The second complication relates to hardware, software, or human error during the course of processing, and involves the database transaction, which is a group of database modifications treated as a single unit. For example, an agent booking an airline reservation involves several database updates (i.e., adding the passenger’s name and address and updating the seats-available field), which comprise a single transaction. The database transaction is not considered to be completed until all updates have been completed; otherwise, none of the updates will be allowed to take place.
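The all-or-nothing behavior just described can be sketched with the SQLite C API; SQLite is not mentioned in the text, and the database, table, and column names here are invented, with both tables assumed to exist already.

    #include <sqlite3.h>

    // Group several updates into one transaction: either both take place,
    // or neither does.
    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open("airline.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db, "BEGIN", nullptr, nullptr, nullptr);
        int rc1 = sqlite3_exec(db,
            "INSERT INTO passengers(name, address) VALUES('Fred','12 Main St')",
            nullptr, nullptr, nullptr);
        int rc2 = sqlite3_exec(db,
            "UPDATE flights SET seats_available = seats_available - 1 "
            "WHERE flight_no = 'BA123'",
            nullptr, nullptr, nullptr);

        if (rc1 == SQLITE_OK && rc2 == SQLITE_OK)
            sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);   // both updates take place
        else
            sqlite3_exec(db, "ROLLBACK", nullptr, nullptr, nullptr); // neither does

        sqlite3_close(db);
        return 0;
    }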
An important point about database systems is that the database should exist independently of any of the specific applications. Traditional data processing applications are data dependent. When a DBMS is used, the detailed knowledge of the physical organization of the data does not have to be built into every application program. The application program asks the DBMS for data by field name; for example, a coded representation of “give me customer name and balance due” would be sent to the DBMS. Without a DBMS the programmer must reserve space for the full structure of the record in the program. Any change in data structure requires changes in all the application programs.

Data Base Management System (DBMS)
The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a data base management system (DBMS). A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a “data model”. At the present time, there are four underlying structures for database management systems. They are:
List structures.
Relational structures.
Hierarchical (tree) structures.
Network structures.

Management Information System (MIS)
An MIS can be defined as a network of computer-based data processing procedures developed in an organization and integrated as necessary with manual and other procedures for the purpose of providing timely and effective information to support decision making and other necessary management functions.
One of the most difficult tasks of the MIS designer is to develop the information flow needed to support decision making. Generally speaking, much of the information needed by managers who occupy different levels and who have different responsibilities is obtained from a collection of existing information systems (or subsystems).

Structured Query Language (SQL)
SQL is a database processing language endorsed by the American National Standards Institute. It is rapidly becoming the standard query language for accessing data on relational databases. With its simple, powerful syntax, SQL represents a great progress in database access for all levels of management and computing professionals.
SQL falls into two forms: interactive SQL and embedded SQL. Embedded SQL usage is close to traditional programming in third-generation languages. It is the interactive use of SQL that makes it most applicable for the rapid answering of ad hoc queries. With an interactive SQL query you just type in a few lines of SQL and you get the database response immediately on the screen.
Foreign Literature Material

1. Software Engineering
Software is the sequence of instructions in one or more programming languages that comprises a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.
The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.
Object-oriented methodology is an approach to system lifecycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirements to define a system architecture. The data and action components are encapsulated, that is, they are combined together to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained only to communicate via messages. At a minimum, messages indicate the receiver and the action requested. Messages may be more elaborate, including the sender and the data to be acted upon.
That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used. Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the types of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.
Another reason for continuing problems in application development is that we aren’t always free to do what we like, and it is hard to change habits and cultures from the old way of doing things, as well as to get users to agree with a new sequence of events or an unfamiliar format for documentation.
You might ask then, if many organizations don’t use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineering can speed change in organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.
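A minimal sketch (names invented) of the encapsulation idea above: the data and its allowable actions are combined into one abstract data type, and other code can interact with it only by sending a message, that is, by calling a method.

    #include <iostream>

    // An encapsulated object: the data and the allowable processes
    // against that data are combined into one abstract data type.
    class Account {
        double balance = 0.0; // hidden data: reachable only via the messages below
    public:
        // Messages name the receiver and the action requested.
        void deposit(double amount) { if (amount > 0) balance += amount; }
        double report() const { return balance; }
    };

    int main() {
        Account a;
        a.deposit(100.0);              // send a message; direct access to
        std::cout << a.report() << "\n"; // a.balance would not even compile
        return 0;
    }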
2. Data Base System
1. Introduction
The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s. Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize their value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.
The files of data which computers can use are growing at a staggering rate. The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change the areas of what it is possible for man to do. At the end of this century, historians will look back to the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.
Some of the most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information.
The vast majority of this information is not yet computerized. However, the cost of data storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, etc. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.
There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times but whose capacities are smaller than those of disks.
Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies which are currently working in research labs which may replace disks and may provide very large microsecond-access-time devices. A steady stream of new storage devices is thus likely to reach the marketplace over the next five years, rapidly lowering the cost of storing data.
Given the available technologies, it is likely that on-line databases will use two or three levels of storage: one solid-state with microsecond access times, and one electromagnetic with access times of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.
Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years’ time, stores of this size may be common.
A particularly important consideration in database design is to store the data so that they can be used for a wide variety of applications and so that the way they are used can be changed quickly and easily. On computer installations prior to the database era it has been remarkably difficult to change the way data are used.
Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.
To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of database design are important. First, it should be possible to interrogate and search the database without the lengthy operation of writing programs in conventional programming languages. Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.
The work of designing a database is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of data base management system (DBMS) to manage data.
Database design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach.
The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:
(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) Physical database design process.
Developing a database structure from user requirements is called database design. Most practitioners agree that there are two separate phases to the database design process: the design of a logical database structure that is processable by the database management system (DBMS) and describes the user’s view of data, and the selection of a physical structure such as the indexed sequential or direct access method of the intended DBMS.
Current database design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them. The advent of the DBMS revised the emphasis in data and program design approaches.
There are many interlocking questions in the design of database systems and many types of technique that one can use in answer to them; so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.
There will soon be new storage devices, new software techniques, and new types of databases. The details will change, but most of the principles will remain.
Therefore, the reader should concentrate on the principles.

2. Data Base System
The conceptions used for describing files and databases have varied substantially in the same organization.
A database may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use the data; a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the database. One system is said to contain a collection of databases if they are entirely separate in structure.
A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, the DBMS, and the database.
One of the most important characteristics of most databases is that they will constantly need to change and grow. Easy restructuring of the database must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a database can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.
The term data independence is often quoted as being one of the main attributes of a database. It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, database organization is concerned with the representation of relationships between data items and records, as well as with how and where the data are stored. A database used for many applications can have multiple interconnections between the data items about which we may wish to record. It can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes; one attribute has a special significance in that it identifies the entity.
An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as that key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.
If the function of a database were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. Describing the data logically is different from describing them physically.
The logical database description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted.
We must distinguish between a record type and an instance of the record. When we talk about a “personnel record”, this is really a record type. There are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a database.
Many different subschemas can be derived from one schema.
The schema and the subschemas are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.
A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a “data model”. The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small databases, many databases involve millions of bytes, and an inefficient implementation can be disastrous.
We will discuss the data models in the following.

3. Three Data Models
Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems. These are:
Relational.
Hierarchical.
Network.
The hierarchical and network structures have been used for DBMS since the 1960s. The relational structure was introduced in the early 1970s.
In the relational model, the entities and their relationships are represented by two-dimensional tables. Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.
The end user is presented with a simple data model. His or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically.
The relational data model removes the details of storage structure and access strategy from the user interface. The model provides a relatively higher degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.
Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.
The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root. The nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at the succeeding levels. The node in the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent or many children nodes. The sketch below illustrates this parent-child structure.
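A minimal sketch (invented names) of a hierarchical node: each node carries its attributes and owns its children, so every child is reachable only through its parent, as the text goes on to note.

    #include <memory>
    #include <string>
    #include <vector>

    // One node of a hierarchical (tree) data model: a collection of
    // attributes plus dependent child nodes at the next level.
    struct Node {
        std::string entityName;                      // attribute describing the entity
        std::vector<std::unique_ptr<Node>> children; // dependents at the next level

        Node* addChild(std::string name) {
            children.push_back(std::make_unique<Node>(Node{std::move(name), {}}));
            return children.back().get();
        }
    };

    int main() {
        Node root{"department", {}};           // the highest node is the root
        Node* emp = root.addChild("employee"); // root becomes the parent node
        emp->addChild("dependent");            // reachable only through "employee"
        return 0;
    }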
The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as the basic structure. There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in redundancy in stored data.
The network data model interconnects the entities of an enterprise into a network. In the network data model a database consists of a number of areas. An area contains records. In turn, a record may consist of fields. A set, which is a grouping of records, may reside in an area or span a number of areas. A set type is based on the owner record type and the member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. The network data model is very complex; the application programmer must be familiar with the logical structure of the database.

4. Logical Design and Physical Design
Logical design of databases is mainly concerned with superimposing the constructs of the database management system on the logical data model. There are three main models: hierarchical, relational, and network, as we have mentioned above.
The physical model is a framework of the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrences of the groupings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data.
The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on the keys. The physical designer must have expertise in knowledge of the DBMS functions, an understanding of the characteristics of direct access devices, and knowledge of the applications.
Many databases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.
Records exist on storage devices in a given physical sequence. This sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need records in different sequences.
The most common method of ordering records is to have them in sequence by a key—that key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file.
If the data records are laid out sequentially by key, the index for that key can be much smaller than if they are nonsequential.
Hashing has been used for addressing random-access storages since they first came into existence in the mid-1950s, but nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek; and second, insertions and deletions can be handled without added complexity. A toy sketch of hash addressing follows.
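A minimal sketch (the key and bucket layout are invented) of the hash-addressing idea above: the record's key is hashed straight to a bucket address, so most records are found with a single "seek", and inserting a record needs no index maintenance.

    #include <array>
    #include <optional>
    #include <string>
    #include <iostream>

    // Toy hash addressing: the key hashes directly to a bucket address.
    // (Toy code: it assumes the table never fills up completely.)
    struct Record { int key; std::string data; };

    constexpr std::size_t kBuckets = 8;
    std::array<std::optional<Record>, kBuckets> storage;

    std::size_t bucketOf(int key) { return static_cast<std::size_t>(key) % kBuckets; }

    void insert(Record r) {
        std::size_t b = bucketOf(r.key);
        while (storage[b])             // collision: probe the next bucket
            b = (b + 1) % kBuckets;
        storage[b] = std::move(r);     // no index has to be updated
    }

    const Record* find(int key) {
        std::size_t b = bucketOf(key);
        while (storage[b]) {           // usually a single probe ("one seek")
            if (storage[b]->key == key) return &*storage[b];
            b = (b + 1) % kBuckets;
        }
        return nullptr;
    }

    int main() {
        insert({42, "customer A"});
        insert({50, "customer B"});    // 50 % 8 == 42 % 8: handled by probing
        if (const Record* r = find(50)) std::cout << r->data << "\n";
        return 0;
    }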
Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.
Many data-base systems also use chains to interconnect records. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers. The software that is used to retrieve the chained records will make them appear to the application programmer as a contiguous logical file.
The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise need to be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.
Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5. Data Description Languages
It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages. A data description language is the means of declaring to the data-base management system what data structures will be used.
A data description language giving a logical data description should perform the following functions:
It should give a unique name to each data-item type, file type, data base, and other data subdivision.
It should identify the types of data subdivision, such as data item, segment, record, and data-base file.
It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).
It may define the length of the data items and the range of the values that a data item can assume.
It may specify the sequence of records in a file or the sequence of groups of records in the data base.
It may specify means of checking for errors in the data.
It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level and, if necessary, may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined. It is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.
A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used or of searching requirements, so that the physical technique can be selected optimally, but such indications should not be logically limiting.
Most DBMS have their own languages for defining the schemas that are used. In most cases these data description languages are different from ordinary programming languages, because ordinary programming languages do not have the capability to define the variety of relationships that may exist in the schemas.
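For a concrete flavor of these functions in a modern relational setting (not part of the original text; SQLite and all table and column names here are illustrative assumptions), a single data definition statement names a record type, names and types its data items, constrains their values, and declares the identifying key:

    #include <sqlite3.h>

    int main() {
        sqlite3* db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

        // A relational DDL statement performing several of the functions
        // listed above: unique names, data-item types and lengths, a legal
        // value range, and the identifying (primary) key.
        const char* ddl =
            "CREATE TABLE employee ("
            "  emp_no  INTEGER PRIMARY KEY,"        // unique identifier
            "  name    VARCHAR(40) NOT NULL,"       // item type and length
            "  salary  NUMERIC CHECK (salary >= 0)" // legal value range
            ")";
        sqlite3_exec(db, ddl, nullptr, nullptr, nullptr);

        sqlite3_close(db);
        return 0;
    }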
Software Engineering Graduation Thesis: Literature Translation (Chinese-English)

Student Graduation Design (Thesis) Foreign Literature Translation
Student name:          Student ID:
Major: Software Engineering
Translation title (Chinese and English): Qt Creator Whitepaper
Source of translated text: Qt network
Supervisor's review signature:

Translated text:
Qt Creator Whitepaper
Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework.
Qt is a framework for developing applications and user interfaces once and deploying them across multiple desktop and mobile operating systems.
This paper introduces Qt Creator and presents the features it offers Qt developers throughout the application development life cycle.

Introduction to Qt Creator
One of Qt Creator's main advantages is that it allows a development team to share a project across different development platforms (Microsoft Windows, Mac OS X, and Linux) with a common tool for development and debugging.
Qt Creator's main goal is to meet the needs of Qt developers who are looking for simplicity, usability, productivity, extensibility, and openness, while aiming to lower the barrier to entry for newcomers to Qt.
Qt Creator's key features let developers accomplish the following tasks:
- Get started with Qt application development quickly and easily, using project wizards and fast access to recent projects and sessions.
- Design user interfaces for Qt widget-based applications with the integrated editor, Qt Designer.
- Develop applications with an advanced C++ code editor that provides powerful features such as code completion, code snippets, refactoring, and viewing the outline of a file (that is, its symbol hierarchy).
- Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Nokia's MeeGo, and Maemo.
- Debug with the GNU and CDB debuggers through a graphical user interface that is aware of the structure of Qt classes.
- Use code analysis tools to check for memory-management issues in your applications.
- Deploy applications to MeeGo mobile devices, and create application installation packages for Symbian and Maemo devices that can be published in the Ovi Store and through other channels.
- Easily access information with the integrated, context-sensitive Qt Help system.
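For orientation, here is the kind of minimal Qt widget application that Qt Creator's project wizards produce; this is a sketch assuming Qt with the widgets module, and is not taken from the whitepaper itself:

    #include <QApplication>
    #include <QPushButton>

    // A minimal Qt widget application: one window containing a button.
    int main(int argc, char* argv[]) {
        QApplication app(argc, argv);   // manages the GUI event loop
        QPushButton button("Hello, Qt!");
        button.resize(200, 60);
        button.show();                  // display the widget
        return app.exec();              // run until the window is closed
    }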
Original Text

SMTP Service Extension for Authentication
This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.

Copyright Notice
Copyright (C) The Internet Society (1999). All Rights Reserved.

1. Introduction
This document defines an SMTP service extension [ESMTP] whereby an SMTP client may indicate an authentication mechanism to the server, perform an authentication protocol exchange, and optionally negotiate a security layer for subsequent protocol interactions. This extension is a profile of the Simple Authentication and Security Layer [SASL].

2. Conventions Used in this Document
In examples, "C:" and "S:" indicate lines sent by the client and server respectively. The key words "MUST", "MUST NOT", "SHOULD", "SHOULD NOT", and "MAY" in this document are to be interpreted as defined in "Key words for use in RFCs to Indicate Requirement Levels" [KEYWORDS].

3. The Authentication Service Extension
(1) The name of the SMTP service extension is "Authentication".
(2) The EHLO keyword value associated with this extension is "AUTH".
(3) The AUTH EHLO keyword contains as a parameter a space-separated list of the names of supported SASL mechanisms.
(4) A new SMTP verb "AUTH" is defined.
(5) An optional parameter using the keyword "AUTH" is added to the MAIL FROM command, and extends the maximum line length of the MAIL FROM command by 500 characters.
(6) This extension is appropriate for the submission protocol [SUBMIT].

4. The AUTH Command
AUTH mechanism [initial-response]

Arguments:
A string identifying a SASL authentication mechanism, and an optional base64-encoded response.

Restrictions:
After an AUTH command has successfully completed, no more AUTH commands may be issued in the same session. After a successful AUTH command completes, a server MUST reject any further AUTH commands with a 503 reply. The AUTH command is not permitted during a mail transaction.

Discussion:
The AUTH command indicates an authentication mechanism to the server. If the server supports the requested authentication mechanism, it performs an authentication protocol exchange to authenticate and identify the user. Optionally, it also negotiates a security layer for subsequent protocol interactions. If the requested authentication mechanism is not supported, the server rejects the AUTH command with a 504 reply.
The authentication protocol exchange consists of a series of server challenges and client answers that are specific to the authentication mechanism. A server challenge, otherwise known as a ready response, is a 334 reply with the text part containing a BASE64 encoded string. The client answer consists of a line containing a BASE64 encoded string. If the client wishes to cancel an authentication exchange, it issues a line with a single "*". If the server receives such an answer, it MUST reject the AUTH command by sending a 501 reply.
The optional initial-response argument to the AUTH command is used to save a round trip when using authentication mechanisms that are defined to send no data in the initial challenge. When the initial-response argument is used with such a mechanism, the initial empty challenge is not sent to the client and the server uses the data in the initial-response argument as if it were sent in response to the empty challenge.
Unlike a zero-length client answer to a 334 reply, a zero-length initial response is sent as a single equals sign ("="). If the client uses an initial-response argument to the AUTH command with a mechanism that sends data in the initial challenge, the server rejects the AUTH command with a 535 reply.
If the server cannot BASE64 decode the argument, it rejects the AUTH command with a 501 reply. If the server rejects the authentication data, it SHOULD reject the AUTH command with a 535 reply unless a more specific error code, such as one listed in section 6, is appropriate. Should the client successfully complete the authentication exchange, the SMTP server issues a 235 reply.
The service name specified by this protocol's profile of SASL is "smtp".
If a security layer is negotiated through the SASL authentication exchange, it takes effect immediately following the CRLF that concludes the authentication exchange for the client, and the CRLF of the success reply for the server. Upon a security layer's taking effect, the SMTP protocol is reset to the initial state (the state in SMTP after a server issues a 220 service ready greeting). The server MUST discard any knowledge obtained from the client, such as the argument to the EHLO command, which was not obtained from the SASL negotiation itself. The client MUST discard any knowledge obtained from the server, such as the list of SMTP service extensions, which was not obtained from the SASL negotiation itself (with the exception that a client MAY compare the list of advertised SASL mechanisms before and after authentication in order to detect an active down-negotiation attack). The client SHOULD send an EHLO command as the first command after a successful SASL negotiation which results in the enabling of a security layer.
The server is not required to support any particular authentication mechanism, nor are authentication mechanisms required to support any security layers. If an AUTH command fails, the client may try another authentication mechanism by issuing another AUTH command. If an AUTH command fails, the server MUST behave the same as if the client had not issued the AUTH command.
The BASE64 string may in general be arbitrarily long. Clients and servers MUST be able to support challenges and responses that are as long as are generated by the authentication mechanisms they support, independent of any line length limitations the client or server may have in other parts of its protocol implementation.

Examples:
S: 220 ESMTP server ready
C: EHLO
S: 250 AUTH CRAM-MD5 DIGEST-MD5
C: AUTH FOOBAR
S: 504 Unrecognized authentication type.
C: AUTH CRAM-MD5
S: 334 PENCeUxFREJoU0NnbmhNWitOMjNGNndAZWx3b29kLmlubm9zb2Z0LmNvbT4=
C: ZnJlZCA5ZTk1YWVlMDljNDBhZjJiODRhMGMyYjNiYmFlNzg2ZQ==
S: 235 Authentication successful.

5. The AUTH Parameter to the MAIL FROM Command
AUTH=addr-spec

Arguments:
An addr-spec containing the identity which submitted the message to the delivery system, or the two-character sequence "<>" indicating that such an identity is unknown or insufficiently authenticated. To comply with the restrictions imposed on ESMTP parameters, the addr-spec is encoded inside an xtext.
The syntax of an xtext is described in section 5 of [ESMTP-DSN].

Discussion:
The optional AUTH parameter to the MAIL FROM command allows cooperating agents in a trusted environment to communicate the authentication of individual messages.
If the server trusts the authenticated identity of the client to assert that the message was originally submitted by the supplied addr-spec, then the server SHOULD supply the same addr-spec in an AUTH parameter when relaying the message to any server which supports the AUTH extension.
A MAIL FROM parameter of AUTH=<> indicates that the original submitter of the message is not known. The server MUST NOT treat the message as having been originally submitted by the client.
If the AUTH parameter to the MAIL FROM is not supplied, the client has authenticated, and the server believes the message is an original submission by the client, the server MAY supply the client's identity in the addr-spec in an AUTH parameter when relaying the message to any server which supports the AUTH extension.
If the server does not sufficiently trust the authenticated identity of the client, or if the client is not authenticated, then the server MUST behave as if the AUTH=<> parameter was supplied. The server MAY, however, write the value of the AUTH parameter to a log file.
If an AUTH=<> parameter was supplied, either explicitly or due to the requirement in the previous paragraph, then the server MUST supply the AUTH=<> parameter when relaying the message to any server which it has authenticated to using the AUTH extension.
A server MAY treat expansion of a mailing list as a new submission, setting the AUTH parameter to the mailing list address or mailing list administration address when relaying the message to list subscribers.
It is conforming for an implementation to be hard-coded to treat all clients as being insufficiently trusted. In that case, the implementation does nothing more than parse and discard syntactically valid AUTH parameters to the MAIL FROM command and supply AUTH=<> parameters to any servers to which it authenticates using the AUTH extension.

Examples:
C: MAIL FROM:<e=mc2@> AUTH=e+3Dmc2@
S: 250 OK

6. Error Codes
The following error codes may be used to indicate various conditions as described.

432 A password transition is needed
This response to the AUTH command indicates that the user needs to transition to the selected authentication mechanism. This is typically done by authenticating once using the PLAIN authentication mechanism.

534 Authentication mechanism is too weak
This response to the AUTH command indicates that the selected authentication mechanism is weaker than server policy permits for that user.

538 Encryption required for requested authentication mechanism
This response to the AUTH command indicates that the selected authentication mechanism may only be used when the underlying SMTP connection is encrypted.

454 Temporary authentication failure
This response to the AUTH command indicates that the authentication failed due to a temporary server failure.

530 Authentication required
This response may be returned by any command other than AUTH, EHLO, HELO, NOOP, RSET, or QUIT. It indicates that server policy requires authentication in order to perform the requested action.
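The xtext encoding used by the AUTH parameter (its grammar appears in the formal syntax below) can be sketched as follows; this helper is illustrative and is not code from the specification. Characters outside the xchar range, such as "+" and "=", are written as "+" followed by two uppercase hex digits:

    #include <cstdio>
    #include <string>

    // Encode a string as xtext: printable US-ASCII passes through, while
    // "+", "=", SPACE, and control characters become "+" HEXDIGIT HEXDIGIT.
    std::string toXtext(const std::string& in) {
        std::string out;
        for (unsigned char c : in) {
            bool isXchar = c >= 0x21 && c <= 0x7E && c != '+' && c != '=';
            if (isXchar) {
                out += static_cast<char>(c);
            } else {
                char buf[4];
                std::snprintf(buf, sizeof buf, "+%02X", c);
                out += buf;
            }
        }
        return out;
    }

    int main() {
        // Reproduces the example above: "e=mc2@" becomes "e+3Dmc2@".
        std::printf("%s\n", toXtext("e=mc2@").c_str());
        return 0;
    }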
6. Error Codes

The following error codes may be used to indicate various conditions as described.

432 A password transition is needed
This response to the AUTH command indicates that the user needs to transition to the selected authentication mechanism. This is typically done by authenticating once using the PLAIN authentication mechanism.

534 Authentication mechanism is too weak
This response to the AUTH command indicates that the selected authentication mechanism is weaker than server policy permits for that user.

538 Encryption required for requested authentication mechanism
This response to the AUTH command indicates that the selected authentication mechanism may only be used when the underlying SMTP connection is encrypted.

454 Temporary authentication failure
This response to the AUTH command indicates that the authentication failed due to a temporary server failure.

530 Authentication required
This response may be returned by any command other than AUTH, EHLO, HELO, NOOP, RSET, or QUIT. It indicates that server policy requires authentication in order to perform the requested action.

7. Formal Syntax

The following syntax specification uses the augmented Backus-Naur Form (BNF) notation as specified in [ABNF].

Except as noted otherwise, all alphabetic characters are case-insensitive. The use of upper or lower case characters to define token strings is for editorial clarity only. Implementations MUST accept these strings in a case-insensitive fashion.

UPALPHA         = %x41-5A            ;; Uppercase: A-Z
LOALPHA         = %x61-7A            ;; Lowercase: a-z
ALPHA           = UPALPHA / LOALPHA  ;; case insensitive
DIGIT           = %x30-39            ;; Digits 0-9
HEXDIGIT        = %x41-46 / DIGIT    ;; hexadecimal digit (uppercase)
hexchar         = "+" HEXDIGIT HEXDIGIT
xchar           = %x21-2A / %x2C-3C / %x3E-7E
                  ;; US-ASCII except for "+", "=", SPACE and CTL
xtext           = *(xchar / hexchar)
AUTH_CHAR       = ALPHA / DIGIT / "-" / "_"
auth_type       = 1*20AUTH_CHAR
auth_command    = "AUTH" SPACE auth_type [SPACE (base64 / "=")]
                  *(CRLF [base64]) CRLF
auth_param      = "AUTH=" xtext
                  ;; The decoded form of the xtext MUST be either
                  ;; an addr-spec or the two characters "<>"
base64          = base64_terminal / ( 1*(4base64_char) [base64_terminal] )
base64_char     = UPALPHA / LOALPHA / DIGIT / "+" / "/"
                  ;; Case-sensitive
base64_terminal = (2base64_char "==") / (3base64_char "=")
continue_req    = "334" SPACE [base64] CRLF
CR              = %x0D               ;; ASCII CR, carriage return
CRLF            = CR LF
CTL             = %x00-1F / %x7F     ;; any ASCII control character and DEL
LF              = %x0A               ;; ASCII LF, line feed
SPACE           = %x20               ;; ASCII SP, space

8. References

[ABNF] Crocker, D. and P. Overell, "Augmented BNF for Syntax Specifications: ABNF", RFC 2234, November 1997.
[CRAM-MD5] Klensin, J., Catoe, R. and P. Krumviede, "IMAP/POP AUTHorize Extension for Simple Challenge/Response", RFC 2195, September 1997.
[ESMTP] Klensin, J., Freed, N., Rose, M., Stefferud, E. and D. Crocker, "SMTP Service Extensions", RFC 1869, November 1995.
[ESMTP-DSN] Moore, K., "SMTP Service Extension for Delivery Status Notifications", RFC 1891, January 1996.
[KEYWORDS] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[SASL] Myers, J., "Simple Authentication and Security Layer (SASL)", RFC 2222, October 1997.
[SUBMIT] Gellens, R. and J. Klensin, "Message Submission", RFC 2476, December 1998.
[RFC821] Postel, J., "Simple Mail Transfer Protocol", STD 10, RFC 821, August 1982.
[RFC822] Crocker, D., "Standard for the Format of ARPA Internet Text Messages", STD 11, RFC 822, August 1982.

9. Security Considerations

Security issues are discussed throughout this memo.

If a client uses this extension to get an encrypted tunnel through an insecure network to a cooperating server, it needs to be configured to never send mail to that server when the connection is not mutually authenticated and encrypted. Otherwise, an attacker could steal the client's mail by hijacking the SMTP connection and either pretending the server does not support the Authentication extension or causing all AUTH commands to fail.

Before the SASL negotiation has begun, any protocol interactions are performed in the clear and may be modified by an active attacker. For this reason, clients and servers MUST discard any knowledge obtained prior to the start of the SASL negotiation upon completion of a SASL negotiation which results in a security layer.

This mechanism does not protect the TCP port, so an active attacker may redirect a relay connection attempt to the submission port [SUBMIT]. The AUTH=<> parameter prevents such an attack from causing a relayed message without an envelope authentication to pick up the authentication of the relay client.

A message submission client may require the user to authenticate whenever a suitable SASL mechanism is advertised.
Therefore, it may not be desirable for a submission server [SUBMIT] to advertise a SASL mechanism when use of that mechanism grants the client no benefits over anonymous submission.

This extension is not intended to replace or be used instead of end-to-end message signature and encryption systems such as S/MIME or PGP. This extension addresses a different problem than end-to-end systems; it has the following key differences:

(1) it is generally useful only within a trusted enclave;
(2) it protects the entire envelope of a message, not just the message's body;
(3) it authenticates the message submission, not authorship of the message content;
(4) it can give the sender some assurance the message was delivered to the next hop in the case where the sender mutually authenticates with the next hop and negotiates an appropriate security layer.

Additional security considerations are mentioned in the SASL specification [SASL].

Translation: An Authentication Mechanism as an SMTP Service Extension

This document specifies an Internet community standards track protocol, and requests discussion and suggestions for its improvement.
Original text: Database

1.1 Database concept

The database concept has evolved since the 1960s to ease increasing difficulties in designing, building, and maintaining complex information systems (typically with many concurrent end-users, and with a large amount of diverse data). It has evolved together with database management systems (DBMSs), which enable the effective handling of databases. Though the terms database and DBMS define different entities, they are inseparable: a database's properties are determined by its supporting DBMS and vice-versa. The Oxford English Dictionary cites a 1962 technical report as the first to use the term "data-base". With the progress in technology in the areas of processors, computer memory, computer storage, and computer networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown by orders of magnitude. For decades it has been unlikely that a complex information system can be built effectively without a proper database supported by a DBMS. The utilization of databases is now so widespread that virtually every technology and product relies on databases and DBMSs for its development and commercialization, or may even have them embedded in it. Organizations and companies, from small to large, also depend heavily on databases for their operations.

No widely accepted exact definition exists for DBMS. However, a system needs to provide considerable functionality to qualify as a DBMS. Accordingly, its supported data collection needs to meet respective usability requirements (broadly defined by the requirements below) to qualify as a database. Thus, a database and its supporting DBMS are defined here by a set of general requirements listed below. Virtually all existing mature DBMS products meet these requirements to a great extent, while less mature ones either meet them or converge to meet them.

1.2 Evolution of database and DBMS technology

The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. In the earliest database systems, efficiency was perhaps the primary concern, but it was already recognized that there were other important objectives. One of the key aims was to make the data independent of the logic of application programs, so that the same data could be made available to different applications.

The first generation of database systems were navigational [2]: applications typically accessed data by following pointers from one record to another. The two main data models at this time were the hierarchical model, epitomized by IBM's IMS system, and the Codasyl model (network model), implemented in a number of products such as IDMS.

The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. This was considered necessary to allow the content of the database to evolve without constant rewriting of applications. Relational systems placed heavy demands on processing resources, and it was not until the mid-1980s that computing hardware became powerful enough to allow them to be widely deployed. By the early 1990s, however, relational systems were dominant for all large-scale data processing applications, and they remain dominant today (2012) except in niche areas.
The dominant database language is standard SQL for the relational model, which has also influenced database languages for other data models.

Because the relational model emphasizes search rather than navigation, it does not make relationships between different entities explicit in the form of pointers, but represents them instead using primary keys and foreign keys. While this is a good basis for a query language, it is less well suited as a modeling language. For this reason a different model, the entity-relationship model, which emerged shortly afterwards (1976), gained popularity for database design.
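The key-based representation described above can be made concrete in a few lines. A minimal sketch using Python's built-in sqlite3 module, with invented table and column names: the employee-to-department relationship exists only as matching key values and is reconstructed by a join at query time, rather than by following a stored pointer.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employee (
        id      INTEGER PRIMARY KEY,
        name    TEXT,
        dept_id INTEGER REFERENCES department(id)  -- a key value, not a pointer
    );
    INSERT INTO department VALUES (1, 'Research');
    INSERT INTO employee VALUES (10, 'Codd', 1);
""")

# The relationship is reconstructed at query time by matching key values.
for emp, dept in con.execute("""
    SELECT employee.name, department.name
    FROM employee JOIN department ON employee.dept_id = department.id
"""):
    print(emp, "works in", dept)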
In the period since the 1970s, database technology has kept pace with the increasing resources becoming available from the computing platform: notably the rapid increase in the capacity and speed (and reduction in price) of disk storage, and the increasing capacity of main memory. This has enabled ever larger databases and higher throughputs to be achieved.

The rigidity of the relational model, in which all data is held in tables with a fixed structure of rows and columns, has increasingly been seen as a limitation when handling information that is richer or more varied in structure than the traditional 'ledger-book' data of corporate information systems: for example, document databases, engineering databases, multimedia databases, or databases used in the molecular sciences. Various attempts have been made to address this problem, many of them gathering under banners such as post-relational or NoSQL. Two developments of note are the object database and the XML database. The vendors of relational databases have fought off competition from these newer models by extending the capabilities of their own products to support a wider variety of data types.

1.3 General-purpose DBMS

A DBMS has evolved into a complex software system, and its development typically requires thousands of person-years of development effort. Some general-purpose DBMSs, like Oracle, Microsoft SQL Server, and IBM DB2, have been undergoing upgrades for thirty years or more. General-purpose DBMSs aim to satisfy as many applications as possible, which typically makes them even more complex than special-purpose databases. However, the fact that they can be used "off the shelf", as well as their cost being amortized over many applications and instances, makes them an attractive alternative (vs. one-time development) whenever they meet an application's requirements.

Though attractive in many cases, a general-purpose DBMS is not always the optimal solution: when certain applications are pervasive, with many operating instances, each with many users, a general-purpose DBMS may introduce unnecessary overhead and too large a "footprint" (too large an amount of unnecessary, unutilized software code). Such applications usually justify dedicated development. Typical examples are email systems, though they need to possess certain DBMS properties: email systems are built in a way that optimizes the handling and management of email messages, and they do not need significant portions of general-purpose DBMS functionality.

1.4 Database machines and appliances

In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at lower cost. Examples were the IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine. Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).

1.5 Database research

Database research has been an active and diverse area, with many specializations, carried out since the early days of dealing with the database concept in the 1960s. It has strong ties with database technology and DBMS products. Database research has taken place at the research and development groups of companies (e.g., notably at IBM Research, which has contributed technologies and ideas to virtually every DBMS existing today), at research institutes, and in academia. Research has been done through both theory and prototypes. The interaction between research and database-related product development has been very productive for the database area, and many related key concepts and technologies emerged from it. Notable examples are the relational and entity-relationship models, the atomic transaction concept and related concurrency control techniques, query languages and query optimization methods, RAID, and more. Research has provided deep insight into virtually all aspects of databases, though it has not always been pragmatic or effective (and cannot and should not always be: research is exploratory in nature and does not always lead to accepted or useful ideas). Ultimately, market forces and real needs determine which problem solutions and related technologies are selected, including among those proposed by research; however, occasionally it is not the best and most elegant solution that wins (e.g., SQL). Throughout their history, DBMSs and their databases have to a great extent been the outcome of such research, while real product requirements and challenges have triggered database research directions and sub-areas.

The database research area has several notable dedicated academic journals (e.g., ACM Transactions on Database Systems - TODS, Data and Knowledge Engineering - DKE, and more) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE, and more), as well as an active and quite heterogeneous (subject-wise) research community all over the world.

1.6 Database architecture

Database architecture (to be distinguished from DBMS architecture; see below) may be viewed, to some extent, as an extension of data modeling. It is used to conveniently answer the requirements of different end-users of the same database, as well as for other benefits. For example, the finance department of a company needs the payment details of all employees as part of the company's expenses, but not the many other details about employees that are of interest to the human resources department. Thus different departments need different views of the company's database; both views include the employees' payments, but possibly at a different level of detail (and presented in different visual forms). To meet such requirements effectively, database architecture consists of three levels: external, conceptual, and internal.
Clearly separating the three levels was a major feature of the relational database model implementations that dominate 21st century databases [13].

The external level defines how each end-user type understands the organization of its respective relevant data in the database, i.e., the different needed end-user views. A single database can have any number of views at the external level.

The conceptual level unifies the various external views into a coherent, global whole [13]. It provides the common denominator of all the external views. It comprises all the generic data needed by end-users, i.e., all the data from which any view may be derived or computed. It is expressed in the simplest possible form of such generic data, and constitutes the backbone of the database. It is outside the scope of the various database end-users; it serves database application developers and is defined by the database administrators who build the database.

The internal level (or physical level) is in fact part of the database implementation inside a DBMS (see the Implementation section below). It is concerned with cost, performance, scalability, and other operational matters. It deals with the storage layout of the conceptual level, provides supporting storage structures such as indexes to enhance performance, and occasionally stores data of individual views (materialized views), computed from generic data, if a performance justification exists for such redundancy. It balances all the external views' performance requirements, which may conflict, in an attempt to optimize the overall database usage by all its end-users according to the database's goals and priorities.

All three levels are maintained and updated according to changing needs by database administrators, who often also participate in the database design.

The above three-level database architecture also relates to, and is motivated by, the concept of data independence, which has long been described as a desired database property and was one of the major initial driving forces of the relational model. In the context of this architecture it means that changes made at a certain level do not affect definitions and software developed against higher-level interfaces, and are incorporated at the higher level automatically. For example, changes in the internal level do not affect application programs written using conceptual-level interfaces, which saves substantial change work that would otherwise be needed.

In summary, the conceptual level is a level of indirection between internal and external. On the one hand it provides a common view of the database, independent of the different external view structures, and on the other hand it is uncomplicated by details of how the data is stored or managed (the internal level). In principle every level, and even every external view, can be presented by a different data model. In practice a given DBMS usually uses the same data model for both the external and the conceptual levels (e.g., the relational model).
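The finance/human-resources example above maps directly onto relational views. A minimal sketch using Python's sqlite3 module, with invented names: the base table plays the role of the conceptual level, and each view is an external-level window that exposes only the columns a given department needs.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Conceptual level: one generic employee relation.
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY, name TEXT, salary NUMERIC, address TEXT
    );
    INSERT INTO employee VALUES (1, 'Ada', 5000, '1 Elm St.');

    -- External level: each department gets its own window on the same data.
    CREATE VIEW finance_view AS SELECT id, salary        FROM employee;
    CREATE VIEW hr_view      AS SELECT id, name, address FROM employee;
""")

print(con.execute("SELECT * FROM finance_view").fetchall())  # [(1, 5000)]
print(con.execute("SELECT * FROM hr_view").fetchall())       # [(1, 'Ada', '1 Elm St.')]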
The internal level, which is hidden inside the DBMS and depends on its implementation (see the Implementation section below), requires a different level of detail and uses its own data structure types, typically different in nature from the structures of the external and conceptual levels which are exposed to DBMS users (e.g., the data models above): while the external and conceptual levels are focused on and serve DBMS users, the concern of the internal level is effective implementation details.

Chinese translation: Database

1.1 Database concept

The database concept has evolved since the 1960s to ease the increasing difficulty of designing, building, and maintaining complex information systems (typically with many concurrent end-users and with a large amount of diverse data).
Cloud computing foreign-literature translation references (the document contains both the English original and the Chinese translation).

Original text: Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. (We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.) For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing sensitive information and data as well, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30], focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around the isolation of multi-tenant platforms [12], the security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10], contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet they lack practical implementations [24], [37], [23]. Traditional computer forensics already has well-researched methods for various fields of application [4], [5], [6], [11], [13]. The aspects of forensics in virtual systems have also been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40], after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherently strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing, where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report created in this phase is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results and is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real-world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.

B. Cloud Computing
According to NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used:

In the Infrastructure as a Service (IaaS) model, the customer uses the virtual machine provided by the CSP to install his own system on it. The system can be used like any other physical computer, with a few limitations. However, the added customer power over the system comes along with additional security obligations.

Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. This service model can accelerate the software development process.

In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence.

Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for one organization, but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently of the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, the data pushed into the applications, and the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults

Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and pose a constant threat to customers and CSP. In this model, a malicious CSP is also included, albeit assumed to be rare in real-world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults

Inconsistencies in technical systems or processes in the cloud do not necessarily have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e., loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong interest in discovering the reasons and deploying corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system, but the data is still accessible, since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators, who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another; e.g., a typical file transfer over a network can be seen as a data-in-motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices, which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources of evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environments, as well as suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between the different parties involved. The browser on the client, often acting as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently of the model used, the following three components could act as sources of potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where, e.g., data is stored or processes are handled, contains potential evidence [2], [3]. In most cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or they can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several kinds of information on protocols and communication between instances within, as well as with instances outside, the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However, in practice, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information, and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in the case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it completely depends on the model used (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications, which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers make heavy use of third-party extensions [17], which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to, e.g., unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to answer the above questions, even if the private mode is switched on [19].

B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage.
Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is questionable whether the preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the mentioned problems.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In a lot of cases this forces the investigator to rely on high-level logs, which the CSP may or may not provide. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidents.

a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it poses a huge problem specifically for SaaS-based applications: currently, globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have a tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary, or even by the CSP, e.g., for storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation, but have not been practically implemented by CSP [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in case of an investigation.
Furthermore, due to the limited ability to receive forensic information from the server and to prove the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities, etc. CSP normally claim that this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial, not only for recovering from an incident. Forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than the PaaS and SaaS models do. This fact is caused by the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP.
The CSP controls the hypervisor, which is, e.g., responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how customers' VMs communicate with the hardware and can theoretically intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore lead to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that …
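The signing suggestion made above for PaaS and IaaS log shipping can be sketched with Python's standard library. This is a minimal illustration only: the key name, record format, and transport are assumptions, and a real deployment would provision the key outside the instance and push entries append-only to a logging host under the customer's control.

import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"per-instance-secret"  # assumed to be provisioned outside the instance

def signed_log_entry(message: str) -> str:
    """Attach an HMAC to a log record before it leaves the instance,
    so tampering on the way to the central log server is detectable."""
    payload = json.dumps({"ts": time.time(), "msg": message}, sort_keys=True)
    mac = hmac.new(SIGNING_KEY, payload.encode("utf-8"), hashlib.sha256).digest()
    return json.dumps({"payload": payload, "mac": base64.b64encode(mac).decode("ascii")})

# Would be pushed append-only to a logging host under the customer's control.
print(signed_log_entry("admin login from 10.0.0.5"))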
Chinese-English foreign-literature translation: Computer Networks

A computer network, often simply called a network, is a collection of computers and devices interconnected by communication channels that facilitate communication among users and allow users to share resources. Networks may be classified according to a wide variety of characteristics. A computer network allows resources and information to be shared among interconnected devices.

I. History

Early computer network communication began in the late 1950s and included the military radar system (the Semi-Automatic Ground Environment) and the related commercial airline reservation system (the Semi-Automatic Business Research Environment). In 1957, Russia launched an artificial satellite into space. Eighteen months later, the United States established the Advanced Research Projects Agency (ARPA) and launched its first artificial satellite. The information was then shared with another computer on the ARPANET. The person responsible for all this was the American scientist Dr. Licklider. The ARPANET came into operation in 1969, and its name later gave way to the Internet. In the 1960s, the Advanced Research Projects Agency (ARPA) began funding and designing the ARPANET for the U.S. Department of Defense. The development of the Internet began in 1969, building on design work begun in the 1960s; on this basis, the ARPANET evolved into the modern Internet.

II. Purpose

Computer networks can be used for a variety of purposes:

Facilitating communication: using a network, people can communicate easily via e-mail, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.

Sharing hardware: in a networked environment, each computer can access and use hardware resources on the network, for example printing a document on a shared network printer.

Sharing files, data, and information: in a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.

Sharing software: users connected to a network may run application programs on remote computers.

Information preservation.

Security assurance.

III. Network classification

The following list shows categories used for classifying networks.

3.1 Connection method

Computer networks can be classified according to the hardware and software technology used to interconnect the individual devices, such as optical fiber, LAN, wireless LAN, home networking devices, cable communication, and G.hn (the wired home-networking standard). Ethernet, as defined by the IEEE 802 standards, uses various media to enable communication between devices. Frequently deployed devices include network hubs, switches, bridges, and routers. Wireless LAN technology uses wireless devices for connection.
Appendix 1: Foreign references (translation)

JSP built-in objects

Some objects can be used in the Java scriptlets and expressions of a JSP page without being declared; these are the JSP built-in objects. The JSP built-in objects are: request, response, session, application, and out. The response and request objects are two of the more important built-in objects; they provide control over the communication between the server and the browser. Before discussing these two objects directly, a brief introduction to HTTP, the protocol underlying the World Wide Web, is needed.

How does the World Wide Web work? After typing a valid URL into the browser, if all goes well, the web page appears. When a browser fetches an HTML page from a website, it is actually using the Hypertext Transfer Protocol (HTTP). HTTP specifies how information is transmitted over the Internet and, in particular, how browsers and servers interact. To fetch a page, the browser opens a connection to the site's web server and issues a request. The server responds after receiving the request, so the core of HTTP is "request and response".

A typical request usually contains a number of headers, called the request's HTTP headers. Headers provide additional information about the message body and about the origin of the request. Some of these headers are standard; others are specific to particular browsers. A request may also contain a message body; for example, the body may contain the contents of an HTML form. When the Submit button on an HTML form is clicked, the form is submitted using the METHOD="POST" or METHOD="GET" method, and the contents entered into the form are sent to the server. The form contents are then sent in the request by the POST or GET method.

When the server receives the request, it returns an HTTP response. A response also has a certain structure: each response begins with a status line and may contain several headers and possibly a message body, called the response's HTTP headers and response body. The server sends these headers and the body to the client's browser; the body is the result of running the page the client requested (for a JSP page, this is the page's static output). Users may already be familiar with the status line, which states the protocol in use, a status code, and a text message. For example, if a request fails on the server, the status line returns the error and a description of it, such as HTTP/1.1 404 Object Not Found.
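The request/response cycle described above can be made concrete with a raw TCP socket. A minimal Python sketch (the host is a placeholder): it sends a request consisting of a request line and headers terminated by an empty line, then prints the status line and response headers that come back.

import socket

HOST = "www.example.com"  # placeholder server

with socket.create_connection((HOST, 80)) as sock:
    # Request line, then headers, then an empty line end the request head.
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

head, _, body = response.partition(b"\r\n\r\n")
print(head.decode("iso-8859-1"))  # status line and response headers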
Translation and original of foreign material
School (Department): School of Computer Science
Major: Computer Science and Technology
Class: 2401102
Student ID: 20023011059
Name:
Supervisor:
June 2005

Introduction

There are many good sources of advice on how to tune database systems and applications. Articles such as "DB2 Tuning Tips for OLTP Applications" (previously published on the IBM® DB2® Developer Domain) give advice on everything from table space and index design to memory allocation for buffer pools, through the use of transaction and data parallelism and the analysis of query plans. These topics are the basics of performance tuning. However, specific advice on how to organize the logic within stored procedures themselves, with an eye to their performance, is much less common. This article provides exactly that kind of advice. Although it focuses on SQL procedures, most of the information provided here applies equally to SQL logic embedded in applications or in stored procedures written in other languages.

Background and terminology

Before diving into the details, let us review some basic terms and concepts concerning procedural SQL in DB2. Procedural SQL constructs (such as scalar variables, IF statements, and WHILE loops) were introduced into DB2 in the DB2 Universal Database™ (UDB) V7 release. Earlier DB2 releases supported C and Java™ as languages for stored procedures. V7 introduced SQL stored procedures, along with many other features that facilitate the development of OLTP applications (such as temporary tables, application savepoints, and identity columns). When an SQL procedure is created, DB2 separates the SQL queries in the procedure body from the procedural logic. For optimal performance, the SQL queries are statically compiled into sections within a package. (For a statically compiled query, a section consists mainly of the access plan chosen for that query by the DB2 optimizer. A package is a collection of sections.) During execution of the procedure, every time control flows from the procedural logic to an SQL statement, there is a "context switch" between the DLL and the DB2 engine. (In DB2 V8, SQL procedures run in "unfenced mode", that is, in the same address space as the DB2 engine. So the context switch we refer to here is not a full context switch at the operating-system level, but rather a change of layer within DB2.)
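As a concrete illustration of these constructs, the sketch below creates a small SQL procedure through IBM's ibm_db Python driver. The connection string, table, and column names are assumptions for illustration. The SELECT and UPDATE inside the body are the statements DB2 compiles statically into package sections; each time control passes from the IF logic to one of them at run time, the layer change described above occurs.

import ibm_db  # assumption: the IBM ibm_db driver is installed

DDL = """
CREATE PROCEDURE raise_salary (IN p_id INT, IN p_pct DECIMAL(5,2))
LANGUAGE SQL
BEGIN
  DECLARE v_salary DECIMAL(9,2);
  -- The statements below are compiled statically into package sections;
  -- reaching one of them from the IF logic at run time causes the layer
  -- change ("context switch") described above.
  SELECT salary INTO v_salary FROM staff WHERE id = p_id;
  IF v_salary IS NOT NULL THEN
    UPDATE staff SET salary = v_salary * (1 + p_pct / 100) WHERE id = p_id;
  END IF;
END
"""

conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret", "", "")
ibm_db.exec_immediate(conn, DDL)
ibm_db.exec_immediate(conn, "CALL raise_salary(10, 5.00)")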
Software engineering graduation project foreign-literature translation (1000 words)

This article translates foreign literature for software engineering graduation projects and can serve as a reference for students.
Foreign literature 1: Software Engineering Practices in Industry: A Case Study

Abstract

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction

Software engineering is the discipline of designing, developing, testing, and maintaining software products. There are a number of software engineering practices that are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demands for high-quality software products. The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology

The case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews were conducted with the company's software development managers, software engineers, and testers to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the software engineering practices used by the company in relation to software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings

The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI).
The company's software development process consists of five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process. The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The software engineering practices used by the company include:

Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.

Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.

Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.

Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.

Conclusion

This paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract

Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices. The paper also discusses the benefits and challenges of agile software development.

Introduction

Agile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change.
Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles
Agile software development is based on the following principles:
Customer satisfaction through early and continuous delivery of useful software.
Welcome changing requirements, even late in development; agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, with a preference for the shorter timescale.
Collaboration between the business stakeholders and developers throughout the project.
Build projects around motivated individuals; give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development; the sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity, the art of maximizing the amount of work not done, is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns
Agile software development patterns are reusable solutions to common software development problems. The following are some typical agile software development patterns:
The Single Responsibility Principle (SRP)
The Open/Closed Principle (OCP)
The Liskov Substitution Principle (LSP)
The Dependency Inversion Principle (DIP)
The Interface Segregation Principle (ISP)
The Model-View-Controller (MVC) Pattern
The Observer Pattern
The Strategy Pattern
The Factory Method Pattern

Agile Software Development Practices
Agile software development practices are a set of activities and techniques used in agile software development. The following are some typical agile software development practices:
Iterative Development
Test-Driven Development (TDD)
Continuous Integration
Refactoring
Pair Programming

Agile Software Development Benefits and Challenges
Agile software development has many benefits, including increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. It also has some challenges: it requires discipline and training, an experienced team, good communication, and a supportive management culture.

Conclusion
Agile software development is a set of values, principles, and practices for developing software, based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile practices include iterative development, test-driven development, continuous integration, and frequent releases. The benefits of agile software development include increased customer satisfaction, quality, productivity, flexibility, and visibility, and reduced risk; the challenges include the need for discipline and training, an experienced team, good communication, and a supportive management culture.
Web Server for Embedded Systems

After the "everybody in the Internet" wave, an "everything in the Internet" wave is now clearly following. Most coffee machines, vending machines, and washing machines are still not reachable over the worldwide net. However, embedded Internet integration for remote maintenance and diagnostics, as well as so-called M2M communication, is growing at a considerable rate.

Remote maintenance and diagnostics of components and systems through Web browsers, via the Internet or a local intranet, already carries great weight in many development projects. In numerous development departments, people are working on completely Web-based configuration and services for embedded systems. The remaining days of the classic user interface, a small LC display with a front panel and a few function keys, are numbered. Future developments in the mobile Internet, Bluetooth-based PANs (Personal Area Networks), and the rapidly growing M2M (machine-to-machine) communication promise further innovative advances.

The central functional unit for accessing an embedded system via a Web browser is the Web server. Such Web servers deliver the desired HTML pages (HTML = HyperText Markup Language) and pictures over the worldwide Internet or a local network to the Web browser. This happens over HTTP (HyperText Transfer Protocol). A TCP/IP protocol stack, which means the communication is based on sophisticated and established standards, manages the entire exchange. Web server (HTTP server) and browser (HTTP client) are both TCP/IP applications. HTTP has achieved a phenomenal distribution in recent years; millions of users around the world surf the HTTP-based World Wide Web, and today almost every personal computer offers the necessary support for this protocol. The same is increasingly true of embedded systems: HTTP is spreading there at a fast rate too.

1. TCP/IP-based HTTP as Communication Platform

HTTP is a simple protocol that sits on top of a TCP/IP protocol stack (figure 1.A). HTTP uses TCP (Transmission Control Protocol), a relatively complex, high-quality protocol for transferring data over the subordinate IP protocol. Through an extensive three-way-handshake procedure, TCP always guarantees a safeguarded connection between the two communication partners, so data transfer via HTTP is always reliable. Because of the extensive TCP protocol mechanisms, however, HTTP offers only modest performance.

Figure 1: TCP/IP stack and HTTP programming model

HTTP is based on a simple client/server concept. HTTP server and client communicate over a TCP connection; port number 80 is used as the default TCP port. The server is completely passive: it waits for a request (order) from a client. The request normally asks for the transmission of specific HTML documents, which may have to be generated dynamically by CGI. As a result of the request, the server answers with a response that usually contains the desired HTML documents, among other things (figure 1.B).

GET /test.htm HTTP/1.1
Accept: image/gif, image/jpeg, */*
User-Agent: Mozilla/4.0
Host: 192.168.0.1

Listing 1.A: HTTP GET-request

HTTP/1.1 200 OK
Date: Mon, 06 Dec 1999 20:55:12 GMT
Server: Apache/1.3.6 (Linux)
Content-length: 82
Content-type: text/html

<html>
<head><title>Test-Seite</title></head>
<body>
Test-Seite
</body>
</html>

Listing 1.B: HTTP response as result of the GET-request from listing 1.A
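For illustration, the request of Listing 1.A can be produced by a very small client program. The sketch below assumes a POSIX socket API and reuses the address and request lines from the listings; it opens a TCP connection to port 80, sends the GET-request, and prints whatever response arrives:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* The request from Listing 1.A; HTTP lines end with CRLF,
       and a blank line terminates the request. */
    const char *request =
        "GET /test.htm HTTP/1.1\r\n"
        "Accept: image/gif, image/jpeg, */*\r\n"
        "User-Agent: Mozilla/4.0\r\n"
        "Host: 192.168.0.1\r\n"
        "\r\n";

    int sock = socket(AF_INET, SOCK_STREAM, 0);    /* TCP, as HTTP requires */
    struct sockaddr_in server = { 0 };
    server.sin_family = AF_INET;
    server.sin_port = htons(80);                   /* default HTTP port */
    inet_pton(AF_INET, "192.168.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0)
        return 1;
    write(sock, request, strlen(request));

    /* Print the response (header plus content object) until the
       server closes the connection. */
    char buf[1024];
    ssize_t n;
    while ((n = read(sock, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(sock);
    return 0;
}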
HTTP requests normally consist of several lines of text, which are transmitted to the server over TCP. Listing 1.A shows an example. The first line identifies the request type (GET), the requested object (/test.htm), and the HTTP version used (HTTP/1.1). In the second request line the client tells the server which kinds of files it is able to process. The third line contains information about the client software, and the fourth and last line of the request in listing 1.A informs the server about the IP address of the client. Depending on the type of request and the client software used, further lines may follow. The request is terminated by a blank line.

HTTP responses mostly consist of two parts. First comes a header of individual text lines; then follows an optional content object. The content object may consist of text lines, in the case of an HTML file, or of binary data when a GIF or JPEG image is to be transferred. The first line of the header is especially important: it serves as status or error message. If an error occurs, only the header, or part of it, is transmitted as the answer.

2. Functional principle of a Web Server

In simplified terms, a Web server can be thought of as a special kind of file server. Figure 2.A shows an overview. The Web server receives an HTTP GET-request from the Web browser, asking for a specific file (step 1 in figure 2.A). The Web server then accesses the file system of its computer and attempts to find the desired file (step 2). After a successful search, the Web server reads the entire file (step 3) and transmits it as an answer (an HTTP response comprising header and content object) to the Web browser (step 4). If the Web server cannot find the requested file in the file system, an error message (an HTTP response containing only the header) is simply sent as the response to the client.

Figure 2: Functional principle from Web server and browser

The Web content is made up of individual files. The basis is formed by static files containing HTML pages. Embedded within such HTML files are references to further files, typically pictures in GIF or JPEG format, though references to other objects, for example Java applets, are also possible. After a Web browser has received an HTML file from a Web server, the file is evaluated and searched for external references. Steps 1 to 4 of figure 2.A then run again for every external reference, in order to request the respective file from the corresponding Web server. Note that such a reference consists of the name or IP address of a Web server (e.g. "") as well as the name of the desired file (e.g. "picture1.gif"), so virtually every reference can point to a different Web server. In other words, an HTML file could be located on the server "ssv-embedded.de" while the picture it references externally is located on the Web server "". This (worldwide) networking of separate objects is ultimately the reason for the name World Wide Web (WWW). All files held by a Web server are requested by browsers following the procedure shown in figure 2.A. Normally these files are stored in the file system of the server, and the Webmaster has to update them from time to time.
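Steps 1 to 4 can be condensed into surprisingly little code. The following sketch, assuming a POSIX environment, implements the core of such a file server: it accepts one connection at a time, extracts the file name from the GET-request, and answers with either the file content or a header-only error response (port 80 normally requires elevated privileges, so the example listens on 8080):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int server = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                    /* 80 needs privileges; 8080 for testing */
    bind(server, (struct sockaddr *)&addr, sizeof(addr));
    listen(server, 4);

    for (;;) {                                      /* serve one request after another */
        int client = accept(server, NULL, NULL);    /* step 1: receive the request */
        char req[1024] = { 0 };
        read(client, req, sizeof(req) - 1);

        char path[256] = "index.html";              /* fallback if no name follows "/" */
        sscanf(req, "GET /%255[^ ]", path);         /* crude extraction of the file name */

        FILE *f = fopen(path, "rb");                /* step 2: look for the file */
        if (f) {
            fseek(f, 0, SEEK_END);
            long len = ftell(f);
            fseek(f, 0, SEEK_SET);
            char header[128];
            snprintf(header, sizeof(header),
                     "HTTP/1.0 200 OK\r\nContent-Length: %ld\r\n\r\n", len);
            write(client, header, strlen(header));
            char buf[512];
            size_t n;
            while ((n = fread(buf, 1, sizeof(buf), f)) > 0)   /* step 3: read the file */
                write(client, buf, n);                        /* step 4: send the response */
            fclose(f);
        } else {
            const char *err = "HTTP/1.0 404 Not Found\r\n\r\n";   /* header-only error */
            write(client, err, strlen(err));
        }
        close(client);
    }
}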
A further elementary functionality of a Web server is the Common Gateway Interface (CGI) mentioned before. Originally this technology was made only for simple forms embedded into HTML pages. The data resulting from filling in a form is transmitted to the Web server via an HTTP GET- or POST-request (step 1 in figure 2.B). Such a GET- or POST-request always includes the name of the CGI program that is needed to evaluate the form. This program has to reside on the Web server; normally the directory "/cgi-bin" is used as its storage location. As a result of the GET- or POST-request, the Web server starts the CGI program located in the subdirectory "/cgi-bin" and delivers the received data in the form of parameters (step 2). The output of the CGI program is passed to the Web server (step 3), which then sends it as the response to the Web browser (step 4).

3. Dynamic generated HTML Pages

In contrast to a company Web site, which informs people about the product program and services through static pages and pictures, an embedded Web server has to supply dynamically generated content. The embedded Web server generates the dynamic pages at the moment a browser first accesses them. How else could we check the current temperature of a system via the Internet? Static HTML files are of little interest for an embedded Web server: at most, information such as the firmware version and service instructions is stored in HTML format, while all other tasks are normally handled through dynamically generated HTML.

There are two different technologies for generating a specific HTML page at the moment of the request: first, so-called server-side scripting, and second, CGI programming. With server-side scripting, script code is embedded into an HTML page. If required, this code is executed on the server (server-side). Numerous script languages are available for this purpose, all usable inside an HTML page. In the Linux community PHP is used most; Microsoft's favourite is VBScript. It is also possible to insert Java directly into HTML pages; Sun has named this technology JSP (JavaServer Pages). The HTML page with the script code is stored statically in the file system of the Web server. Before such a file is delivered to the client, a special program replaces the entire script code with dynamically generated standard HTML; the Web browser never sees any of the script language.

Figure 3: Single steps of the Server-Side-Scripting

Figure 3 shows the individual steps of server-side scripting. In step 1 the Web browser requests a specific HTML file via an HTTP GET-request. The Web server recognizes the specific extension of the desired file (for example *.ASP or *.PHP instead of *.HTM or *.HTML) and starts a so-called scripting engine (step 2). This program fetches the desired HTML file, including the script code, from the file system (step 3), executes the script code, and produces a new HTML file without script code (step 4); the embedded script code is replaced by dynamically generated HTML. The new HTML file is read by the Web server (step 5) and sent to the Web browser (step 6). If server-side scripting is to be used on an embedded Web server, the necessary additional resources have to be considered. A simple example: in order to execute PHP code embedded in an HTML page, additional program modules are necessary on the server. A scripting engine has to be stored together with the embedded Web server in the flash memory chip of the embedded system, and more main memory is required at run time.
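The CGI variant is leaner. As an illustration, a minimal CGI program in C might look as follows: it writes an HTTP content-type header followed by dynamically generated HTML to standard output, where the Web server picks it up. The temperature function is a hypothetical stand-in for a real sensor reading:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sensor access; a real system would read hardware here. */
static double read_temperature(void)
{
    return 23.5;
}

int main(void)
{
    /* For forms sent with GET, the server passes the data in QUERY_STRING. */
    const char *query = getenv("QUERY_STRING");

    /* First the header: it tells server and browser what content follows.
       A blank line separates it from the content object. */
    printf("Content-type: text/html\r\n\r\n");

    printf("<html><body>\n");
    printf("<p>Current temperature: %.1f C</p>\n", read_temperature());
    if (query && query[0] != '\0')
        printf("<p>Received form data: %s</p>\n", query);
    printf("</body></html>\n");
    return 0;
}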
4. Web Server running under Linux

When Web servers are mentioned in connection with Linux, most people immediately think of Apache. According to the Netcraft Survey, it is the most widely used Web server worldwide. Apache is an enhancement of the legendary NCSA server. The name Apache itself has nothing to do with the Native American tribe; it is a construct from "A Patchy Server", because the first version was put together from various code and patch files.

Besides Apache there are numerous other Web servers, including many for Linux. Most of them are under the GPL (like Apache) and can be used license-free. A very extensive overview can be found at "/". Every Web server has its advantages and disadvantages. Some are developed for specific functions and have very special qualities; some distinguish themselves by their response rate under many simultaneous requests, or by the variety of their configuration settings; others are designed to need minimal resources and offer very few settings, as well as only one connection to a client.

The most important point for an embedded Web server is its actual resource requirements. Embedded systems sometimes offer only minimal resources, which mostly have to be shared with Linux. Meanwhile there are numerous high-performance 32-bit 386/486 microcontroller or (Strong)ARM-based embedded systems that own just 8 Mbytes of RAM and 2 Mbytes of flash ROM (figure 4). From this ROM (read-only memory, i.e. flash memory chips) a complete Linux, based on a 2.2 or 2.4 kernel with TCP/IP protocol stack and Web server, is booted. HTML pages and the programs that generate the dynamic Web pages are also stored in the ROM. The footprint of such an embedded system is comparable to a slightly oversized postage stamp, so it is quite understandable that there is no room for a powerful Web server like Apache.

Figure 4: Embedded Web Server Module with StrongARM and Linux

The capability of an Apache is not needed anyway to visualize the copy counter of a photocopier or the status of a coffee machine through Web server and browser. In most cases a single small Web server is quite enough. Two such representatives are boa () and thttpd (). Both Web servers are primarily used in connection with embedded systems running under Linux. The configuration settings for boa and thttpd are sparse, but quite sufficient, and the source code is available to the customer. The executable binary files for these servers are always smaller than 80 Kbytes and can be integrated into most embedded systems without problems. For the dynamic generation of HTML pages, both servers offer only CGI (Common Gateway Interface) as an extension; further technologies, like server-side includes (SSI), are not available.

The great difference between an embedded Web server and Apache, next to the limited configuration settings, is the maximum possible number of simultaneous requests. High-performance servers like Apache immediately spawn a separate process for every incoming client request, and all further steps are executed inside that process. This requires very good programming and a lot of free memory at run time, but many Web browsers can then access the server simultaneously. Embedded Web servers like boa and thttpd work with one single process. If two users access an embedded Web server simultaneously, one of them has to wait a few fractions of a second. In the embedded environment that is absolutely justifiable: here it is first of all a question of remote maintenance, remote configuration, and similar tasks, and not many simultaneous requests are expected.
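The two models can be sketched in a few lines of C. Assuming POSIX fork() and the hypothetical handle_request() handler from the file-serving sketch above, an Apache-style multi-process server differs from the boa/thttpd-style single-process loop only in what follows accept():

#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

/* Hypothetical per-connection handler, e.g. the file-serving logic above. */
void handle_request(int client);

void serve_multi_process(int server)     /* Apache style */
{
    for (;;) {
        int client = accept(server, NULL, NULL);
        if (fork() == 0) {               /* child serves this one client */
            handle_request(client);
            close(client);
            _exit(0);
        }
        close(client);                   /* parent returns straight to accept() */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;                            /* reap any finished children */
    }
}

void serve_single_process(int server)    /* boa/thttpd style */
{
    for (;;) {
        int client = accept(server, NULL, NULL);
        handle_request(client);          /* a second client simply waits here */
        close(client);
    }
}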
Beijing University of Technology Graduation Project (Translation)
Translation: Web Server for Embedded Systems
After the "everybody in the Internet" wave, an "everything in the Internet" wave is now clearly following.
Chinese translation: 1 What is Flash
Flash is an authoring tool that designers and developers use to create presentations, applications, and other content that allows user interaction. A Flash project can include simple animation, video content, complex presentations and applications, and anything in between. In general, the individual pieces of content made with Flash are called applications, even if they are only simple animations. You can build media-rich Flash applications by adding pictures, sound, video, and special effects.

Flash is especially well suited to creating content for delivery over the Internet, because its files are very small. Flash achieves this through its extensive use of vector graphics. Vector graphics require much less memory and storage space than bitmap graphics, because they are represented by mathematical formulas rather than large data sets; bitmap graphics are larger because every pixel in the image requires a separate piece of data to represent it.

To build an application in Flash, you create graphics with the Flash drawing tools and import additional media elements into your Flash document. Next, you define how and when each element should be used to create the application you have in mind. When you author content in Flash, you work in a Flash document file; Flash documents have the file extension .fla (FLA).

A Flash document has four main parts. The Stage is where graphics, video, buttons, and other content appear during playback. The Timeline tells Flash when graphics and other project elements should appear; it can also be used to specify the layering order of graphics on the Stage, where graphics in higher layers appear above graphics in lower layers. The Library panel is where Flash displays the list of media elements in the Flash document. ActionScript code can be used to add interactivity to the media elements in the document: for example, you can add code so that a new image is displayed when the user clicks a button, and you can use ActionScript to add logic to an application. Logic enables the application to behave in different ways depending on the user's actions and other conditions. Flash includes two versions of ActionScript to meet authors' different needs. For details about writing ActionScript, see "Learning ActionScript 2.0 in Flash" in the Help panel.
Graduation Project (Thesis) Literature Translation, English Source: Computer Networks and Database

Networks
Several reasons are causing centralized computer systems to give way to networks. The first is that many organizations already have a substantial number of computers in operation, often located far apart. Initially each of these computers may have worked in isolation from the others, but at some point management may decide to connect them in order to correlate information about the entire organization. Generally speaking, the goal is to make all programs, data, and other resources available to anyone on the network, without regard to the physical location of the resource or the user. The second reason is to provide high reliability by having alternative sources of supply: with a network, the temporary loss of a single computer is much less serious, because its users can often be accommodated elsewhere until the service is restored. Yet another reason for setting up a computer network is that it can provide a powerful communication medium among widely separated people.

Application of Networks
One of the main areas of potential network use is access to remote databases. It may someday be easy for people sitting at terminals at home to make reservations for airplanes, trains, buses, boats, restaurants, theaters, hotels, and so on, anywhere in the world, with instant confirmation. Home banking, automated newspapers, and fully automated libraries also fall into this category. Computer-aided education is another possible field for network use, with many different courses being offered. Teleconferencing is a whole new form of communication: widely separated people can conduct a meeting by typing messages at their terminals, and attendees may leave at will and find out what they missed when they come back. International contacts between people may be greatly enhanced by network-based communication facilities.

Network Structure
Broadly speaking, there are two general types of designs for the communication subnet:
(1) Point-to-point channels
(2) Broadcast channels
In the first, the network contains numerous cables or leased telephone lines, each one connecting a pair of nodes. If two nodes that do not share a cable wish to communicate, they must do so indirectly via other nodes. When a message is sent from one node to another via one or more intermediate nodes, each intermediate node receives the message and stores it until the required output line is free, and then transmits the message onward. A subnet using this principle is called a point-to-point or store-and-forward subnet. When a point-to-point subnet is used, the important design problem is the interconnection topology between the nodes.
The second kind of communication architecture uses broadcasting. In this design there is a single communication channel shared by all nodes. Inherent in broadcast systems is the fact that messages sent by any node are received by all other nodes.

The ISO Reference Model
The Reference Model of Open Systems Interconnection (OSI), as ISO calls it, has seven layers. The major principles that were applied to arrive at the seven layers are as follows:
(1) A layer should be created where a different level of abstraction is needed.
(2) Each layer should perform a well-defined function.
(3) The function of each layer should be chosen with an eye toward defining internationally standardized protocols.
(4) The layer boundaries should be chosen to minimize the information flow across the interfaces.
(5) The number of layers should be large enough that distinct functions need not be thrown together in the same layer out of necessity, and small enough that the architecture does not become unwieldy.
The Physical Layer
The physical layer is concerned with transmitting raw bits over a communication channel. Typical questions here are how many volts should be used to represent a 1 and how many a 0, how many microseconds a bit occupies, whether transmission may proceed simultaneously in both directions, how the initial connection is established and released when both sides are finished, and what function each pin has. The design issues here largely deal with mechanical, electrical, and procedural interfacing to the subnet.

The Data Link Layer
The task of the data link layer is to take a raw transmission facility and transform it into a line that appears free of transmission errors to the network layer. It accomplishes this by breaking the input data up into data frames, transmitting the frames sequentially, and processing the acknowledgment frames sent back by the receiver. Since the physical layer merely accepts and transmits a stream of bits without any regard to meaning or structure, it is up to the data link layer to create and recognize frame boundaries. This can be accomplished by attaching special bit patterns to the beginning and end of each frame. Two problems arise. One is that a noise burst on the line can destroy a frame completely; in this case the software in the source machine must retransmit the frame. The other is that some mechanism must be employed to let the transmitter know how much buffer space the receiver has at the moment.

The Network Layer
The network layer controls the operation of the subnet. It determines the chief characteristics of the node-host interface and how packets, the units of information exchanged in this layer, are routed within the subnet. What this layer of software does, basically, is accept messages from the source host, convert them to packets, and see to it that the packets reach the destination. The key design issue is how the route is determined: routes could be based on static tables that are "wired into" the network and rarely changed, or they could be determined in a highly dynamic manner, anew for each packet, to reflect the current network load.

The Transport Layer
The basic function of the transport layer is to accept data from the session layer, split it up into smaller units if necessary, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. This layer is a true end-to-end layer: in other words, a program on the source machine carries on a conversation with a similar program on the destination machine, using message headers and control messages.
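The framing task of the data link layer can be illustrated with byte stuffing: a special flag byte marks the beginning and end of each frame, and any occurrence of the flag (or of the escape byte itself) inside the payload is preceded by an escape byte, so that the receiver can recognize the true boundaries. A minimal sketch, with flag and escape values chosen arbitrarily for the example:

#include <stddef.h>

#define FLAG 0x7E   /* marks start and end of a frame (example value) */
#define ESC  0x7D   /* escape for FLAG/ESC occurring inside the payload */

/* Wrap a payload into a frame; returns the framed length.
   'out' must be able to hold 2*len + 2 bytes in the worst case. */
size_t frame(const unsigned char *data, size_t len, unsigned char *out)
{
    size_t o = 0;
    out[o++] = FLAG;                      /* frame boundary: start */
    for (size_t i = 0; i < len; i++) {
        if (data[i] == FLAG || data[i] == ESC)
            out[o++] = ESC;               /* stuff an escape before the byte */
        out[o++] = data[i];
    }
    out[o++] = FLAG;                      /* frame boundary: end */
    return o;
}

The receiver applies the rules in reverse: a byte following ESC is taken literally, while an unescaped FLAG closes the frame.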
The Session Layer
Through the session layer, the user must negotiate to establish a connection with a process on another machine. The connection is usually called a session. A session might be used to allow a user to log into a remote time-sharing system or to transfer a file between two machines. The operation of setting up a session between two processes is often called binding. Another function of the session layer is to manage the session once it has been set up.

The Presentation Layer
The presentation layer could be designed to accept ASCII strings as input and produce compressed bit patterns as output; this function of the presentation layer is called text compression. The layer can also perform other transformations. Encryption to provide security is one possibility. Conversion between character codes, such as ASCII to EBCDIC, might often be useful. More generally, different computers usually have incompatible file formats, so a file conversion option might be useful at times.

The Application Layer
Many issues occur here, for example all the issues of network transparency, that is, hiding the physical distribution of resources from the user. Another issue is problem partitioning: how to divide the problem among the various machines in order to take maximum advantage of the network.

2. Database system
The concepts used for describing files and databases have varied substantially from one organization to another. A database may be defined as a collection of interrelated data stored together with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use them, and a common, controlled approach is used in adding new data and in modifying and retrieving existing data within the database. A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, a DBMS, and the database itself.

One of the most important characteristics of most databases is that they will constantly need to change and grow. Easy restructuring of the database must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a database can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation. The term data independence is often quoted as one of the main attributes of a database: it implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items.

To a large extent, database organization is concerned with representing relationships between the data items and the things about which we store information, referred to as entities. An entity may be tangible or intangible; it has various properties which we may wish to record, and it describes the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity. An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple; the entity identifier consists of one or more attributes. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.
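The role of the primary key can be made concrete with a toy addressing algorithm: hash the key to a table address and probe from there. The sketch below is a deliberately simplified stand-in for a real DBMS index; it stores and locates records by their primary key in a fixed-size in-memory table:

#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 101        /* small fixed table for the example */

struct record {               /* one tuple: the key plus its attributes */
    char key[16];             /* primary key, e.g. an employee number */
    char name[32];
    int  in_use;
};

static struct record table[TABLE_SIZE];

/* Trivial hash: turn the primary key into a table address. */
static unsigned hash(const char *key)
{
    unsigned h = 0;
    while (*key)
        h = h * 31 + (unsigned char)*key++;
    return h % TABLE_SIZE;
}

/* Store a record under its primary key, using linear probing. */
static int store(const char *key, const char *name)
{
    for (unsigned i = 0, h = hash(key); i < TABLE_SIZE; i++) {
        struct record *r = &table[(h + i) % TABLE_SIZE];
        if (!r->in_use || strcmp(r->key, key) == 0) {
            snprintf(r->key, sizeof(r->key), "%s", key);
            snprintf(r->name, sizeof(r->name), "%s", name);
            r->in_use = 1;
            return 0;
        }
    }
    return -1;                /* table full */
}

/* Locate a record by primary key: hash, then probe until found or empty. */
static struct record *find(const char *key)
{
    for (unsigned i = 0, h = hash(key); i < TABLE_SIZE; i++) {
        struct record *r = &table[(h + i) % TABLE_SIZE];
        if (!r->in_use)
            return NULL;                      /* key not present */
        if (strcmp(r->key, key) == 0)
            return r;                         /* unique match on the key */
    }
    return NULL;
}

int main(void)
{
    store("E1042", "J. Smith");
    struct record *r = find("E1042");
    if (r)
        printf("%s -> %s\n", r->key, r->name);
    return 0;
}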
If the function of a database were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. The data can be described either logically or physically; the logical database description is referred to as a schema. A schema is a chart of the types of data that are used: it gives the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted. We must distinguish between a record type and an instance of that record: when we talk about a "personnel record", this is really a record type, with no data values associated with it. The term schema means an overall chart of all the data types and record types stored in a database, while the term subschema refers to an application programmer's view of the data he uses. Many different subschemas can be derived from one schema.

The schema and the subschemas are both used by the database management system, whose primary function is to serve the application programs by executing their data operations. A DBMS will usually be handling multiple data calls concurrently and must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small databases, many databases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss the data model in the following.

ASP.NET
ASP.NET is part of Microsoft's overall .NET framework, which contains a vast set of programming classes designed to satisfy any conceivable programming need. In the following two sections, you learn how ASP.NET fits within the .NET framework, and you learn about the languages you can use in your ASP.NET pages.

The .NET Framework Class Library
Imagine that you are Microsoft. Imagine that you have to support multiple programming languages, such as Visual Basic, JScript, and C++. A great deal of the functionality of these programming languages overlaps. For example, for each language you would have to include methods for accessing the file system, working with databases, and manipulating strings. Furthermore, these languages contain similar programming constructs. Every language, for example, can represent loops and conditionals: even though the syntax of a conditional written in Visual Basic differs from the syntax of a conditional written in C++, the programming function is the same. Finally, most programming languages have similar variable data types. In most languages you have some means of representing strings and integers, for example; the maximum and minimum size of an integer might depend on the language, but the basic data type is the same. Maintaining all this functionality for multiple languages requires a lot of work.
Why keep reinventing the wheel? Wouldn't it be easier to create all this functionality once and use it for every language?

The .NET Framework Class Library does exactly that. It consists of a vast set of classes designed to satisfy any conceivable programming need. For example, the .NET framework contains classes for handling database access, working with the file system, manipulating text, and generating graphics. In addition, it contains more specialized classes for performing tasks such as working with regular expressions and handling network protocols. The .NET framework, furthermore, contains classes that represent all the basic variable data types such as strings, integers, bytes, characters, and arrays.

Most importantly, for purposes of this book, the .NET Framework Class Library contains classes for building ASP.NET pages. You need to understand, however, that you can access any of the .NET framework classes when you are building your ASP.NET pages.

Understanding Namespaces
As you might guess, the .NET framework is huge. It contains thousands of classes (over 3,400). Fortunately, the classes are not simply jumbled together: they are organized into a hierarchy of namespaces.

ASP Classic Note
In previous versions of Active Server Pages, you had access to only five standard classes (the Response, Request, Session, Application, and Server objects). ASP.NET, in contrast, provides you with access to over 3,400 classes!

A namespace is a logical grouping of classes. For example, all the classes that relate to working with the file system are gathered together into the System.IO namespace. The namespaces are organized into a hierarchy (a logical tree). At the root of the tree is the System namespace, which contains all the classes for the base data types, such as strings and arrays, as well as classes for working with random numbers and dates and times. You can uniquely identify any class in the .NET framework by using its full namespace. For example, to uniquely refer to the class that represents a file system file (the File class), you would use the following:

System.IO.File

System.IO refers to the namespace, and File refers to the particular class.

NOTE
You can view all the namespaces of the standard classes in the .NET Framework Class Library by viewing the Reference Documentation for the .NET Framework.

Standard Namespaces
The classes contained in a select number of namespaces are available in your ASP.NET pages by default. (You must explicitly import other namespaces.) These default namespaces contain classes that you use most often in your ASP.NET applications:
•System— Contains all the base data types and other useful classes such as those related to generating random numbers and working with dates and times.
•System.Collections— Contains classes for working with standard collection types such as hash tables and array lists.
•System.Collections.Specialized— Contains classes that represent specialized collections such as linked lists and string collections.
•System.Configuration— Contains classes for working with configuration files (Web.config files).
•System.Text— Contains classes for encoding, decoding, and manipulating the contents of strings.
•System.Text.RegularExpressions— Contains classes for performing regular expression match and replace operations.
•System.Web— Contains the basic classes for working with the World Wide Web, including classes for representing browser requests and server responses.
•System.Web.Caching— Contains classes used for caching the content of pages and classes for performing custom caching operations.
•System.Web.Security— Contains classes for implementing authentication and authorization, such as Forms and Passport authentication.
•System.Web.SessionState— Contains classes for implementing session state.
•System.Web.UI— Contains the basic classes used in building the user interface of ASP.NET pages.
•System.Web.UI.HTMLControls— Contains the classes for the HTML controls.
•System.Web.UI.WebControls— Contains the classes for the Web controls.

.NET Framework-Compatible Languages
For purposes of this book, you will write the application logic for your ASP.NET pages using Visual Basic as your programming language. It is the default language for ASP.NET pages (and the most popular programming language in the world). Although you stick to Visual Basic in this book, you also need to understand that you can create ASP.NET pages using any language that supports the .NET Common Language Runtime. Out of the box, this includes C# (pronounced See Sharp), JScript.NET (the .NET version of JavaScript), and the Managed Extensions to C++.

NOTE
The CD included with this book contains C# versions of all the code samples.

Dozens of other languages created by companies other than Microsoft have been developed to work with the .NET framework. Some examples of these other languages include Python, SmallTalk, Eiffel, and COBOL. This means that you could, if you really wanted to, write ASP.NET pages using COBOL.

Regardless of the language that you use to develop your ASP.NET pages, you need to understand that the pages are compiled before they are executed. This means that ASP.NET pages can execute very quickly. The first time you request an ASP.NET page, it is compiled into a .NET class, and the resulting class file is saved beneath a special directory on your server named Temporary ASP.NET Files. For each and every ASP.NET page, a corresponding class file appears in this directory. Whenever you request the same page in the future, the corresponding class file is executed.

When an ASP.NET page is compiled, it is not compiled directly into machine code. Instead, it is compiled into an intermediate-level language called Microsoft Intermediate Language (MSIL). All .NET-compatible languages are compiled into this intermediate language. An ASP.NET page isn't compiled into native machine code until it is actually requested by a browser; at that point, the class file contained in the Temporary ASP.NET Files directory is compiled with the .NET framework Just in Time (JIT) compiler and executed. The magical aspect of this whole process is that it happens automatically in the background. All you have to do is create a text file with the source code for your page, and the .NET framework handles all the hard work of converting it into compiled code for you.

ASP Classic Note
What about VBScript? Before ASP.NET, VBScript was the most popular language for developing Active Server Pages. ASP.NET does not support VBScript, and this is good news. Visual Basic is a superset of VBScript, which means that Visual Basic has all the functionality of VBScript and more, so you have a richer set of functions and statements with Visual Basic. Furthermore, unlike VBScript, Visual Basic is a compiled language: if you use Visual Basic to rewrite the same code that you wrote with VBScript, you can get better performance. If you have worked only with VBScript and not Visual Basic in the past, don't worry.
Since VBScript is so closely related to Visual Basic, you'll find it easy to make the transition between the two languages.

NOTE
Microsoft includes an interesting tool named the IL Disassembler (ILDASM) with the .NET framework. You can use this tool to view the disassembled code for any of the classes in the Temporary ASP.NET Files directory: it lists all the methods and properties of a class and enables you to view the intermediate-level code. The tool also works with all the controls discussed in this chapter; for example, you can use the IL Disassembler to view the intermediate-level code for the TextBox control (located in a file named System.Web.dll).

About Modems
Telephone lines were designed to carry the human voice, not electronic data from a computer. Modems were invented to convert digital computer signals into a form that allows them to travel over the phone lines; those are the scratchy sounds you hear from a modem's speaker. A modem on the other end of the line can understand the sounds and convert them back into digital information that the computer can understand. By the way, the word modem stands for MOdulator/DEModulator.

Buying and using a modem used to be relatively easy. Not too long ago, almost all modems transferred data at a rate of 2400 bps (bits per second). Today, modems not only run faster, they are also loaded with features like error control and data compression. So, in addition to converting and interpreting signals, modems also act like traffic cops, monitoring and regulating the flow of information so that one computer doesn't send information until the receiving computer is ready for it. Each of these features (modulation, error control, and data compression) requires a separate kind of protocol, and that is what terms like V.32, V.32bis, V.42bis, and MNP5 refer to.

If your computer didn't come with an internal modem, consider buying an external one, because it is much easier to install and operate. For example, when your modem gets stuck (not an unusual occurrence), you need to turn it off and on to get it working properly. With an internal modem, that means restarting your computer, which is a waste of time; with an external modem it's as easy as flipping a switch. Here's a tip: in most areas, if you have Call Waiting, you can disable it by inserting *70 in front of the number you dial to connect to the Internet (or any online service). This will prevent an incoming call from accidentally kicking you off the line.

This table illustrates the relative difference in data transmission speeds for different types of files. A modem's speed is measured in bits per second (bps): a 14.4 modem sends data at 14,400 bits per second, and a 28.8 modem is twice as fast, sending and receiving data at a rate of 28,800 bits per second.

Until nearly the end of 1995, the conventional wisdom was that 28.8 Kbps was about the fastest speed you could squeeze out of a regular copper telephone line. Today, you can buy 33.6 Kbps modems, and modems capable of 56 Kbps. The key question for you is what speed modems your Internet service provider (ISP) has: if your ISP has only 28.8 Kbps modems on its end of the line, you could have the fastest modem in the world and still only be able to connect at 28.8 Kbps. Before you invest in a 33.6 Kbps or a 56 Kbps modem, make sure your ISP supports them.

Speed It Up
There are faster ways to transmit data, using an ISDN or leased line. In many parts of the U.S., phone companies are offering home ISDN at less than $30 a month.
ISDN requires a so-called ISDN adapter instead of a modem, and a phone line with a special connection that allows it to send and receive digital signals. You have to arrange with your phone company to have this equipment installed. For more about ISDN, visit Dan Kegel's ISDN Page. An ISDN line has a data transfer rate of between 57,600 bits per second and 128,000 bits per second, which is at least double the rate of a 28.8 Kbps modem.

Leased lines come in two configurations: T1 and T3. A T1 line offers a data transfer rate of 1.54 million bits per second. Unlike ISDN, a T-1 line is a dedicated connection, meaning that it is permanently connected to the Internet; this is useful for Web servers and other computers that need to be connected to the Internet all the time. It is possible to lease only a portion of a T-1 line using one of two systems: fractional T-1 or Frame Relay. You can lease them in blocks ranging from 128 Kbps to 1.5 Mbps. The differences are not worth going into in detail, but fractional T-1 will be more expensive at the slower available speeds, and Frame Relay will be slightly more expensive as you approach the full T-1 speed of 1.5 Mbps. A T-3 line is significantly faster, at 45 million bits per second; the backbone of the Internet consists of T-3 lines. Leased lines are very expensive and are generally used only by companies whose business is built around the Internet or who need to transfer massive amounts of data. ISDN, on the other hand, is available in some cities for a very reasonable price. Not all phone companies offer residential ISDN service, so check with your local phone company for availability in your area.

Cable Modems
A relatively new development is a device that provides high-speed Internet access via a cable TV network. With speeds of up to 36 Mbps, cable modems can download data in seconds that might take fifty times longer with a dial-up connection. Because it works with your TV cable, it doesn't tie up a telephone line. Best of all, it's always on, so there is no need to connect, and no more busy signals! This service is now available in some cities in the United States and Europe.

The download times in the table above are relative and are meant to give you a general idea of how long it would take to download different sized files at different connection speeds, under the best of circumstances. Many things can interfere with the speed of your file transfer, from excessive line noise on your telephone line and the speed of the web server from which you are downloading files, to the number of other people simultaneously trying to access the same file or other files in the same directory.

DSL
DSL (Digital Subscriber Line) is another high-speed technology that is becoming increasingly popular. DSL lines are always connected to the Internet, so you don't need to dial up. Typically, data can be transferred at rates up to 1.544 Mbps downstream and about 128 Kbps upstream over ordinary telephone lines. Since a DSL line carries both voice and data, you don't have to install another phone line: you can use your existing line to establish DSL service, provided service is available in your area and you are within the specified distance from the telephone company's central switching office. DSL service requires a special modem. Prices for equipment, DSL installation, and monthly service can vary considerably, so check with your local phone company and Internet service provider. The good news is that prices are coming down as competition heats up.
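The relative download times discussed above follow directly from the line rate: a file of S bytes takes roughly 8*S divided by the rate in bits per second, ignoring protocol overhead and line noise. A small sketch of that arithmetic, using rates quoted in the text:

#include <stdio.h>

/* Seconds to move a file of 'bytes' over a link of 'bits_per_second',
   ignoring protocol overhead, compression, and line noise. */
static double transfer_seconds(double bytes, double bits_per_second)
{
    return bytes * 8.0 / bits_per_second;
}

int main(void)
{
    const double file = 1024.0 * 1024.0;   /* a 1-Mbyte file */
    printf("28.8 Kbps modem: %8.1f s\n", transfer_seconds(file, 28800.0));
    printf("ISDN (128 Kbps): %8.1f s\n", transfer_seconds(file, 128000.0));
    printf("T-1 (1.54 Mbps): %8.1f s\n", transfer_seconds(file, 1540000.0));
    printf("T-3 (45 Mbps):   %8.1f s\n", transfer_seconds(file, 45000000.0));
    return 0;
}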
The good news is that prices are coming down as competition heats up.The NetWorksBirth of the NetThe Internet has had a relatively brief, but explosive history so far. It grew out of an experiment begun in the 1960's by the U.S. Department of Defense. The DoD wanted to create a computer network that would continue to function in the event of a disaster, such as a nuclear war. If part of the network were damaged or destroyed, the rest of the system still had to work. That network was ARPANET, which linked U.S. scientific and academic researchers. It was the forerunner of today's Internet.In 1985, the National Science Foundation (NSF) created NSFNET, a series of networks for research and education communication. Based on ARPANET protocols, the NSFNET created a national backbone service, provided free to any U.S. research and educational institution. At the same time, regional networks were created to link individual institutions with the national backbone service.NSFNET grew rapidly as people discovered its potential, and as new software applications were created to make access easier. Corporations such as Sprint and MCI began to build their own networks, which they linked to NSFNET. As commercial firms and other regional network providers have taken over the operation of the major Internet arteries, NSF has withdrawn from the backbone business.。
Foreign Literature: PLC Technique Discussion and Future Development

As technology has matured and competition has intensified, operation that depends on manual labor alone can no longer satisfy the prospects of modern manufacturing, nor can it guarantee the higher quality demanded of a high-tech enterprise.

Production practice has shown that automation brings tremendous convenience, guarantees product quality, eases the labor intensity of personnel, and reduces staffing. Goals that are hard to realize on complicated production lines, such as target control, overall optimization, and optimal decision-making, can be judged and handled with ease by well-trained operators, technicians, or experts, with satisfactory results. The research goal of artificial intelligence is precisely to use the computer to imitate such intelligent behavior and, by combining the human brain with the computer in a man-machine mode, to find the best path through very complicated problems.

We used to see relay-based control everywhere, but that generation is past; today the relay serves only as a low-level building block in simple grass-roots control or in basic equipment. The emergence of the PLC was an epoch-making step: by adding flexible software control on top of very stable hardware, it carried automation to a new high tide.

The PLC's biggest characteristic is that the electrical engineer no longer has to spend effort on elaborate hardware design. It is enough to wire push-button switches and sensor signals to the PLC's input points, and to connect, through its output points, the contactors or relays that switch high-power starting equipment; low-power output devices can even be connected directly.

Internally, a PLC is made up of three major parts: the CPU, the I/O interface that connects to the outside world and allows expansion, and the memory. The core of the CPU consists of one or more accumulators with logic and arithmetic capability; the CPU reads the contents of program memory, drives the corresponding memory and I/O interfaces, and carries out the calculations. The I/O section links the accumulator-based input/output system with the outside world and deposits the related data into program memory or data memory. The memory stores the data arriving through the I/O and cooperates with the accumulators and I/O interfaces during operation; it is divided into program memory (ROM) and data memory (RAM). The ROM holds its data permanently, while the RAM serves only as temporary buffer space for the CPU's calculations.

The PLC's immunity to interference is excellent: we basically need not worry about its service life or about harsh working conditions, and these problems are no longer causes of failure. What remains for us is to exploit the PLC's internal resources to strengthen the control ability of our equipment and make it more flexible.
The PLC is not programmed in assembly language or in C, as one might imagine, but in the ladder diagram that originated with relay control. This makes PLC programs very easy for electrical engineers to understand, and many people outside the electrical profession also come to know the PLC, and to know it well, very quickly.

That is only one of the PLC's advantages, though it is the one most easily appreciated. On much equipment, people no longer want to see large numbers of control buttons: buttons wear out easily and, worse, they invite human error. A small, non-critical error may still be acceptable, but a serious, even fatal error is something we cannot tolerate. New technology always aims to bring us safer and more convenient operation and to sweep away many of the problems we face. Do you know the HMI? The abbreviation alone may mean little, but put into plain words it is the touch panel, or man-machine interface, and in combination with the PLC it opens up much greater possibilities.

HMI control does more than reduce the number of push buttons and make control more flexible. More importantly, it can be programmed in sequences; it can take in changed data, display output and feedback data, and show analog quantities such as temperature curves in an intuitive, visual form; and help functions can be programmed into it to give the operator every possible assistance and reduce needless errors. HMI vendors are now more and more numerous, the functionality ever stronger, the prices ever lower, and the range of application ever wider. The outlook for the HMI can be called very good indeed.

In many situations, stand-alone control of a single machine cannot guarantee smooth operation of the equipment; the desired result is reached by exchanging information between one machine and another. Take, for example, a packing station and the inspection station of the following process: packing information must be fed forward to the inspection station, and inspection information fed back to packing. Sharing information in this way links the two into a chain and makes them one whole; their cooperation becomes much closer, and each can react to the other.

PLC communication now shows its value all the more. Through communication between PLCs, information and data can be shared so that the machines coordinate with and complement each other. For data exchange between PLCs, an RS232 connection can be used, but an RS232 interface only guarantees a transmission distance of about 10 meters; over distances up to 1000 meters we can communicate via RS485, and still longer distances can only be covered with a modem.

PLC data transmission amounts to handing the other party a so-called table, a block of internal data at consecutive addresses; the receiving PLC reads the data in the table and acts on it.
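As a sketch of what such a table transfer looks like from a PC-class host, the following sends a block of 16-bit register values over a serial port. It assumes a POSIX serial device (the name /dev/ttyS0 passed by the caller is only a placeholder) and a bare byte stream; a real PLC link would add station addresses, function codes, and a checksum according to its protocol:

#include <fcntl.h>
#include <stdint.h>
#include <termios.h>
#include <unistd.h>

/* Send a table of 16-bit register values over a serial line.
   The framing here is deliberately bare; real PLC protocols add
   addressing and error-check fields around the data. */
int send_table(const char *device, const uint16_t *table, int count)
{
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio = { 0 };
    cfsetospeed(&tio, B9600);            /* 9600 baud, a common PLC default */
    tio.c_cflag |= CS8 | CLOCAL | CREAD; /* 8 data bits, ignore modem lines */
    tcsetattr(fd, TCSANOW, &tio);

    for (int i = 0; i < count; i++) {
        uint8_t bytes[2] = { (uint8_t)(table[i] >> 8),
                             (uint8_t)(table[i] & 0xFF) };  /* high byte first */
        write(fd, bytes, 2);
    }
    close(fd);
    return 0;
}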
If the table holds ordinary values, this is plain data transmission. If today's fuel price rises, for instance, and I want to send the new price to the fuel dispenser, that is simply data sharing. But if the table is to carry an instruction program that controls the other PLC, matters become much more difficult: to make a robot carry out the actions you have in mind, you must draw up for it a table combining program and data and send it across.

Information transfer may be simplex, half-duplex, or full-duplex. Simplex means that one side can only send and the other only receive; a scout, for example, can receive instructions from his superior but cannot reply. Half-duplex means that both sides can send and receive, but not at the same time: on a half-duplex radio call, while you are speaking you cannot listen, and the same holds for the other party. Full-duplex means that both sides can send and receive, and can do so simultaneously; the Internet is a typical example.

Information transfer is further divided into synchronous and asynchronous. In synchronous transmission, a clock line accompanies the data line: the data signal and the clock signal are sent out by the CPU together, so both ends need a dedicated clock signal for transmitting and receiving and are tightly constrained by it. The strength of this method is its speed; against that, the communication occupies the CPU for a comparatively long time, and the technical difficulty is great. No error can be tolerated within a transfer, otherwise the whole data block arrives wrong, which places heavy demands on the hardware. It is applied more and more widely in special-purpose equipment, such as dedicated medical devices and digital signal equipment (DSPs and the like), where block transfers of data give very good results.

Asynchronous transfer is the most widely applied, which it owes to its relatively small technical difficulty and to the fact that no dedicated clock signal is needed. Its characteristic is that the data is partitioned and sent and received intermittently; when the CPU is busy, the transfer can simply pause for a while, which also reduces the difficulty on the hardware side, and comparatively little data is lost. Whether the data we send contains errors can be checked by methods such as parity checking, additive checksums, and similar block checks, with the feedback used to decide whether a retransmission is needed.
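The error checks just mentioned are easy to sketch. Below, one routine computes an additive checksum over a data block and another the even-parity bit of a single byte; the receiver recomputes both and compares them with the transmitted values. This is a generic sketch, not any particular PLC's protocol:

#include <stdint.h>
#include <stddef.h>

/* Additive checksum: sum all bytes, keep the low 8 bits.
   The sender appends it; the receiver recomputes and compares. */
uint8_t additive_checksum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (uint8_t)(sum + data[i]);
    return sum;
}

/* Even-parity bit of one byte: 1 if the number of 1-bits is odd,
   so that data plus parity always holds an even number of 1-bits. */
uint8_t even_parity(uint8_t byte)
{
    uint8_t p = 0;
    while (byte) {
        p ^= byte & 1;    /* flip for every 1-bit */
        byte >>= 1;
    }
    return p;
}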
A transmission line carries information either serially or in parallel. The usual PLC is an 8-bit machine, though there are also 16-bit machines. When sending data we can send it to the other party one bit at a time, or eight bits at a time; one bit versus eight bits at once is precisely the difference between what we call serial and parallel transmission.

Serial transmission is slower, but two or three wires are enough to solve the problem, and a telephone line can be used for long-range control. Parallel transmission is very fast, many times the serial rate, and holds the advantage over short distances; but since it works at TTL voltage levels it is generally limited to a range of about one meter, so it is unsuitable for long-haul data transmission, and the cabling cost is too high.

In many circumstances we prefer to use a serial-to-parallel conversion chip to carry out the transfer. In that case we need not set up the registers in any complicated way, but exchange data directly through data-transfer instructions. It is not, however, a very workable way to communicate, because the other PLC must sit waiting for your data output the whole time you are sending; it can do no other work.

Suppose you are reading a book and hear a knock at the door. You put down what you are doing, open the door, and strike up a conversation with the visitor. At that moment the telephone rings; you signal your excuses, answer the telephone, and when the call is over come back to the conversation at the door; when that conversation is finished, you go on reading your book. This situation is what we call an interrupt. It carries authority and it takes the initiative, and the PLC has the same function. Its point is that during the operation of equipment we may meet urgent, abrupt events; we then want to stop the current work immediately and go off to handle the more important matter, a situation we meet constantly. When the PLC executes an urgent task it always preserves the current state first, for example the program address and the CPU's accumulator data, just as we note down which page of the book we were on when we opened the door, or simply leave a bookmark, because afterwards we shall still want to pick up where we left off. The CPU always does exactly what we tell it to do; if you mistakenly give it the wrong task, it will do that just the same, and this is something we must watch.

Nor do interrupts come one at a time: several interrupts may exist together at the same moment. Interrupts therefore carry priority levels, and the CPU executes the higher-priority interrupt first, as requested. An interrupt arising inside another interrupt produces interrupt nesting. The available priority levels naturally depend on the CPU and the PLC's various internal resources, and are also tied to the size of the stack.

There are many kinds of interrupt: external interrupts, the send and receive interrupts of communication, timer and counter interrupts, the WDT reset interrupt, and so on. They enrich the categories of events the CPU can respond to while handling its business.
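The save-and-restore behaviour just described can be shown as a toy simulation. Everything here, the context stack, the pretend program counter and accumulator, the handler names, is invented for illustration; it is the bookmark idea in miniature, not real PLC firmware.

```python
# A toy simulation of the bookmark behaviour: before an interrupt
# handler runs, the current "CPU state" (program counter and
# accumulator) is pushed onto a stack; when the handler returns, that
# state is popped and the main program resumes where it left off.
# A higher-priority event may nest inside a lower one, just as the
# phone interrupts the conversation at the door.

saved_contexts = []              # the stack of bookmarked states
cpu = {"pc": 0, "acc": 0}        # pretend program counter and accumulator

def interrupt(name, handler):
    saved_contexts.append(dict(cpu))    # stick the bookmark in the book
    print(f"-> {name}: context saved at pc={cpu['pc']}")
    handler()                           # go deal with the urgent matter
    cpu.update(saved_contexts.pop())    # reopen the book at the same page
    print(f"<- {name}: resumed at pc={cpu['pc']}")

def phone_handler():
    cpu["pc"] = 900                     # the handler freely reuses the CPU

def door_handler():
    cpu["pc"] = 500
    interrupt("phone", phone_handler)   # the phone rings mid-conversation

# The main program: three scan steps, interrupted at step 1.
for step in range(3):
    cpu["pc"] = step
    if step == 1:
        interrupt("door", door_handler)
```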
Put this way, you perhaps still cannot fully grasp the internal structure and sequence of operations of an interrupt, so let us work through a very small example. No piece of equipment ever leaves out one particular button, the one we use when an emergency arises: the emergency stop button.

When we meet danger to the human body or some startling situation, we need only press it and the machine immediately halts all operation, waiting until the emergency has been dealt with before resuming. The emergency stop button is wired to the PLC's internal I/O and thus up to the internal CPU. Pressing the button presents an external trigger signal to the CPU; the CPU examines the I/O, and once it confirms the external trigger, the CPU saves the scene and the program counter jumps automatically to the service routine for the corresponding external I/O. When the external interrupt routine has finished, the program counter returns to the main program, which carries on with its work. One point worth spelling out is that we generally raise the emergency stop's external interrupt to the highest priority level, and so guarantee safety.

When we finish machining a workpiece, a signal is given to the PLC, and an internal counter adds 1, reckoning up our workload for the day. A counter solves this problem simply enough, and counters can also hold their data through a loss of power, so that nothing is lost, which is exactly what we hope for.

The PLC also provides high-speed counters, for occasions when we receive high-speed data, and by high-speed we mean signals down at the microsecond level: a bar-code scanner streaming its readings, for instance, or a DSP processing high-speed signals. There we adopt a high-speed counter to do the counting for us. The moment the PLC, while executing its program, finds that a high-speed counter interrupt is due, it puts down the work in hand; the ladder-diagram program written in advance then makes the routine for the high-speed counter perform its work automatically, which in effect raises the high-speed counter's priority one level higher.

You have perhaps heard the word "crash" all too often. Mostly it means that the CPU's workload is too great, internal resources have run short, and so on, so that the program cannot keep running. The PLC meets similar situations, and for this there is a watchdog, the WDT, inside the PLC. We can set a WDT time for one scan of the program; if, while the program is running, it takes a wrong jump or gets stuck so that its running time exceeds the WDT setting, the CPU forces a WDT reset. The program then restarts from the beginning, and the fault does no further damage.
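A watchdog of this kind can be sketched in a few lines. The timings, function names, and the software check below are invented for illustration; a real WDT is a hardware timer, not a Python comparison.

```python
import time

# A hedged sketch of the watchdog (WDT) idea: each scan of the program
# must finish within the configured WDT time, otherwise the "CPU"
# forces a reset and the program restarts from the beginning.

WDT_SECONDS = 0.1   # the configured maximum time for one program scan

def run_scan_cycle(scan):
    started = time.monotonic()
    scan()                                   # one pass of the ladder logic
    elapsed = time.monotonic() - started
    if elapsed > WDT_SECONDS:
        print(f"WDT expired ({elapsed:.3f}s): reset, program restarts")
        return False                         # signal the forced restart
    return True                              # scan finished in time

def normal_scan():
    time.sleep(0.01)                         # well inside the limit

def stuck_scan():
    time.sleep(0.2)                          # a jam: scan overruns the WDT

run_scan_cycle(normal_scan)   # fine
run_scan_cycle(stuck_scan)    # triggers the simulated reset
```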
PLC development has already moved from the standalone mode into the age of networked communication, sharing work easily with other motion-control boards and I/O boards. Configuration software can link all of this hardware together, exercise control through vivid animated graphics, and even control plant in another region over the Internet; the launch of the Shenzhou V spacecraft used exactly this kind of arrangement to send the ship aloft.

Development at this higher level will take our continuous effort to achieve. The emergence of the PLC has already influenced whole generations of people; we in turn have drawn knowledge and lessons from the experience of the generation before us, continuing to develop PLC technology and pushing it toward a higher crest.

Knowing the available PLC network options and their best applications will ensure an efficient and flexible control system design.

The programmable logic controller's (PLC's) ability to support a range of communication methods makes it an ideal control and data acquisition device for a wide variety of industrial automation and facility control applications. However, there is some confusion because so many possibilities exist. To help eliminate this confusion, let's list what communications are available and when they would be best applied.

To understand the PLC's communications versatility, let's first define the terms used in describing the various systems.

ASCII: This stands for "American Standard Code for Information Interchange." As shown in Fig. 1, when the letter "A" is transmitted, for instance, it's automatically coded as "65" by the sending equipment. The receiving equipment translates the "65" back to the letter "A." Thus, different devices can communicate with each other as long as both use ASCII code.

ASCII module: This intelligent PLC module is used for connecting PLCs to other devices also capable of communicating using ASCII code as a vehicle.

Bus topology: This is a linear local area network (LAN) arrangement, as shown in Fig. 2A, in which individual nodes are tapped into a main communications cable at a single point and broadcast messages. These messages travel in both directions on the bus from the point of connection until they are dissipated by terminators at each end of the bus.

CPU: This stands for "central processing unit," which actually is that part of a computer, PLC, or other intelligent device where arithmetic and logical operations are performed and instructions are decoded and executed.

Daisy chain: This is a description of the connection of individual devices in a PLC network, where, as shown in Fig. 3, each device is connected to the next and communications signals pass from one unit to the next in a sequential fashion.

Distributed control: This is an automation concept in which portions of an automated system are controlled by separate controllers, which are located in close proximity to their area of direct control (control is decentralized and spread out over the system).

Host computer: This is a computer that's used to transfer data to, or receive data from, a PLC in a PLC/computer network.

Intelligent device: This term describes any device equipped with its own CPU.

I/O: This stands for "inputs and outputs," which are modules that handle data to the PLC (inputs) or signals from the PLC (outputs) to an external device.

Kbps: This stands for "thousand bits per second," which is a rate of measure for electronic data transfer.

Mbps: This stands for "million bits per second."

Node: This term is applied to any one of the positions or stations in a network. Each node incorporates a device that can communicate with all other devices on the network.

Protocol: This is the definition of how data is arranged and coded for transmission on a network.

Ring topology: This is a LAN arrangement, as shown in Fig. 2C, in which each node is connected to two other nodes, resulting in a continuous, closed, circular path or loop for messages to circulate, usually in one direction.
Some ring topologies have a special "loop back" feature that allows them to continue functioning even if the main cable is severed.

RS232: This is an EIA standard for serial communications that describes specific wiring connections, voltage levels, and other operating parameters for electronic data communications. There also are several other RS standards defined.

Serial: This is an electronic data transfer scheme in which information is transmitted one bit at a time.

Serial port: This is the communications access point on a device that is set up for serial communications.

Star topology: This is a LAN arrangement in which, as shown in Fig. 2B, nodes are connected to one another through a central hub, which can be active or passive. An active hub performs network duties such as message routing and maintenance. A passive central hub simply passes the message along to all the nodes connected to it.

Topology: This relates to a specific arrangement of nodes in a LAN in relation to one another.

Transparent: This term describes automatic events or processes built into a system that require no special programming or prompting from an operator.

Now that we're familiar with these terms, let's see how they are used in describing the available PLC network options.

PLC network options

PLC networks provide you with a variety of networking options to meet specific control and communications requirements. Typical options include remote I/O, peer-to-peer, and host computer communications, as well as LANs. These networks can provide reliable and cost-effective communications between as few as two or as many as several hundred PLCs, computers, and other intelligent devices.

Many PLC vendors offer proprietary networking systems that are unique and will not communicate with another make of PLC. This is because of the different communications protocols, command sequences, error-checking schemes, and communications media used by each manufacturer.

However, it is possible to make different PLCs "talk" to one another; what's required is an ASCII interface for the connection(s), along with considerable work with software.

Remote I/O systems

A remote I/O configuration, as shown in Fig. 4A, has the actual inputs and outputs at some distance from the controller and CPU. This type of system, which can be described as a "master-and-slave" configuration, allows many distant digital and analog points to be controlled by a single PLC. Typically, remote I/Os are connected to the CPU via twisted pair or fiber optic cables.

Remote I/O configurations can be extremely cost-effective control solutions where only a few I/O points are needed in widely separated areas. In this situation, it's not always necessary, or practical for that matter, to have a controller at each site. Nor is it practical to individually hard wire each I/O point over long distances back to the CPU. For example, remote I/O systems can be used in acquiring data from remote plant or facility locations. Information such as cycle times, counts, duration of events, etc. then can be sent back to the PLC for maintenance and management reporting.

In a remote I/O configuration, the master controller polls the slaved I/O for its current I/O status. The remote I/O system responds, and the master PLC then signals the remote I/O to change the state of outputs as dictated by the control program in the PLC's memory. This entire cycle occurs hundreds of times per second.
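The poll-and-respond cycle just described can be sketched abstractly. This is a hedged illustration of the master/slave pattern only; the class and method names are invented, and a real remote I/O link has its own wire protocol.

```python
# A hedged sketch of the master/slave polling cycle: the master asks
# each remote I/O drop for its input status, runs the control program,
# then commands new output states.

class RemoteDrop:
    """A pretend remote I/O rack with digital inputs and outputs."""
    def __init__(self, inputs):
        self.inputs = inputs          # e.g. {"limit_switch": True}
        self.outputs = {}

    def poll(self):                   # master asks: what is your status?
        return dict(self.inputs)

    def write_outputs(self, commanded):
        self.outputs.update(commanded)

def control_program(inputs):
    """Stand-in for the PLC's logic: run a motor until the switch trips."""
    return {"motor": not inputs["limit_switch"]}

drops = [RemoteDrop({"limit_switch": False}),
         RemoteDrop({"limit_switch": True})]

# One scan of the cycle that, in a real system, repeats hundreds of
# times per second.
for drop in drops:
    status = drop.poll()
    drop.write_outputs(control_program(status))

print([d.outputs for d in drops])   # [{'motor': True}, {'motor': False}]
```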
Peer-to-peer networks

Peer-to-peer networks, as shown in Fig. 4B, enhance reliability by decentralizing the control functions without sacrificing coordinated control. In this type of network, numerous PLCs are connected to one another in a daisy-chain fashion, and a common memory table is duplicated in the memory of each. In this way, when any PLC writes data to this memory area, the information is automatically transferred to all other PLCs in the network. They then can use this information in their own operating programs.

With peer-to-peer networks, each PLC in the network is responsible for its own control site and only needs to be programmed for its own area of responsibility. This aspect of the network significantly reduces programming and debugging complexity; because all communications occur transparently to the user, communications programming is reduced to simple read-and-write statements.

In a peer-to-peer system, there's no master PLC. However, it's possible to designate one of the PLCs as a master for use as a type of group controller. This PLC then can be used to accept input information from an operator input terminal, for example, sending all the necessary parameters to other PLCs and coordinating the sequencing of various events.

Host computer links

PLCs also can be connected with computers or other intelligent devices. In fact, most PLCs, from the small to the very large, can be directly connected to a computer or part of a multidrop host computer network via RS232C or RS422 ports. This combination of computer and controller maximizes the capabilities of the PLC, for control and data acquisition, as well as the computer, for data processing, documentation, and operator interface.

In a PLC/computer network, as shown in Fig. 4C, all communications are initiated by the host computer, which is connected to all the PLCs in a daisy-chain fashion. This computer individually addresses each of its networked PLCs and asks for specific information. The addressed PLC then sends this information to the computer for storage and further analysis. This cycle occurs hundreds of times per second.

Host computers also can aid in programming PLCs; powerful programming and documentation software is available for program development. Programs then can be written on the computer in relay ladder logic and downloaded into the PLC. In this way, you can create, modify, debug, and monitor PLC programs via a computer terminal.

In addition to host computers, PLCs often must interface with other devices, such as operator interface terminals for large security and building management systems. Although many intelligent devices can communicate directly with PLCs via conventional RS232C ports and serial ASCII code, some don't have the software ability to interface with individual PLC models. Instead, they typically send and receive data in fixed formats. It's the PLC programmer's responsibility to provide the necessary software interface.

The easiest way to provide such an interface to fixed-format intelligent devices is to use an ASCII/BASIC module on the PLC. This module is essentially a small computer that plugs into the bus of the PLC. Equipped with RS232 ports and programmed in BASIC, the module easily can handle ASCII communications with peripheral devices, data acquisition functions, programming sequences, "number crunching," report and display generation, and other requirements.
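Since the module's job is to exchange plain ASCII with peripherals, the coding itself is easy to demonstrate. The sketch below shows only ASCII encoding and decoding, as defined in the glossary above; the report string and its trailing carriage return are invented examples of a fixed-format message, not the module's actual command set.

```python
# A small illustration of the ASCII coding used throughout these links:
# the letter "A" really is carried as the number 65 (see Fig. 1), and a
# whole message is just a sequence of such codes.

message = "A"
codes = message.encode("ascii")          # b'A'
print(list(codes))                       # [65]: "A" travels as 65

# Encoding a fixed-format report line as a device might transmit it:
report = "COUNT=0042\r"
wire_bytes = report.encode("ascii")

# The receiving equipment translates the codes back into characters.
print(wire_bytes.decode("ascii").strip())   # COUNT=0042
```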
Access, protocol, and modulation functions of LANs

By using standard interfaces and protocols, LANs allow a mix of devices (PLCs, PCs, mainframe computers, operator interface terminals, etc.) from many different vendors to communicate with others on the network.

Access: A LAN's access method prevents the occurrence of more than one message on the network at a time. There are two common access methods.

Collision detection is where the nodes "listen" to the network and transmit only if there are no other messages on the network. If two nodes transmit simultaneously, the collision is detected and both nodes retransmit until their messages get through properly.

Token passing allows each node to transmit only if it's in possession of a special electronic message called a token. The token is passed from node to node, allowing each an opportunity to transmit without interference. Tokens usually have a time limit to prevent a single node from tying up the token for a long period of time.

Protocol: Network protocols define the way messages are arranged and coded for transmission on the LAN. The following are two common types.

Proprietary protocols are unique message arrangements and coding developed by a specific vendor for use with that vendor's product only.
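The token-passing access method described above lends itself to a short simulation. The node names and their pending-message queues below are invented for illustration; real token-bus and token-ring schemes add timers, error handling, and token regeneration that this sketch omits.

```python
from collections import deque

# A toy simulation of token passing: only the node holding the token
# may transmit, and the token then moves to the next node, so no two
# messages ever collide on the network.

nodes = {
    "PLC-1": deque(["valve open"]),
    "PLC-2": deque([]),                  # nothing to say this round
    "HMI":   deque(["ack", "alarm clear"]),
}
order = list(nodes)        # the fixed order the token circulates in
token_at = 0               # index of the node currently holding the token

for _ in range(6):         # six token passes around the ring
    holder = order[token_at]
    queue = nodes[holder]
    if queue:
        # Holding the token grants the sole right to transmit one message.
        print(f"{holder} transmits: {queue.popleft()}")
    else:
        print(f"{holder} passes the token on")
    token_at = (token_at + 1) % len(order)
```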