Computer Science Graduation Thesis: Foreign Literature Translation and Original Text (Fingerprint Recognition / Operating Systems)
Graduation Project (Thesis) Foreign Reference Material: Original Text and Translation
Source: Cloud Computing, by Michael Miller
The English original follows.

Beyond the Desktop: An Introduction to Cloud Computing

In a world that sees new technological trends bloom and fade on almost a daily basis, one new trend promises more longevity. This trend is called cloud computing, and it will change the way you use your computer and the Internet.

Cloud computing portends a major change in how we store information and run applications. Instead of running programs and data on an individual desktop computer, everything is hosted in the "cloud"—a nebulous assemblage of computers and servers accessed via the Internet. Cloud computing lets you access all your applications and documents from anywhere in the world, freeing you from the confines of the desktop and making it easier for group members in different locations to collaborate.

The emergence of cloud computing is the computing equivalent of the electricity revolution of a century ago. Before the advent of electrical utilities, every farm and business produced its own electricity from freestanding generators. After the electrical grid was created, farms and businesses shut down their generators and bought electricity from the utilities, at a much lower price (and with much greater reliability) than they could produce on their own.

Look for the same type of revolution to occur as cloud computing takes hold. The desktop-centric notion of computing that we hold today is bound to fall by the wayside as we come to expect the universal access, 24/7 reliability, and ubiquitous collaboration promised by cloud computing. It is the way of the future.

Cloud Computing: What It Is—and What It Isn't

With traditional desktop computing, you run copies of software programs on each computer you own. The documents you create are stored on the computer on which they were created.
Although documents can be accessed from other computers on the network, they can't be accessed by computers outside the network. The whole scene is PC-centric.

With cloud computing, the software programs you use aren't run from your personal computer, but are rather stored on servers accessed via the Internet. If your computer crashes, the software is still available for others to use. The same goes for the documents you create; they're stored on a collection of servers accessed via the Internet. Anyone with permission can not only access the documents, but can also edit and collaborate on those documents in real time. Unlike traditional computing, this cloud computing model isn't PC-centric, it's document-centric. Which PC you use to access a document simply isn't important.

But that's a simplification. Let's look in more detail at what cloud computing is—and, just as important, what it isn't.

What Cloud Computing Isn't

First, cloud computing isn't network computing. With network computing, applications/documents are hosted on a single company's server and accessed over the company's network. Cloud computing is a lot bigger than that. It encompasses multiple companies, multiple servers, and multiple networks. Plus, unlike network computing, cloud services and storage are accessible from anywhere in the world over an Internet connection; with network computing, access is over the company's network only.

Cloud computing also isn't traditional outsourcing, where a company farms out (subcontracts) its computing services to an outside firm. While an outsourcing firm might host a company's data or applications, those documents and programs are only accessible to the company's employees via the company's network, not to the entire world via the Internet. So, despite superficial similarities, network computing and outsourcing are not cloud computing.

What Cloud Computing Is

Key to the definition of cloud computing is the "cloud" itself.
For our purposes, the cloud is a large group of interconnected computers. These computers can be personal computers or network servers; they can be public or private. For example, Google hosts a cloud that consists of both smallish PCs and larger servers. Google's cloud is a private one (that is, Google owns it) that is publicly accessible (by Google's users).

This cloud of computers extends beyond a single company or enterprise. The applications and data served by the cloud are available to a broad group of users, cross-enterprise and cross-platform. Access is via the Internet. Any authorized user can access these docs and apps from any computer over any Internet connection. And, to the user, the technology and infrastructure behind the cloud is invisible. It isn't apparent (and, in most cases, doesn't matter) whether cloud services are based on HTTP, HTML, XML, JavaScript, or other specific technologies.

It might help to examine how one of the pioneers of cloud computing, Google, perceives the topic. From Google's perspective, there are six key properties of cloud computing:

·Cloud computing is user-centric. Once you as a user are connected to the cloud, whatever is stored there—documents, messages, images, applications, whatever—becomes yours. In addition, not only is the data yours, but you can also share it with others. In effect, any device that accesses your data in the cloud also becomes yours.

·Cloud computing is task-centric. Instead of focusing on the application and what it can do, the focus is on what you need done and how the application can do it for you. Traditional applications—word processing, spreadsheets, email, and so on—are becoming less important than the documents they create.

·Cloud computing is powerful. Connecting hundreds or thousands of computers together in a cloud creates a wealth of computing power impossible with a single desktop PC.

·Cloud computing is accessible.
Because data is stored in the cloud, users can instantly retrieve more information from multiple repositories. You're not limited to a single source of data, as you are with a desktop PC.

·Cloud computing is intelligent. With all the various data stored on the computers in a cloud, data mining and analysis are necessary to access that information in an intelligent manner.

·Cloud computing is programmable. Many of the tasks necessary with cloud computing must be automated. For example, to protect the integrity of the data, information stored on a single computer in the cloud must be replicated on other computers in the cloud. If that one computer goes offline, the cloud's programming automatically redistributes that computer's data to a new computer in the cloud.

All these definitions behind us, what constitutes cloud computing in the real world? As you'll learn throughout this book, a raft of web-hosted, Internet-accessible, group-collaborative applications are currently available, with many more on the way. Perhaps the best and most popular examples of cloud computing applications today are the Google family of applications—Google Docs & Spreadsheets, Google Calendar, Gmail, Picasa, and the like. All of these applications are hosted on Google's servers, are accessible to any user with an Internet connection, and can be used for group collaboration from anywhere in the world.

In short, cloud computing enables a shift from the computer to the user, from applications to tasks, and from isolated data to data that can be accessed from anywhere and shared with anyone. The user no longer has to take on the task of data management; he doesn't even have to remember where the data is. All that matters is that the data is in the cloud, and thus immediately available to that user and to other authorized users.

From Collaboration to the Cloud: A Short History of Cloud Computing

Cloud computing has as its antecedents both client/server computing and peer-to-peer distributed computing.
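The self-healing replication described under the "programmable" property above can be sketched in a few lines of code. Every name here (Cloud, REPLICAS, and so on) is invented for illustration; real clouds use far more sophisticated placement and failure detection.

```python
import random

REPLICAS = 3  # assumed replication factor for this sketch

class Cloud:
    def __init__(self, machines):
        # each machine holds a set of blob ids
        self.machines = {m: set() for m in machines}

    def store(self, blob_id):
        # place copies on REPLICAS distinct machines
        for m in random.sample(list(self.machines), REPLICAS):
            self.machines[m].add(blob_id)

    def holders(self, blob_id):
        return {m for m, blobs in self.machines.items() if blob_id in blobs}

    def machine_offline(self, dead):
        # the cloud's "programming": re-replicate everything the dead machine held
        lost = self.machines.pop(dead)
        for blob_id in lost:
            alive = self.holders(blob_id)
            spares = [m for m in self.machines if m not in alive]
            if spares and len(alive) < REPLICAS:
                self.machines[random.choice(spares)].add(blob_id)

cloud = Cloud(["m1", "m2", "m3", "m4", "m5"])
cloud.store("doc-42")
cloud.machine_offline("m1")
# doc-42 still has a full set of live copies even though m1 is gone
```

Whether m1 happened to hold a copy or not, the document ends the run with its full replica count, which is the integrity guarantee the paragraph above describes.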
It's all a matter of how centralized storage facilitates collaboration and how multiple computers work together to increase computing power.

Client/Server Computing: Centralized Applications and Storage

In the antediluvian days of computing (pre-1980 or so), everything operated on the client/server model. All the software applications, all the data, and all the control resided on huge mainframe computers, otherwise known as servers. If a user wanted to access specific data or run a program, he had to connect to the mainframe, gain appropriate access, and then do his business while essentially "renting" the program or data from the server.

Users connected to the server via a computer terminal, sometimes called a workstation or client. This computer was sometimes called a dumb terminal because it didn't have a lot (if any!) memory, storage space, or processing power. It was merely a device that connected the user to and enabled him to use the mainframe computer.

Users accessed the mainframe only when granted permission, and the information technology (IT) staff weren't in the habit of handing out access casually. Even on a mainframe computer, processing power is limited—and the IT staff were the guardians of that power. Access was not immediate, nor could two users access the same data at the same time.

Beyond that, users pretty much had to take whatever the IT staff gave them—with no variations. Want to customize a report to show only a subset of the normal information? Can't do it. Want to create a new report to look at some new data? You can't do it, although the IT staff can—but on their schedule, which might be weeks from now.

The fact is, when multiple people are sharing a single computer, even if that computer is a huge mainframe, you have to wait your turn. Need to rerun a financial report? No problem—if you don't mind waiting until this afternoon, or tomorrow morning.
There isn't always immediate access in a client/server environment, and seldom is there immediate gratification. So the client/server model, while providing similar centralized storage, differed from cloud computing in that it did not have a user-centric focus; with client/server computing, all the control rested with the mainframe—and with the guardians of that single computer. It was not a user-enabling environment.

Peer-to-Peer Computing: Sharing Resources

As you can imagine, accessing a client/server system was kind of a "hurry up and wait" experience. The server part of the system also created a huge bottleneck. All communications between computers had to go through the server first, however inefficient that might be. The obvious need to connect one computer to another without first hitting the server led to the development of peer-to-peer (P2P) computing.

P2P computing defines a network architecture in which each computer has equivalent capabilities and responsibilities. This is in contrast to the traditional client/server network architecture, in which one or more computers are dedicated to serving the others. (This relationship is sometimes characterized as a master/slave relationship, with the central server as the master and the client computer as the slave.)

P2P was an equalizing concept. In the P2P environment, every computer is a client and a server; there are no masters and slaves. By recognizing all computers on the network as peers, P2P enables direct exchange of resources and services. There is no need for a central server, because any computer can function in that capacity when called on to do so.

P2P was also a decentralizing concept. Control is decentralized, with all computers functioning as equals. Content is also dispersed among the various peer computers. No centralized server is assigned to host the available resources and services.

Perhaps the most notable implementation of P2P computing is the Internet.
Many of today's users forget (or never knew) that the Internet was initially conceived, under its original ARPAnet guise, as a peer-to-peer system that would share computing resources across the United States. The various ARPAnet sites—and there weren't many of them—were connected together not as clients and servers, but as equals.

The P2P nature of the early Internet was best exemplified by the Usenet network. Usenet, which was created back in 1979, was a network of computers (accessed via the Internet), each of which hosted the entire contents of the network. Messages were propagated between the peer computers; users connecting to any single Usenet server had access to all (or substantially all) the messages posted to each individual server. Although the users' connection to the Usenet server was of the traditional client/server nature, the relationship between the Usenet servers was definitely P2P—and presaged the cloud computing of today.

That said, not every part of the Internet is P2P in nature. With the development of the World Wide Web came a shift away from P2P back to the client/server model. On the web, each website is served up by a group of computers, and sites' visitors use client software (web browsers) to access it. Almost all content is centralized, all control is centralized, and the clients have no autonomy or control in the process.

Distributed Computing: Providing More Computing Power

One of the most important subsets of the P2P model is that of distributed computing, where idle PCs across a network or across the Internet are tapped to provide computing power for large, processor-intensive projects. It's a simple concept, all about cycle sharing between multiple computers. A personal computer, running full-out 24 hours a day, 7 days a week, is capable of tremendous computing power. Most people don't use their computers 24/7, however, so a good portion of a computer's resources go unused.
Distributed computing uses those resources. When a computer is enlisted for a distributed computing project, software is installed on the machine to run various processing activities during those periods when the PC is typically unused. The results of that spare-time processing are periodically uploaded to the distributed computing network, and combined with similar results from other PCs in the project. The result, if enough computers are involved, simulates the processing power of much larger mainframes and supercomputers—which is necessary for some very large and complex computing projects.

For example, genetic research requires vast amounts of computing power. Left to traditional means, it might take years to solve essential mathematical problems. By connecting together thousands (or millions) of individual PCs, more power is applied to the problem, and the results are obtained that much sooner.

Distributed computing dates back to 1973, when multiple computers were networked together at the Xerox PARC labs and worm software was developed to cruise through the network looking for idle resources. A more practical application of distributed computing appeared in 1988, when researchers at the DEC (Digital Equipment Corporation) System Research Center developed software that distributed the work to factor large numbers among workstations within their laboratory. By 1990, a group of about 100 users, utilizing this software, had factored a 100-digit number. By 1995, this same effort had been expanded to the web to factor a 130-digit number.

It wasn't long before distributed computing hit the Internet. The first major Internet-based distributed computing project, launched in 1997, employed thousands of personal computers to crack encryption codes.
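The chunk-and-combine pattern that all these projects share can be illustrated with a toy example. The function names are invented, and real projects hand chunks to volunteer machines over the network rather than to local threads, but the shape of the computation is the same: split one large job into independent ranges, process each separately, and sum the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def count_primes(lo, hi):
    """The chunk of work one volunteer machine would run in its idle time."""
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def distribute(total, chunks):
    """Split [2, total) into chunks, 'upload' each to a worker, combine results."""
    step = (total - 2) // chunks + 1
    ranges = [(lo, min(lo + step, total)) for lo in range(2, total, step)]
    with ThreadPoolExecutor(max_workers=chunks) as pool:
        partials = pool.map(lambda r: count_primes(*r), ranges)
    return sum(partials)  # like the project server merging uploaded results
```

Calling `distribute(100, 4)` counts the primes below 100 using four workers; because the chunks are independent, adding more workers shortens the wall-clock time without changing the answer.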
Even bigger was SETI@home, launched in May 1999, which linked together millions of individual computers to search for intelligent life in outer space.

Many distributed computing projects are conducted within large enterprises, using traditional network connections to form the distributed computing network. Other, larger, projects utilize the computers of everyday Internet users, with the computing typically taking place offline, and then uploaded once a day via traditional consumer Internet connections.

Collaborative Computing: Working as a Group

From the early days of client/server computing through the evolution of P2P, there has been a desire for multiple users to work simultaneously on the same computer-based project. This type of collaborative computing is the driving force behind cloud computing, but has been around for more than a decade.

Early group collaboration was enabled by the combination of several different P2P technologies. The goal was (and is) to enable multiple users to collaborate on group projects online, in real time. To collaborate on any project, users must first be able to talk to one another. In today's environment, this means instant messaging for text-based communication, with optional audio/telephony and video capabilities for voice and picture communication. Most collaboration systems offer the complete range of audio/video options, for full-featured multiple-user video conferencing.

In addition, users must be able to share files and have multiple users work on the same document simultaneously. Real-time whiteboarding is also common, especially in corporate and education environments. Early group collaboration systems ranged from the relatively simple (Lotus Notes and Microsoft NetMeeting) to the extremely complex (the building-block architecture of the Groove Networks system).
Most were targeted at large corporations, and limited to operation over the companies' private networks.

Cloud Computing: The Next Step in Collaboration

With the growth of the Internet, there was no need to limit group collaboration to a single enterprise's network environment. Users from multiple locations within a corporation, and from multiple organizations, desired to collaborate on projects that crossed company and geographic boundaries. To do this, projects had to be housed in the "cloud" of the Internet, and accessed from any Internet-enabled location.

The concept of cloud-based documents and services took wing with the development of large server farms, such as those run by Google and other search companies. Google already had a collection of servers that it used to power its massive search engine; why not use that same computing power to drive a collection of web-based applications—and, in the process, provide a new level of Internet-based group collaboration?

That's exactly what happened, although Google wasn't the only company offering cloud computing solutions. On the infrastructure side, IBM, Sun Microsystems, and other big iron providers are offering the hardware necessary to build cloud networks. On the software side, dozens of companies are developing cloud-based applications and storage services.

Today, people are using cloud services and storage to create, share, find, and organize information of all different types. Tomorrow, this functionality will be available not only to computer users, but to users of any device that connects to the Internet—mobile phones, portable music players, even automobiles and home television sets.

The Network Is the Computer: How Cloud Computing Works

Sun Microsystems's slogan is "The network is the computer," and that's as good a description as any of how cloud computing works. In essence, a network of computers functions as a single computer to serve data and applications to users over the Internet.
The network exists in the "cloud" of IP addresses that we know as the Internet, offers massive computing power and storage capability, and enables widescale group collaboration. But that's the simple explanation. Let's take a look at how cloud computing works in more detail.

Understanding Cloud Architecture

The key to cloud computing is the "cloud"—a massive network of servers or even individual PCs interconnected in a grid. These computers run in parallel, combining the resources of each to generate supercomputing-like power.

What, exactly, is the "cloud"? Put simply, the cloud is a collection of computers and servers that are publicly accessible via the Internet. This hardware is typically owned and operated by a third party on a consolidated basis in one or more data center locations. The machines can run any combination of operating systems; it's the processing power of the machines that matters, not what their desktops look like.

As shown in Figure 1.1, individual users connect to the cloud from their own personal computers or portable devices, over the Internet. To these individual users, the cloud is seen as a single application, device, or document. The hardware in the cloud (and the operating system that manages the hardware connections) is invisible.

FIGURE 1.1 How users connect to the cloud.

This cloud architecture is deceptively simple, although it does require some intelligent management to connect all those computers together and assign task processing to multitudes of users. As you can see in Figure 1.2, it all starts with the front-end interface seen by individual users. This is how users select a task or service (either starting an application or opening a document). The user's request then gets passed to the system management, which finds the correct resources and then calls the system's appropriate provisioning services.
These services carve out the necessary resources in the cloud, launch the appropriate web application, and either create or open the requested document. After the web application is launched, the system's monitoring and metering functions track the usage of the cloud so that resources are apportioned and attributed to the proper user(s).

FIGURE 1.2 The architecture behind a cloud computing system.

As you can see, key to the notion of cloud computing is the automation of many management tasks. The system isn't a cloud if it requires human management to allocate processes to resources. What you have in this instance is merely a twenty-first-century version of old-fashioned data center–based client/server computing. For the system to attain cloud status, manual management must be replaced by automated processes.

Understanding Cloud Storage

One of the primary uses of cloud computing is for data storage. With cloud storage, data is stored on multiple third-party servers, rather than on the dedicated servers used in traditional networked data storage.

When storing data, the user sees a virtual server—that is, it appears as if the data is stored in a particular place with a specific name. But that place doesn't exist in reality. It's just a pseudonym used to reference virtual space carved out of the cloud. In reality, the user's data could be stored on any one or more of the computers used to create the cloud. The actual storage location may even differ from day to day or minute to minute, as the cloud dynamically manages available storage space. But even though the location is virtual, the user sees a "static" location for his data—and can actually manage his storage space as if it were connected to his own PC.

Cloud storage has both financial and security-related advantages. Financially, virtual resources in the cloud are typically cheaper than dedicated physical resources connected to a personal computer or network.
As for security, data stored in the cloud is secure from accidental erasure or hardware crashes, because it is duplicated across multiple physical machines; since multiple copies of the data are kept continually, the cloud continues to function as normal even if one or more machines go offline. If one machine crashes, the data remains available on the other machines in the cloud.

Understanding Cloud Services

Any web-based application or service offered via cloud computing is called a cloud service. Cloud services can include anything from calendar and contact applications to word processing and presentations. Almost all large computing companies today, from Google to Amazon to Microsoft, are developing various types of cloud services.

With a cloud service, the application itself is hosted in the cloud. An individual user runs the application over the Internet, typically within a web browser. The browser accesses the cloud service and an instance of the application is opened within the browser window. Once launched, the web-based application operates and behaves like a standard desktop application. The only difference is that the application and the working documents remain on the host's cloud servers.

Cloud services offer many advantages. If the user's PC crashes, it doesn't affect either the host application or the open document; both remain unaffected in the cloud. In addition, an individual user can access his applications and documents from any location on any PC. He doesn't have to have a copy of every app and file with him when he moves from office to home to remote location. Finally, because documents are hosted in the cloud, multiple users can collaborate on the same document in real time, using any available Internet connection. Documents are no longer machine-centric. Instead, they're always available to any authorized user.

Companies in the Cloud: Cloud Computing Today

We're currently in the early days of the cloud computing revolution.
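The request flow described under "Understanding Cloud Architecture" — front-end request, system management, provisioning, then metering — can be sketched as a minimal program. All class, method, and server names here are invented for illustration; a real provisioning service would schedule against live capacity rather than pop from a list.

```python
class MeteringLog:
    """Records which user consumed which resource (the metering function)."""
    def __init__(self):
        self.records = []

    def record(self, user, resource):
        self.records.append((user, resource))

class CloudSystem:
    def __init__(self):
        self.metering = MeteringLog()
        self.free_servers = ["s1", "s2", "s3"]  # pretend pool of capacity

    def provision(self):
        # "carve out the necessary resources in the cloud"
        return self.free_servers.pop()

    def handle_request(self, user, task):
        server = self.provision()            # system management finds resources
        app = f"{task}@{server}"             # launch the web application there
        self.metering.record(user, server)   # attribute usage to the proper user
        return app

cloud = CloudSystem()
app = cloud.handle_request("alice", "word-processor")
```

The point of the sketch is the automation: no human decides which server runs the task, which is exactly the property the chapter says separates a cloud from an old-fashioned data center.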
Although many cloud services are available today, more and more interesting applications are still in development. That said, cloud computing today is attracting the best and biggest companies from across the computing industry, all of whom hope to establish profitable business models based in the cloud.

As discussed earlier in this chapter, perhaps the most noticeable company currently embracing the cloud computing model is Google. As you'll see throughout this book, Google offers a powerful collection of web-based applications, all served via its cloud architecture. Whether you want cloud-based word processing (Google Docs), presentation software (Google Presentations), email (Gmail), or calendar/scheduling functionality (Google Calendar), Google has an offering. And best of all, Google is adept at getting all of its web-based applications to interface with each other; its cloud services are interconnected to the user's benefit.

Other major companies are also involved in the development of cloud services. Microsoft, for example, offers its Windows Live suite of web-based applications, as well as the Live Mesh initiative that promises to link together all types of devices, data, and applications in a common cloud-based platform. Amazon has its Elastic Compute Cloud (EC2), a web service that provides cloud-based resizable computing capacity for application developers. IBM has established a Cloud Computing Center to deliver cloud services and research to clients. And numerous smaller companies have launched their own web-based applications, primarily (but not exclusively) to exploit the collaborative nature of cloud services.

As we work through this book, we'll examine many of these companies and their offerings. All you need to know for now is that there's a big future in cloud computing—and everybody's jumping on the bandwagon.

Why Cloud Computing Matters

Why is cloud computing important?
There are many implications of cloud technology, for both developers and end users.

For developers, cloud computing provides increased amounts of storage and processing power to run the applications they develop. Cloud computing also enables new ways to access information, process and analyze data, and connect people and resources from any location anywhere in the world. In essence, it takes the lid off the box; with cloud computing, developers are no longer boxed in by physical constraints.

For end users, cloud computing offers all those benefits and more. A person using a web-based application isn't physically bound to a single PC, location, or network. His applications and documents can be accessed wherever he is, whenever he wants. Gone is the fear of losing data if a computer crashes. Documents hosted in the cloud always exist, no matter what happens to the user's machine. And then there's the benefit of group collaboration. Users from around the world can collaborate on the same documents, applications, and projects, in real time. It's a whole new world of collaborative computing, all enabled by the notion of cloud computing.

And cloud computing does all this at lower costs, because the cloud enables more efficient sharing of resources than does traditional network computing. With cloud computing, hardware doesn't have to be physically adjacent to a firm's office or data center. Cloud infrastructure can be located anywhere, including and especially areas with lower real estate and electricity costs. In addition, IT departments don't have to engineer for peak-load capacity, because the peak load can be spread out among the external assets in the cloud. And, because additional cloud resources are always at the ready, companies no longer have to purchase assets for infrequent intensive computing tasks. If you need more processing power, it's always there in the cloud—and accessible on a cost-efficient basis.
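Before the chapter closes, the "virtual server" idea from the "Understanding Cloud Storage" section is worth a small sketch: the user addresses one stable name, while the cloud is free to move the bytes between physical machines behind that name. All names here (VirtualStore, rack1, and so on) are hypothetical.

```python
class VirtualStore:
    def __init__(self):
        self.placement = {}  # virtual name -> physical machine (the pseudonym)
        self.disks = {}      # physical machine -> {name: data}

    def put(self, name, data, machine):
        self.placement[name] = machine
        self.disks.setdefault(machine, {})[name] = data

    def get(self, name):
        # the caller never learns which machine actually served the bytes
        return self.disks[self.placement[name]][name]

    def migrate(self, name, new_machine):
        # the cloud rebalancing storage; the user's name is unaffected
        data = self.get(name)
        del self.disks[self.placement[name]][name]
        self.put(name, data, new_machine)

store = VirtualStore()
store.put("report.doc", b"q3 numbers", "rack1")
store.migrate("report.doc", "rack7")
# the user's "static" name still resolves after the move
```

The indirection table is the whole trick: because every access goes through `placement`, the physical location can change from minute to minute, exactly as the chapter describes, without the user ever noticing.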
English Reference and Translation

Linux: An Operating System for the Internet Age

For many people, the fact that Linux served as the main operating system for the huge workstation farm that produced the special effects for Titanic would already count as an impressive showing. But for Linux, this is only one piece of news among many. Recently, more and more vendors have announced support for Linux, and users' enthusiasm for it is running higher than ever. So what exactly is the appeal of this operating system, free for just over seven years, that it has won the favor of the masses of users and of such major software and hardware vendors as Oracle, Informix, HP, Sybase, Corel, Intel, Netscape, and Dell?

1. The Background and Characteristics of Linux

Linux is "free" software: users can obtain the program and its source code at no cost, and can use them freely, including modifying and copying them. It is a product of the Internet age: numerous technical staff completed its research and development collaboratively over the Internet, countless users have tested it and reported faults, and users can conveniently add extensions of their own. As the most outstanding example of free software, Linux has the following characteristics:

(1) It fully follows the POSIX standard and is a network operating system supporting all the characteristics of AT&T and BSD Unix. Because it inherits Unix's outstanding design philosophy and has a clean, robust, efficient, and stable kernel, all of whose key code was written by Linus Torvalds and other outstanding programmers without any Unix code from AT&T or Berkeley, Linux is not Unix, but Linux and Unix are fully compatible.

(2) It is a true multitasking, multi-user system with built-in network support, and can interoperate seamlessly with NetWare, Windows NT, OS/2, Unix, and so on. In comparative evaluations of networking efficiency among the various Unix systems, it tested fastest.
It simultaneously supports many kinds of file systems, such as FAT16, FAT32, NTFS, Ext2FS, and ISO9660.

(3) It runs on many kinds of hardware platforms, including processors such as Alpha, SPARC, PowerPC, and MIPS, and support for all kinds of new peripheral hardware arrives rapidly from the many programmers distributed around the globe.

(4) It makes low demands on hardware and achieves very good performance on lower-end machines. Particularly worth mentioning is Linux's outstanding stability: its uptimes are often counted in years.

2. Main Applications of Linux

At present, the applications of Linux mainly include:

(1) Internet/Intranet: This is the area where Linux is most used at present. It can provide all the usual Internet services, including Web servers, FTP servers, Gopher servers, SMTP/POP3 mail servers, proxy/cache servers, DNS servers, and more. The Linux kernel supports IP aliasing, PPP, and IP tunneling; these functions can be used to set up virtual hosts, virtual services, VPNs (virtual private networks), and so on. The Apache Web server, which runs mainly on Linux, held a 49% market share in 1998, far exceeding the combined share of several large companies such as Microsoft and Netscape.

(2) Because Linux has outstanding networking ability, it can be used for large-scale distributed computing, for instance animation production, scientific computation, and database and file servers.

(3) As a full Unix implementation that runs on low-cost platforms, it is applied extensively in teaching and research at universities and colleges of all levels; the Mexican government, for example, has already announced that primary and secondary schools across the country will deploy Linux and use it to provide Internet service for students.

(4) Desktop and office applications.
At present the number of users in this area still falls far short of Microsoft Windows. The reason lies not only in the fact that the quantity of desktop application software for Linux is far smaller than for Windows, but also in the fact that the nature of free software leaves it with almost no advertising support (StarOffice, for instance, is functionally not inferior to MS Office, yet few people actually know of it).

3. Can Linux become a major operating system?

Facing growing pressure from users, more and more commercial companies are porting their applications to the Linux platform. The more important events of 1998 were as follows: ① Compaq and HP decided to preinstall Linux on their servers at customers' request, and IBM and Dell also promised to offer customized Linux systems to users. ② Lotus announced that the next edition of Notes would include a dedicated Linux edition. ③ Corel ported its famous WordPerfect to Linux and released it free of charge; Corel also plans to move its other graphics products entirely to the Linux platform. ④ The major database producers Sybase, Informix, Oracle, CA, and IBM have already ported their database products to Linux or finished beta editions, and among them Oracle and Informix also provide technical support for their products.

4. The gratifying news is that some farsighted domestic corporations have already begun working to change this situation. Stone Co. recently announced a large investment to develop an Internet/Intranet solution on the Linux platform, to launch Stone's system-integration business around it, and at the same time to set up a nationwide Linux technical-support organization, taking the lead in promoting the application and development of free software in China.
In addition, other domestic computer companies have devoted themselves to popularizing Linux-related software, hardware, and application systems. It is believed that as understanding of Linux deepens, more and more domestic enterprises will join the ranks of Linux users, and more software will be ported to the Linux platform. Meanwhile, domestic universities should take Linux as a starting point, upgrade their existing Unix course content, begin with analyzing the source code and modifying the kernel, and train a large number of senior Linux specialists, improving an operating system of our country's own. Only by truly mastering the operating system can our country's software industry escape its present passive state of laborious imitation, led by the nose by others, and fundamentally create the conditions for revitalizing the industry.

Chinese translation

Linux: The Operating System of the Internet Age

Although, for many people, using Linux as the main operating system of a huge workstation cluster to complete the special-effects production for Titanic already counts as having fully shown its talent.
1. Introduction to Objects

1.1 The progress of abstraction

All programming languages provide abstractions. It can be argued that the complexity of the problems you're able to solve is directly related to the kind and quality of abstraction.
By "kind" I mean, "What is it that you are abstracting?" Assembly language is a small abstraction of the underlying machine. Many so-called "imperative" languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language.
These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve.
The programmer must establish the association between the machine model (in the "solution space," which is the place where you're modeling that problem, such as a computer) and the model of the problem that is actually being solved (in the "problem space," which is the place where the problem exists). The effort required to perform this mapping, and the fact that it is extrinsic to the programming language, produces programs that are difficult to write and expensive to maintain, and as a side effect created the entire "programming methods" industry.

The alternative to modeling the machine is to model the problem you're trying to solve.
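The shift from machine-space to problem-space modeling is easy to illustrate in any object-oriented language. The sketch below is a Python rendering of the classic light-switch illustration; the class and method names are invented for this example, and the point is only that the program reads in the problem's own vocabulary rather than the machine's.

```python
# A hypothetical problem-space model: a Light is an element of the
# problem being solved, not of the underlying machine. Its interface
# speaks the problem's vocabulary (on, off, dim).

class Light:
    """An object representing a light in the problem space."""

    def __init__(self):
        self.is_on = False
        self.brightness = 0

    def on(self):
        self.is_on = True
        self.brightness = max(self.brightness, 1)

    def off(self):
        self.is_on = False
        self.brightness = 0

    def dim(self, level):
        # Only a lit lamp can be dimmed in this toy model.
        if self.is_on:
            self.brightness = level


lamp = Light()      # the code mirrors the problem, not the computer
lamp.on()
lamp.dim(3)
print(lamp.is_on, lamp.brightness)   # True 3
```

Nothing here mentions registers, memory, or pulses: the `Light` type extends the language toward the problem, which is exactly the abstraction step the text describes.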
Computer Science Chinese-English Literature Translation

CNC

CNC stands for Computerized Numerical Control and has been around since the early 1970s. Prior to this, it was called NC, for Numerical Control. While people in most walks of life have never heard of this term, CNC has touched almost every form of manufacturing process in one way or another. If you'll be working in manufacturing, it's likely that you'll be dealing with CNC on a regular basis.

Before CNC

While there are exceptions to this statement, CNC machines typically replace (or work in conjunction with) some existing manufacturing process. Take one of the simplest manufacturing processes, drilling holes, for example.

A drill press can of course be used to machine holes. A person can place a drill in the drill chuck that is secured in the spindle of the drill press, (manually) select the desired speed of rotation (commonly by switching belt pulleys), and activate the spindle. They then manually pull on the quill lever to drive the drill into the workpiece being machined.

As you can easily see, there is a lot of manual intervention required to use a drill press to machine holes; a person is required to do something almost every step along the way! While this manual intervention may be acceptable to manufacturing companies if only a small number of workpieces must be machined, as quantities grow, so does the likelihood of fatigue due to the tediousness of the operation. And note that we have used one of the simplest machining operations (drilling) for our example. There are more complicated machining operations that demand a much higher skill level of the person running the conventional machine tool and increase the potential for mistakes resulting in scrap workpieces.
(We commonly refer to the style of machine that CNC replaces as the conventional machine.)

By comparison, the CNC equivalent of a drill press (possibly a CNC machining center or CNC drilling and tapping center) can be programmed to perform this operation in a much more automatic fashion. Everything that the drill-press operator was doing manually will now be done by the CNC machine: placing the drill in the spindle, activating the spindle, positioning the workpiece under the drill, machining the hole, and turning off the spindle.

How CNC works

As you might already have guessed, everything that an operator would be required to do with conventional machine tools is programmable with CNC machines. Once the machine is set up and running, a CNC machine is quite simple to keep running. In fact, CNC operators tend to get quite bored during lengthy production runs because there is so little to do. Let's look at some of the specific programmable functions.

Motion control

All CNC machine types share this commonality: they all have two or more programmable directions of motion, called axes. An axis of motion can be linear (along a straight line) or rotary (along a circular path). One of the first specifications that implies a CNC machine's complexity is how many axes it has. Generally speaking, the more axes, the more complex the machine.

The axes of any CNC machine are required for the purpose of causing the motions needed for the manufacturing process. In the drilling example, these axes would position the tool over the hole to be machined (in two axes) and machine the hole (with the third axis). Axes are named with letters: common linear axis names are X, Y, and Z, and common rotary axis names are A, B, and C. These names are related to the machine's coordinate system.

Programmable accessories

A CNC machine wouldn't be very helpful if it could only move the workpiece in two or more axes. Almost all CNC machines are programmable in several other ways.
The specific CNC machine type has a lot to do with its appropriate programmable accessories. Again, any required function will be programmable on full-blown CNC machine tools. Here are some examples for one machine type (machining centers).

Automatic tool changer

Most machining centers can hold many tools in a tool magazine. When required, the needed tool can be automatically placed in the spindle for machining.

Spindle speed and activation

The spindle speed (in revolutions per minute) can be easily specified, and the spindle can be turned on in a forward or reverse direction. It can also, of course, be turned off.

Coolant

Many machining operations require coolant for lubrication and cooling purposes. Coolant can be turned on and off from within the machine cycle.

The CNC program

Think of giving any series of step-by-step instructions. A CNC program is nothing more than another kind of instruction set. It's written in a sentence-like format, and the control will execute it in sequential order, step by step.

A special set of CNC words is used to communicate what the machine is intended to do. CNC words begin with a letter address (like F for feedrate, S for spindle speed, and X, Y, and Z for axis motion). When placed together in a logical way, a group of CNC words makes up a command that resembles a sentence.

The CNC control

The CNC control interprets a CNC program and activates the series of commands in sequential order. As it reads the program, the CNC control activates the appropriate machine functions, causes axis motion, and, in general, follows the instructions given in the program.

Along with interpreting the CNC program, the CNC control has several other purposes. All current-model CNC controls allow programs to be modified (edited) if mistakes are found. The CNC control allows special verification functions (like dry run) to confirm the correctness of the CNC program.
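The word-address format described above, a letter address followed by a value, can be made concrete with a tiny parser sketch. The command text and word meanings below are illustrative only and are not tied to any particular control's dialect.

```python
# A toy parser for word-address CNC commands, for illustration only.
# Each word is a letter address (F = feedrate, S = spindle speed,
# X/Y/Z = axis positions) followed by a numeric value.

def parse_command(command):
    """Split one CNC-style command into {letter_address: value} pairs."""
    words = {}
    for word in command.split():
        letter, value = word[0], word[1:]
        words[letter] = float(value)
    return words


cmd = parse_command("X25.0 Y40.0 Z-5.0 F200 S1200")
print(cmd["X"], cmd["F"], cmd["S"])
```

A real control reads such commands line by line and, as the text says, executes each one in sequential order; the dictionary here simply shows how one "sentence" decomposes into its words.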
The CNC control allows certain important operator inputs, like tool length values, to be specified separately from the program. In general, the CNC control allows the functions of the machine to be manipulated.

What is a CAM system?

For simple applications (like drilling holes), the CNC program can be developed manually. That is, a programmer will sit down to write the program armed only with pencil, paper, and calculator. Again, for simple applications, this may be the very best way to develop CNC programs.

As applications get more complicated, and especially when new programs are required on a regular basis, writing programs manually becomes much more difficult. To simplify the programming process, a computer-aided manufacturing (CAM) system can be used. A CAM system is a software program, commonly running on a PC, that helps the CNC programmer with the programming process. Generally speaking, a CAM system takes the tediousness and drudgery out of programming.

In many companies the CAM system will work with the computer-aided design (CAD) drawing developed by the company's design-engineering department. This eliminates the need to redefine the workpiece configuration to the CAM system. The CNC programmer will simply specify the machining operations to be performed, and the CAM system will create the CNC program (much like the one a manual programmer would have written) automatically.

What is a DNC system?

Once the program is developed (either manually or with a CAM system), it must be loaded into the CNC control. Though the setup person could type the program right into the control, this would be like using the CNC machine as a very expensive typewriter. If the CNC program is developed with the help of a CAM system, it is already in the form of a text file. If the program is written manually, it can be typed into any computer using a common word processor (though most companies use a special CNC text editor for this purpose).
Either way, the program is in the form of a text file that can be transferred right into the CNC machine. A distributive numerical control (DNC) system is used for this purpose. A DNC system is nothing more than a computer that is networked with one or more CNC machines. Until only recently, a rather crude serial communications protocol (RS-232C) had to be used for transferring programs. Newer controls have more current communications capabilities and can be networked in more conventional ways (Ethernet, etc.). Regardless of the method, the CNC program must of course be loaded into the CNC machine before it can be run.

When numerical control is performed under computer supervision, it is called Computer Numerical Control (CNC). Computers are the control units of CNC machines; they are built in or linked to the machines via communications channels. When a programmer inputs information into the program, by tape or other means, the computer calculates all the data necessary to get the job done. Today's systems have computers controlling the data, so they are called computer numerically controlled machines. For both NC and CNC systems the working principles are the same; only the way in which the execution is controlled differs. Normally, newer systems are faster, more powerful, and more versatile.

The Construction of CNC Machines

CNC machine tools are complex assemblies. In general, however, any CNC machine tool consists of the following units: computers, control systems, drive motors, and tool changers. Given this construction, CNC machines work in the following manner:

(1) The CNC machine language, a programming language of binary notation used inside the computer, is not written directly by the programmer.

(2) When the operator starts the execution cycle, the computer translates binary codes into electronic pulses that are automatically sent to the machine's power units.
The control units compare the number of pulses sent and received.

(3) When the motors receive each pulse, they automatically transform the pulses into rotations that drive the spindle and lead screw, causing the spindle rotation and slide or table movement. The part on the milling-machine table, or the tool in the lathe turret, is driven to the position specified by the program.

1. Computers

As with all computers, the CNC machine's computer works on the binary principle, using only two characters, 1 and 0, and processing information as precisely timed impulses from its circuits. There are two states: a state with voltage, 1, and a state without voltage, 0. The series of ones and zeroes that are the only states the computer distinguishes are called machine language, and it is the only language the computer understands. When creating the program, the programmer does not care about the machine language; he or she simply uses a list of codes and keys in the meaningful information. Special built-in software compiles the program into machine language, and the machine moves the tool by its servomotors.

However, the programmability of the machine depends on whether there is a computer in the machine's control. If there is a minicomputer, then when programming, say, a radius (a rather simple task), the computer will calculate all the points on the tool path. On a machine without a minicomputer, this may prove to be a tedious task, since the programmer must calculate all the points of intersection on the tool path. Modern CNC machines use 32-bit processors in their computers, which allow fast and accurate processing of information.

2. Control systems

There are two types of control systems on NC/CNC machines: the open loop and the closed loop. The type of control loop used determines the overall accuracy of the machine. The open-loop control system does not provide positioning feedback to the control unit.
The movement pulses are sent out by the control, and they are received by a special type of servomotor called a stepper motor. The number of pulses that the control sends to the stepper motor controls the amount of rotation of the motor; the stepper motor then proceeds with the next movement command. Since this control system only counts pulses and cannot identify discrepancies in positioning, the machine will continue with the inaccuracy until somebody finds the error. The open-loop control can be used in applications in which there is no change in load conditions, such as an NC drilling machine. The advantage of the open-loop control system is that it is less expensive, since it does not require the additional hardware and electronics needed for positioning feedback. The disadvantage is the difficulty of detecting a positioning error.

In the closed-loop control system, the electronic movement pulses are sent from the control to the servomotor, enabling the motor to rotate with each pulse. The movements are detected and counted by a feedback device called a transducer. With each step of movement, the transducer sends a signal back to the control, which compares the current position of the driven axis with the programmed position. When the number of pulses sent and received matches, the control starts sending out pulses for the next movement. Closed-loop systems are very accurate. Most have automatic compensation for error, since the feedback device indicates the error and the control makes the necessary adjustments to bring the slide back to position. They use AC, DC, or hydraulic servomotors.

Position measurement in NC machines can be accomplished through direct or indirect methods. In direct measuring systems, a sensing device reads a graduated scale on the machine table or slide for linear movement.
This system is more accurate because the scale is built into the machine, and backlash (the play between two adjacent mating gear teeth) in the mechanisms is not significant. In indirect measuring systems, rotary encoders or resolvers convert rotary movement into translational movement. In this system, backlash can significantly affect measurement accuracy. Position-feedback mechanisms use various sensors based mainly on magnetic and photoelectric principles.

3. Drive Motors

The drive motors control the machine slide movement on NC/CNC equipment. They come in four basic types: stepper motors, DC servomotors, AC servomotors, and fluid servomotors.

Stepper motors convert a digital pulse generated by the microcomputer unit (MCU) into a small step rotation. Stepper motors have a certain number of steps that they can travel; the number of pulses that the MCU sends to the stepper motor controls the amount of rotation of the motor. Stepper motors are mostly used in applications where low torque is required. Stepper motors are used in open-loop control systems, while AC, DC, or hydraulic servomotors are used in closed-loop control systems.

Direct-current (DC) servomotors are variable-speed motors that rotate in response to the applied voltage. They are used to drive a lead screw and gear mechanism. DC servomotors provide higher torque output than stepper motors.

Alternating-current (AC) servomotors are controlled by varying the voltage frequency to control speed. They can develop more power than a DC servomotor. They too are used to drive a lead screw and gear mechanism.

Fluid or hydraulic servomotors are also variable-speed motors. They are able to produce more power (or, in the case of pneumatic motors, more speed) than electric servomotors. A hydraulic pump provides energy to valves that are controlled by the MCU.

4. Tool Changers

Most of the time, several different cutting tools are used to produce a part.
The tools must be replaced quickly for the next machining operation. For this reason, the majority of NC/CNC machine tools are equipped with automatic tool changers, such as magazines on machining centers and turrets on turning centers. Typically, an automatic tool changer grips the tool in the spindle, pulls it out, and replaces it with another tool. On most machines with automatic tool changers, the turret or magazine can rotate in either direction, forward or reverse.

Tool changers may be equipped for either random or sequential selection. In random tool selection, there is no specific pattern to the selection. On a machining center, when the program calls for a tool, it is automatically indexed into the waiting position, where it can be retrieved by the tool-handling device. On a turning center, the turret automatically rotates, bringing the tool into position.

While the specific intention and application of CNC machines vary from one machine type to another, all forms of CNC have common benefits. Here are but a few of the more important benefits offered by CNC equipment.

The first benefit offered by all forms of CNC machine tools is improved automation. The operator intervention related to producing workpieces can be reduced or eliminated. Many CNC machines can run unattended during their entire machining cycle, freeing the operator to do other tasks. This gives the CNC user several side benefits, including reduced operator fatigue, fewer mistakes caused by human error, and consistent and predictable machining time for each workpiece. Since the machine runs under program control, the skill level required of the CNC operator (related to basic machining practice) is also reduced compared with that of a machinist producing workpieces on conventional machine tools.

The second major benefit of CNC technology is consistent and accurate workpieces. Today's CNC machines boast almost unbelievable accuracy and repeatability specifications.
This means that once a program is verified, two, ten, or one thousand identical workpieces can be easily produced with precision and consistency.

A third benefit offered by most forms of CNC machine tools is flexibility. Since these machines are run from programs, running a different workpiece is almost as easy as loading a different program. Once a program has been verified and executed for one production run, it can easily be recalled the next time the workpiece is to be run.

This leads to yet another benefit: fast changeovers. Since these machines are very easy to set up and run, and since programs can be easily loaded, they allow very short setup times. This is imperative with today's just-in-time (JIT) production requirements.

Motion control: the heart of CNC

The most basic function of any CNC machine is automatic, precise, and consistent motion control. Rather than applying completely mechanical devices to cause motion, as is required on most conventional machine tools, CNC machines allow motion control in a revolutionary manner. All forms of CNC equipment have two or more directions of motion, called axes, which can be precisely and automatically positioned along their lengths of travel. The two most common axis types are linear (driven along a straight path) and rotary (driven along a circular path).

Instead of causing motion by turning cranks and handwheels, as is required on conventional machine tools, CNC machines allow motions to be commanded through programmed commands. Generally speaking, the motion rate (feedrate) is programmable with almost all CNC machine tools. A CNC command executed within the control tells the drive motor to rotate a precise number of times. The rotation of the drive motor in turn rotates the ball screw, and the ball screw drives the linear axis (slide). A feedback device (a linear scale) on the slide allows the control to confirm that the commanded number of rotations has taken place.

Chinese translation

CNC Technology

CNC stands for Computer(ized) Numerical Control and has received attention since the 1970s.
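The command-to-motion chain just described (step pulses turn the drive motor, the motor turns the ball screw, and the screw moves the slide) reduces to simple arithmetic. The sketch below is illustrative only; the steps-per-revolution and screw-pitch values are invented, not taken from any real machine.

```python
# Illustrative motion arithmetic: how many pulses must the control send
# to move a slide a given distance? All constants are assumed values.

STEPS_PER_REV = 200      # step pulses per motor revolution (assumed)
SCREW_PITCH_MM = 5.0     # linear travel per ball-screw revolution (assumed)


def pulses_for_travel(distance_mm):
    """Number of step pulses needed to move the slide distance_mm."""
    revolutions = distance_mm / SCREW_PITCH_MM
    return round(revolutions * STEPS_PER_REV)


def travel_for_pulses(pulses):
    """Linear travel (in mm) produced by a given number of step pulses."""
    return pulses / STEPS_PER_REV * SCREW_PITCH_MM


print(pulses_for_travel(25.0))   # 1000 pulses for a 25 mm move
print(travel_for_pulses(1))      # 0.025 mm per pulse: the axis resolution
```

In an open-loop drive the control simply counts out these pulses; in a closed-loop drive the linear scale or transducer reports the travel back so the control can confirm the same arithmetic actually happened.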
Title: Programming Overlay Networks with Overlay Sockets

The emergence of application-layer overlay networks has inspired the development of new network services and applications. Research on overlay networks has focused on the design of protocols to maintain and forward data in an overlay network; however, less attention has been given to the software development process of building application programs in such an environment. Clearly, the complexity of overlay network protocols calls for suitable application programming interfaces (APIs) and abstractions that do not require detailed knowledge of the overlay protocol and thereby simplify the task of the application programmer. In this paper, we present the concept of an overlay socket as a new programming abstraction that serves as the end point of communication in an overlay network. The overlay socket provides a socket-based API that is independent of the chosen overlay topology and can be configured to work for different overlay topologies. The overlay socket can support application data transfer over TCP, UDP, or other transport protocols. This paper describes the design of the overlay socket and discusses API and configuration options.

1 Introduction

Application-layer overlay networks [5, 9, 13, 17] provide flexible platforms for developing new network services [1, 10, 11, 14, 18-20] without requiring changes to the network-layer infrastructure. Members of an overlay network, which can be hosts, routers, servers, or applications, organize themselves to form a logical network topology, and communicate only with their respective neighbors in the overlay topology. A member of an overlay network sends and receives application data, and also forwards data intended for other members. This paper addresses application development in overlay networks.
We use the term overlay network programming to refer to the software development process of building application programs that communicate with one another in an application-layer overlay network. (This work is supported in part by a grant from the National Science Foundation.) The diversity and complexity of building and maintaining overlay networks make it impractical to assume that application developers can be concerned with the complexity of managing the participation of an application in a specific overlay network topology.

We present a software module, called the overlay socket, that intends to simplify the task of overlay network programming. The design of the overlay socket pursues the following set of objectives. First, the application programming interface (API) of the overlay socket does not require that an application programmer have knowledge of the overlay network topology. Second, the overlay socket is designed to accommodate different overlay network topologies; switching to a different overlay network topology is done by modifying parameters in a configuration file. Third, the overlay socket, which operates at the application layer, can accommodate different types of transport-layer protocols. This is accomplished by using network adapters that interface to the underlying transport-layer network and perform encapsulation and de-encapsulation of messages exchanged by the overlay socket. Currently available network adapters are TCP, UDP, and UDP multicast. Fourth, the overlay socket provides mechanisms for bootstrapping new overlay networks.

In this paper, we provide an overview of the overlay socket design and discuss overlay network programming with the overlay socket. The overlay socket has been implemented in Java as part of the HyperCast 2.0 software distribution [12]. The software has been used for various overlay applications, and has been tested in both local-area and wide-area settings.
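The network-adapter idea described above, in which the overlay socket delegates encapsulation and transport details to a pluggable component, can be sketched roughly as follows. HyperCast itself is written in Java; the Python sketch below uses invented class names and an invented length-prefix framing scheme purely to illustrate the pattern, and does not reproduce the actual HyperCast API.

```python
# Rough sketch of the network-adapter pattern: the overlay socket talks
# to an abstract adapter; each concrete adapter encapsulates messages
# for one transport. All names and framing here are invented.

from abc import ABC, abstractmethod


class NetworkAdapter(ABC):
    """Interface between an overlay socket and one transport protocol."""

    @abstractmethod
    def encapsulate(self, payload: bytes) -> bytes:
        """Wrap an overlay message for transmission."""

    @abstractmethod
    def deencapsulate(self, packet: bytes) -> bytes:
        """Recover the overlay message from a received packet."""


class UDPAdapter(NetworkAdapter):
    """Frames each overlay message with a 2-byte length header (invented)."""

    def encapsulate(self, payload):
        return len(payload).to_bytes(2, "big") + payload

    def deencapsulate(self, packet):
        length = int.from_bytes(packet[:2], "big")
        return packet[2:2 + length]


adapter = UDPAdapter()
wire = adapter.encapsulate(b"hello overlay")
print(adapter.deencapsulate(wire))   # b'hello overlay'
```

Because the overlay socket only ever sees the abstract interface, swapping UDP for TCP or UDP multicast is a configuration change rather than a code change, which is the point of the design objective above.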
The HyperCast 2.0 software implements the overlay topologies described in [15] and [16]. This paper highlights important issues of the overlay socket; additional information can be found in the design documentation available from [12].

Several studies before us have addressed overlay network programming issues. Even early overlay network proposals, such as Yoid [9], Scribe [4], and Scattercast [6], presented APIs that aspire to achieve independence of the API from the overlay network topology used. In particular, Yoid and Scattercast use a socket-like API; however, these APIs do not address issues that arise when the same API is used by different overlay network topologies. Several works on application-layer multicast overlays integrate the application program with the software responsible for maintaining the overlay network, without explicitly providing general-purpose APIs. These include Narada [5], Overcast [13], ALMI [17], and NICE [2]. A recent study [8] has proposed a common API for the class of so-called structured overlays, which includes Chord [19], CAN [18], Bayeux [20], and other overlays that were originally motivated by distributed hash tables. Our work has a different emphasis than [8], since we assume a scenario where an application programmer must work with several, possibly fundamentally different, overlay network topologies and different transmission modes (UDP, TCP), and therefore needs mechanisms that make it easy to change the configuration of the underlying overlay network.

Fig. 1. The overlay network is a collection of overlay sockets.

Fig. 2. Data forwarding in overlay networks: (a) multicast, with the sender as root; (b) unicast, with the receiver as root.

The rest of the paper is organized as follows.
In Section 2 we introduce concepts, abstractions, and terminology needed for the discussion of the overlay socket. In Section 3 we present the design of the overlay socket and discuss its components. In Section 4 we show how to write programs using the overlay socket. We present brief conclusions in Section 5.

2 Basic Concepts

An overlay socket is an endpoint for communication in an overlay network, and an overlay network is seen as a collection of overlay sockets that self-organize using an overlay protocol (see Figure 1). An overlay socket offers to an application programmer a Berkeley socket-style API [3] for sending and receiving data over an overlay network. Each overlay socket executes an overlay protocol that is responsible for maintaining the membership of the socket in the overlay network topology.

Each overlay socket has a logical address and a physical address in the overlay network. The logical address depends on the type of overlay protocol used. In the overlay protocols currently implemented in HyperCast 2.0, the logical addresses are 32-bit integers or (x, y) coordinates, where x and y are positive 32-bit integers. The physical address is a transport-layer address where overlay sockets receive messages from the overlay network. On the Internet, the physical address is an IP address and a TCP or UDP port number. Application programs that use overlay sockets work only with logical addresses, and do not see the physical addresses of overlay nodes.

When an overlay socket is created, the socket is configured with a set of configuration parameters, called attributes. The application program can obtain the attributes from a configuration file, or it can download the attributes from a server. The configuration file specifies the type of overlay protocol and the type of transport protocol to be used, but also more detailed information such as the size of internal buffers and the values of protocol-specific timers.
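The two kinds of logical addresses described above, and the physical address that applications never see, can be pictured with a small sketch. The type names below are purely illustrative; they are not part of the HyperCast API.

```java
// Illustrative sketch (not HyperCast types): logical addresses visible to
// the application vs. the transport-layer physical address hidden in the socket.
public class Addresses {

    // Logical address in a one-dimensional overlay: a 32-bit integer.
    public record IntAddress(int id) { }

    // Logical address in a coordinate-based overlay (e.g. a Delaunay
    // triangulation): positive 32-bit x/y coordinates.
    public record CoordAddress(int x, int y) {
        public CoordAddress {
            if (x <= 0 || y <= 0) {
                throw new IllegalArgumentException("coordinates must be positive");
            }
        }
    }

    // The physical address is a transport-layer address: IP plus TCP/UDP port.
    // Application programs work only with logical addresses, never this type.
    public record PhysicalAddress(String ip, int port) { }

    public static void main(String[] args) {
        IntAddress a = new IntAddress(42);
        CoordAddress b = new CoordAddress(10, 20);
        PhysicalAddress p = new PhysicalAddress("128.143.71.50", 8081);
        System.out.println(a + " " + b + " (hidden: " + p + ")");
    }
}
```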
The most important attribute is the overlay identifier (overlay ID), which is used as a global identifier for an overlay network and which can be used as a key to access the other attributes of the overlay network. Each new overlay ID corresponds to the creation of a new overlay network.

Overlay sockets exchange two types of messages: protocol messages and application messages. Protocol messages are the messages of the overlay protocol that maintain the overlay topology. Application messages contain application data that is encapsulated in an overlay message header. An application message uses logical addresses in the header to identify the source and, for unicast, the destination of the message. If an overlay socket receives an application message from one of its neighbors in the overlay network, it determines whether the message must be forwarded to other overlay sockets and whether the message needs to be passed to the local application.

The transmission modes currently supported by the overlay sockets are unicast and multicast. In multicast, all members in the overlay network are receivers. In both unicast and multicast, the common abstraction for data forwarding is that of passing data in spanning trees that are embedded in the overlay topology. For example, a multicast message is transmitted downstream in a spanning tree that has the sender of the multicast message as the root (see Figure 2(a)). When an overlay socket receives a multicast message, it forwards the message to all of its downstream neighbors (children) in the tree, and passes the message to the local application program. A unicast message is transmitted upstream in a tree with the receiver of the message as the root (see Figure 2(b)). An overlay socket that receives a unicast message forwards the message to its upstream neighbor (parent) in the tree that has the destination as the root.
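The forwarding rule above can be sketched in Java over a hard-coded toy tree. A real overlay node computes its parent and children from the overlay topology (hypercube, Delaunay triangulation), not from a table, and all names here are hypothetical rather than HyperCast API.

```java
import java.util.List;

// Sketch of spanning-tree forwarding: multicast goes downstream (to children),
// unicast goes upstream (to the parent) toward the root of the tree.
public class Forwarding {

    // Toy spanning tree rooted at node 0:
    //        0
    //       / \
    //      1   2
    //     / \
    //    3   4
    public static int parentOf(int node) {
        return switch (node) {
            case 1, 2 -> 0;
            case 3, 4 -> 1;
            default -> -1; // the root has no parent
        };
    }

    public static List<Integer> childrenOf(int node) {
        return switch (node) {
            case 0 -> List.of(1, 2);
            case 1 -> List.of(3, 4);
            default -> List.of();
        };
    }

    // Multicast: forward to all children in the tree rooted at the sender,
    // and (not shown) also deliver the message to the local application.
    public static List<Integer> multicastNextHops(int self) {
        return childrenOf(self);
    }

    // Unicast: forward to the parent in the tree rooted at the destination.
    // In this toy example the destination is always the root, node 0.
    public static int unicastNextHop(int self) {
        return parentOf(self);
    }
}
```

For example, a multicast from node 1 is forwarded to nodes 3 and 4, while a unicast from node 3 toward node 0 climbs through node 1.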
An overlay socket makes forwarding decisions locally, using only the logical addresses of its neighbors and the logical address of the root of the tree. Hence, there is a requirement that each overlay socket can locally compute its parent and its children in a tree with respect to a root node. This requirement is satisfied by many overlay network topologies, including [15, 16, 18-20].

3 The Components of an Overlay Socket

An overlay socket consists of a collection of components that are configured when the overlay socket is created, using the supplied set of attributes. These components include the overlay protocol, which helps to build and maintain the overlay network topology, a component that processes application data, and interfaces to a transport-layer network. The main components of an overlay socket, as illustrated in Figure 3, are as follows.

Fig. 3. Components of an overlay socket: forwarding engine, overlay node with node adapter, socket adapter, application receive and transmit buffers, application programming interface, and statistics interface.

The overlay node implements an overlay protocol that establishes and maintains the overlay network topology. The overlay node sends and receives overlay protocol messages, and maintains a set of timers. The overlay node is the only component of an overlay socket that is aware of the overlay topology. In the HyperCast 2.0 software, there are overlay nodes that build a logical hypercube [15] and a logical Delaunay triangulation [16].

The forwarding engine performs the functions of an application-layer router: it sends, receives, and forwards formatted application-layer messages in the overlay network. The forwarding engine communicates with the overlay node to query next-hop routing information for application messages.
The forwarding decision is made using the logical addresses of the overlay nodes.

Each overlay socket has two network adapters, each of which provides an interface to a transport-layer protocol, such as TCP or UDP. The node adapter serves as the interface for sending and receiving overlay protocol messages, and the socket adapter serves as the interface for application messages. Each adapter has a transport-level address, which, in the case of the Internet, consists of an IP address and a UDP or TCP port number. Currently, there are three different types of adapters: for TCP, for UDP, and for UDP multicast. Using two adapters completely separates the handling of the messages that maintain the overlay protocol from the messages that transport application data.

The application receive buffer and application transmit buffer can temporarily store messages that, respectively, have been received by the socket but not yet delivered to the application, or that have been released by the application program but not yet transmitted by the socket. The application transmit buffer can play a role when messages cannot be transmitted due to rate-control or congestion-control constraints. The application transmit buffer is not implemented in the HyperCast 2.0 software.

Each overlay socket has two external interfaces. The application programming interface (API) of the socket offers application programs the ability to join and leave existing overlays, to send data to other members of the overlay network, and to receive data from the overlay network. The statistics interface of the overlay socket provides access to status information of components of the overlay socket, and is used for monitoring and management of an overlay socket. Note in Figure 3 that some components of the overlay socket also have interfaces, which are accessed by other components of the overlay socket.

The overlay manager is a component external to the overlay socket (and not shown in Figure 3).
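The separation enforced by the two adapters can be illustrated with a minimal sketch. The Adapter interface and the in-memory LoopbackAdapter below are invented for illustration and do not mirror the HyperCast classes; real adapters would wrap TCP, UDP, or UDP multicast sockets.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: each overlay socket holds two independent transport endpoints,
// one for overlay protocol messages and one for application messages.
public class Adapters {

    // Minimal adapter: a transport endpoint that can send and receive
    // raw messages.
    public interface Adapter {
        void send(byte[] message);
        byte[] receive(); // returns null when nothing is pending
    }

    // In-memory stand-in for a transport, so the sketch is runnable.
    public static class LoopbackAdapter implements Adapter {
        private final Queue<byte[]> queue = new ArrayDeque<>();
        public void send(byte[] m) { queue.add(m); }
        public byte[] receive() { return queue.poll(); }
    }

    // The two traffic classes never share an endpoint: protocol messages go
    // through the node adapter, application messages through the socket adapter.
    public static class OverlaySocketSketch {
        public final Adapter nodeAdapter = new LoopbackAdapter();   // protocol messages
        public final Adapter socketAdapter = new LoopbackAdapter(); // application messages
    }
}
```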
It is responsible for configuring an overlay socket when the socket is created. The overlay manager reads a configuration file that stores the attributes of an overlay socket and, if so specified in the configuration file, may access attributes from a server; it then initiates the instantiation of a new overlay socket.

4 Overlay Network Programming

An application developer does not need to be familiar with the details of the components of an overlay socket described in the previous section. The developer is exposed only to the API of the overlay socket and to a file with configuration parameters. The configuration file is a text file that stores all attributes needed to configure an overlay socket. The configuration file is modified whenever a change is needed to the transport protocol, the overlay protocol, or some other parameter of the overlay socket. In the following, we summarize only the main features of the API, and we refer to [12] for detailed information on the overlay socket API.

4.1 Overlay Socket API

Since the overlay topology and the forwarding of application-layer data are transparent to the application program, the API for overlay network programming can be made simple. Applications need to be able to create a new overlay network, join and leave an existing overlay network, and send data to and receive data from other members in the overlay. The API of the overlay socket is message-based, and intentionally stays close to the familiar Berkeley socket API [3]. Since space considerations do not permit a description of the full API, we sketch the API with the help of a simplified example. Figure 4 shows the fragment of a Java program that uses an overlay socket. An application program configures and creates an overlay socket with the help of an overlay manager (om). The overlay manager reads configuration parameters for the overlay socket from a configuration file (hypercast.prop), which can look similar to the one shown in Figure 5.
The application program reads the overlay ID from the file with the command om.getDefaultProperty("OverlayID"), and creates a configuration object (config) for an overlay socket with the given overlay ID. The configuration object also loads all configuration information from the configuration file, and then creates the overlay socket (config.createOverlaySocket). Once the overlay socket is created, the socket joins the overlay network (socket.joinGroup). When a socket wants to multicast a message, it instantiates a new message (socket.createMessage) and transmits the message using the sendToAll method.

    // Generate the configuration object
    OverlayManager om = new OverlayManager("hypercast.prop");
    String MyOverlay = om.getDefaultProperty("OverlayID");
    OverlaySocketConfig config = om.getOverlaySocketConfig(MyOverlay);
    // Create an overlay socket
    OL_Socket socket = config.createOverlaySocket(callback);
    // Join an overlay
    socket.joinGroup();
    // Create a message
    OL_Message msg = socket.createMessage(data, length);
    // Send the message to all members in the overlay network
    socket.sendToAll(msg);
    // Receive a message from the socket
    msg = socket.receive();

Fig. 4. Program with overlay sockets.

    # OVERLAY Server:
    OverlayServer =
    # OVERLAY ID:
    OverlayID = 1234
    KeyAttributes = Socket,Node,SocketAdapter
    # SOCKET:
    Socket = HCast2-0
    HCAST2-0.TTL = 255
    HCAST2-0.ReceiveBufferSize = 200
    # SOCKET ADAPTER:
    SocketAdapter = TCP
    SocketAdapter.TCP.MaximumPacketLength = 16384
    # NODE:
    Node = DT2-0
    DT2-0.SleepTime = 400
    # NODE ADAPTER:
    NodeAdapter = NodeAdptUDPServer
    NodeAdapter.UDP.MaximumPacketLength = 8192
    NodeAdapter.UDPServer.UdpServer0 = 128.143.71.50:8081

Fig. 5. Configuration file (simplified).
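The "name = value" format of the configuration file in Figure 5 happens to be readable with java.util.Properties. The sketch below only illustrates the file format; HyperCast's own configuration loader may behave differently, and the sample is an abbreviated, hard-coded copy of Figure 5.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Parse a hypercast.prop-style attribute file with java.util.Properties,
// which already understands '#' comments and 'name = value' lines.
public class ConfigDemo {

    static final String SAMPLE = String.join("\n",
            "# OVERLAY ID:",
            "OverlayID = 1234",
            "KeyAttributes = Socket,Node,SocketAdapter",
            "Node = DT2-0",
            "SocketAdapter = TCP");

    public static Properties load() {
        Properties props = new Properties();
        try {
            props.load(new StringReader(SAMPLE));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for a StringReader
        }
        return props;
    }

    public static void main(String[] args) {
        Properties p = load();
        System.out.println("OverlayID = " + p.getProperty("OverlayID")); // 1234
        System.out.println("Node      = " + p.getProperty("Node"));      // DT2-0
    }
}
```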
Other transmission options are sendToParent, sendToChildren, sendToNeighbors, and sendToNode, which, respectively, send a message to the upstream neighbor with respect to a given root (see Figure 2), to the downstream neighbors, to all neighbors, or to a particular node with a given logical address.

4.2 Overlay Network Properties Management

As seen above, the properties of an overlay socket are configured by setting attributes in a configuration file. The overlay manager in an application process uses the attributes to create a new overlay socket. By modifying the attributes in the configuration file, an application programmer can configure the overlay protocol or transport protocol that is used by the overlay socket. Changes to the file must be made before the socket is created. Figure 5 shows a (simplified) example of a configuration file. Each line of the configuration file assigns a value to an attribute. The complete list of attributes and their ranges of values is documented in [12]. Without explaining all entries in Figure 5, the file sets, among others, the overlay ID to '1234', selects version 2.0 of the DT protocol as the overlay protocol ('Node=DT2-0'), and sets the transport protocol of the socket adapter to TCP ('SocketAdapter=TCP').

Each overlay network is associated with a set of attributes that characterize the properties of the overlay sockets that participate in the overlay network. As mentioned earlier, the most important attribute is the overlay ID, which is used to identify an overlay network and which can be used as a key to access all other attributes of an overlay network. The overlay ID should be a globally unique identifier. A new overlay network is created by generating a new overlay ID and associating with it a set of attributes that specify the properties of the overlay sockets in the overlay network. To join an overlay network, an overlay socket must know the overlay ID and the set of attributes for this overlay ID.
This information can be obtained from a configuration file, as shown in Figure 5. All attributes have a name and a value, both of which are strings. For example, the overlay protocol of an overlay socket is determined by an attribute with the name Node. If the attribute is set to Node=DT2-0, then the overlay node in the overlay socket runs the DT (version 2) overlay protocol.

The overlay socket distinguishes between two types of attributes: key attributes and configurable attributes. Key attributes are specific to an overlay network with a given overlay ID. Key attributes are selected when the overlay ID is created for an overlay network, and cannot be modified afterwards. Overlay sockets that participate in an overlay network must have identical key attributes, but can have different configurable attributes. The attributes OverlayID and KeyAttributes are key attributes by default in all overlay networks. Configurable attributes specify parameters of an overlay socket that are not considered essential for establishing communication between overlay sockets in the same overlay network, and that are considered 'tunable'.

5 Conclusions

We discussed the design of an overlay socket, which attempts to simplify the task of overlay network programming. The overlay socket serves as an endpoint of communication in the overlay network. The overlay socket can be used with various overlay topologies and supports different transport protocols. The overlay socket provides a simple API for joining and leaving an overlay network, and for sending and receiving data to and from other sockets in the overlay network. The main advantage of the overlay socket is that it is relatively easy to change the configuration of the overlay network. An implementation of the overlay socket is distributed with the HyperCast 2.0 software. The software has been extensively tested.
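The key-attribute rule above can be sketched as a hypothetical membership check (this helper is not part of the HyperCast distribution): two sockets belong to the same overlay only if every key attribute matches, while configurable attributes may differ.

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Sketch: a socket may join an overlay only if its key attributes are
// identical to the overlay's; configurable attributes are 'tunable'.
public class AttributeCheck {

    // OverlayID and KeyAttributes are key attributes by default.
    public static final Set<String> DEFAULT_KEYS = Set.of("OverlayID", "KeyAttributes");

    public static boolean canJoin(Map<String, String> socketAttrs,
                                  Map<String, String> networkAttrs,
                                  Set<String> keyAttributes) {
        for (String key : keyAttributes) {
            if (!Objects.equals(socketAttrs.get(key), networkAttrs.get(key))) {
                return false; // a key attribute differs: not the same overlay
            }
        }
        return true; // configurable attributes are allowed to differ
    }
}
```

For instance, two sockets that agree on OverlayID and Node but use different buffer sizes may still join the same overlay.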
A variety of applications, such as a distributed whiteboard and a video streaming application, have been developed with overlay sockets.

Acknowledgement. In addition to the authors of this article, the contributors include Bhupinder Sethi, Tyler Beam, Burton Filstrup, Mike Nahas, Dongwen Wang, Konrad Lorincz, Jean Ablutz, Haiyong Wang, Weisheng Si, Huafeng Lu, and Guangyu Dong.
Foreign Literature Translation (Computer Science): The Phases of Developing a System

With the development of society, interpersonal relationship management has become increasingly demanding. How to improve relationship management, reduce management costs, and raise service levels and personal competitiveness is a major concern for every supervisor. More and more supervisors believe that implementing computerized, scientific management can solve this problem.

Management information systems (MIS) are information systems, typically computer-based, that are used within an organization. WordNet describes an information system as "a system consisting of the network of all communication channels used within an organization".

Generally speaking, developing an MIS involves the following phases.

1 Conduct a Preliminary Investigation

(1) What is the objective of the first phase of the SDLC? (SDLC stands for Systems Development Life Cycle.)

The objectives of phase 1, preliminary investigation, are to conduct a preliminary analysis, propose alternative solutions, describe the costs and benefits of each solution, and submit a preliminary plan with recommendations. The problems are briefly identified and a few solutions are suggested. This phase is often called a feasibility study.

(2) Conduct the preliminary analysis

In this step, you need to find out what the organization's objectives are and to explore the nature and scope of the problems under study.

Determine the organization's objectives: Even if a problem pertains to only a small segment of the organization, you cannot study it in isolation. You need to find out what the overall objectives of the organization are and how groups and departments within the organization interact. Then you need to examine the problem in that context.

Determine the nature and scope of the problems: You may already have a sense of the nature and scope of a problem. However, with a fuller understanding of the goals of the organization, you can now take a closer look at the specifics.
Is too much time being wasted on paperwork? On waiting for materials? On nonessential tasks? How pervasive is the problem within the organization? Outside of it? What people are most affected? And so on. Your reading and your interviews should give you a sense of the character of the problem.

(3) Propose alternative solutions

In delving into the organization's objectives and the specific problems, you may have already discovered some solutions. Other possible solutions may be generated by interviewing people inside the organization, clients or customers, suppliers, and consultants, and by studying what competitors are doing. With this data, you then have three choices: you can leave the system as is, improve it, or develop a new system.

Leave the system as is: Often, especially with paper-based or non-technological systems, the problem really isn't bad enough to justify the measures and expenditures required to get rid of it.

Improve the system: Sometimes changing a few key elements in the system (upgrading to a new computer or new software, or doing a bit of employee retraining, for example) will do the trick. Modifications might be introduced over several months, if the problem is not serious.

Develop a new system: If the existing system is truly harmful to the organization, radical changes may be warranted. A new system would not mean just tinkering around the edges or introducing some new piece of hardware or software. It could mean changes in every part and at every level.

(4) Describe costs and benefits

Whichever of the three alternatives is chosen, it will have costs and benefits. In this step, you need to indicate what these are. The changes, or the absence of changes, will have a price tag, of course, and you need to indicate what it is. Greater costs may result in greater benefits, which, in turn, may offer savings. The benefits may be both tangible, such as cost savings, and intangible, such as worker satisfaction.
A process may be speeded up, streamlined through the elimination of unnecessary steps, or combined with other processes. Input errors or redundant output may be reduced. Systems and subsystems may be better integrated. Users may be happier with the system. Customers or suppliers may interact more efficiently with the system. Security may be improved. Costs may be cut.

(5) Submit a preliminary plan

Now you need to wrap up all your findings in a written report, submitted to the executives (probably top managers) who are in a position to decide in which direction to proceed, whether to make no changes, change a little, or change a lot, and how much money to allow the project. You should describe the potential solutions, costs, and benefits and indicate your recommendations. If management approves the feasibility study, then the systems analysis phase can begin.

2 Do a Detailed Analysis of the System

(1) What tools are used in the second phase of the SDLC to analyze data?

The objectives of phase 2, systems analysis, are to gather data, analyze the data, and write a report. The present system is studied in depth, and new requirements are specified. Systems analysis describes what a system is already doing and what it should do to meet the needs of users. Systems design, the next phase, specifies how the system will accommodate that objective.

In this second phase of the SDLC, you will follow the course prescribed by management on the basis of your phase 1 feasibility report. We are assuming that you have been directed to perform phase 2, to do a careful analysis of the existing system, in order to understand how the new system you propose would differ. This analysis will also consider how people's positions and tasks will have to change if the new system is put into effect.
In general, it involves a detailed study of: the information needs of the organization and all users; the activities, resources, and products of any present information systems; and the information systems capabilities required to meet the established information needs and user needs.

(2) Gather data

In gathering data, systems analysts use a handful of tools, most of them not terribly technical. They include written documents, interviews, questionnaires, observation, and sampling.

Written documents: A great deal of what you need is probably available in the form of written documents. Documents are a good place to start because they tell you how things are or are supposed to be. These tools will also provide leads on people and areas to pursue further.

Interviews: Interviews with managers, workers, clients, suppliers, and competitors will also give you insights. Interviews may be structured or unstructured.

Questionnaires: Questionnaires are useful for getting information from large groups of people when you can't get around to interviewing everyone. Questionnaires may also yield more information if respondents can be anonymous. In addition, this tool is convenient, is inexpensive, and yields a lot of data. However, people may not return their forms, results can be ambiguous, and with anonymous questionnaires you'll have no opportunity to follow up.

Observation: No doubt you've sat in a coffee shop or on a park bench and simply watched the people around you. Observation can be a tool for analysis, too. Through observation you can see how people interact with one another and how paper moves through an organization. Observation can be nonparticipant or participant. As a participant observer, you may gain more insights by experiencing the conflicts and responsibilities of the people you are working with.

(3) Analyze the data

Once the data is gathered, you need to come to grips with it and analyze it.
Many analytical tools, or modeling tools, are available. Modeling tools enable a systems analyst to present graphic representations of a system. Examples are CASE tools, data flow diagrams, systems flowcharts, connectivity diagrams, grid charts, decision tables, and object-oriented analysis.

For example, in analyzing the current system and preparing data flow diagrams, the systems analyst must also prepare a data dictionary, which is then used and expanded during all remaining phases of the SDLC. A data dictionary defines all the elements that make up the data flow. Among other things, it records what each data element is by name, how long it is, where it is used, as well as any numerical values assigned to it. This information is usually entered into a data dictionary software program.

3 Design the System

(1) At the conclusion of the third phase of the SDLC, what should have been created?

The objectives of phase 3, systems design, are to do a preliminary design and then a detail design, and to write a report. In this third phase of the SDLC, you will essentially create a rough draft and then a detail draft of the proposed information system.

(2) Do a preliminary design

A preliminary design describes the general functional capabilities of a proposed information system. It reviews the system requirements and then considers major components of the system. Usually several alternative systems are considered, and the costs and the benefits of each are evaluated.

Some tools that may be used in the preliminary design and the detail design are the following:

CASE tools: These are software programs that automate various activities of the SDLC in several phases. For example, a banking CASE tool might show a model for an ATM transaction; the purchaser of the CASE tool would enter details relative to the particular situation.
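The data dictionary described above is essentially a table of facts about each data element. A minimal sketch, with illustrative field names (a real data dictionary program records many more properties):

```java
import java.util.List;

// Sketch of a data dictionary entry: each data element is recorded by name,
// length, where it is used, and any numeric values assigned to it.
public class DataDictionary {

    public record Entry(String name,
                        int lengthInChars,
                        List<String> usedIn,
                        List<Integer> allowedValues) { }

    public static void main(String[] args) {
        Entry customerId = new Entry(
                "CUSTOMER_ID",
                8,
                List.of("order entry form", "invoice report"),
                List.of()); // no fixed numeric values for this element
        System.out.println(customerId.name() + " (" + customerId.lengthInChars() + " chars)");
    }
}
```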
This technology is intended to speed up the process of developing systems and to improve the quality of the resulting systems.

Project management software: This consists of programs used to plan and schedule, and to control the people, costs, and resources required to complete a project on time.

(3) Do a detail design

A detail design describes how a proposed information system will deliver the general capabilities outlined in the preliminary design. The detail design usually considers the following parts of the system, in this order: output requirements, input requirements, storage requirements, processing and networking requirements, and system controls and backup.

(1) Output requirements: The first thing to determine is what you want the system to produce. In this first step, the systems analyst determines what media to use and the appearance or format of the output, such as headings, columns, and menus.

(2) Input requirements: Once you know the output, you can determine the inputs. Here, too, you must define the type of input, such as keyboard or source data entry. You must determine in what form data will be input and how it will be checked for accuracy. You also need to figure out what volume of data the system can be allowed to take in.

(3) Storage requirements: Using the data dictionary as a guide, you need to define the files and databases in the information system. How will the files be organized? What kind of storage devices will be used? How will they interface with other storage devices inside and outside of the organization? What will be the volume of database activity?

(4) Processing and networking requirements: What kind of computer or computers will be used to handle the processing? What kind of operating system and applications software will be used? Will the computer or computers be tied to others in a network? Exactly what operations will be performed on the input data to achieve the desired output information? What kinds of user interface are desired?

(5) System controls and backup: Finally, you need to think about matters of security, privacy, and data accuracy.
You need to prevent unauthorized users from breaking into the system, for example, and snooping in private files. You need to devise auditing procedures and to set up specifications for testing the new system. Finally, you need to institute automatic ways of backing up information and storing it elsewhere in case the system fails or is destroyed.

4 Develop/Acquire the System

(1) What general tasks do systems analysts perform in the fourth phase of the SDLC?

In phase 4, systems development/acquisition, the systems analysts or others in the organization acquire the software, acquire the hardware, and then test the system. This phase begins once management has accepted the report containing the design and has "green-lighted" the way to development. Depending on the size of the project, this phase will probably involve substantial expenditures of money and time. However, at the end you should have a workable system.

(2) Acquire software

During the design stage, the systems analyst may have had to address what is called the "make-or-buy" decision; if not, that decision certainly cannot be avoided now. In the make-or-buy decision, you decide whether you have to create a program, that is, have it custom-written, or buy it. Sometimes programmers decide they can buy an existing software package and modify it rather than write it from scratch.

If you decide to create a new program, then the question is whether to use the organization's own staff programmers or to hire outside contract programmers. Whichever way you go, the task could take months.

(3) Acquire hardware

Once the software has been chosen, the hardware to run it must be acquired or upgraded. It's possible you will not need to obtain any new hardware. It's also possible that the new hardware will cost millions of dollars and involve many models of computers and many other devices.
The organization may prefer to lease rather than buy some equipment, especially since chip capability has traditionally doubled about every 18 months.

(4) Test the system

With the software and hardware acquired, you can now start testing the system in two stages: first unit testing and then system testing. If CASE tools have been used throughout the SDLC, testing is minimized because any automatically generated program code is more likely to be error-free.

5 Implement the System

(1) What tasks are typically performed in the fifth phase of the SDLC?

Whether the new information system involves a few handheld computers, an elaborate telecommunications network, or expensive mainframes, phase 5, systems implementation, will involve close coordination to make the system not just workable but successful, and to ensure that people are trained to use it.

6 Maintain the System

(1) What two tools are often used in the maintenance phase of the SDLC?

Phase 6, systems maintenance, adjusts and improves the system by having system audits and periodic evaluations and by making changes based on new conditions. Even with the conversion accomplished and the users trained, the system won't just run itself. There is a sixth, never-ending, phase in which the information system must be monitored to ensure that it is effective. Maintenance includes not only keeping the machinery running but also updating and upgrading the system to keep pace with new products, services, customers, government regulations, and other requirements.

Attachment 2: English-Chinese Translation: The Phase to Develop the System

With the development of society, the role of personal relationship management in daily life is evident. How to strengthen personal management capability, reduce management costs, and improve service levels and personal competitiveness is one of the important problems troubling every supervisor.
英文文献及翻译(计算机专业)

The increasing complexity of design resources in a net-based collaborative XXX common systems. Design resources can be organized in association with design activities. A task is formed by a set of activities and resources linked by logical relations. XXX management of all design resources and activities via a Task Management System (TMS), which is designed to break down tasks and assign resources to task nodes. This XXX.

2 Task Management System (TMS)
TMS is a system designed to manage the tasks and resources involved in a design project. It decomposes tasks into smaller subtasks. XXX management of all design resources and activities. TMS assigns resources to task nodes. XXX.

3 Collaborative Design
Collaborative design is a process that XXX a common goal. In a net-based collaborative design environment, XXX for all design resources and activities.
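The task breakdown the fragment describes (a task formed from activities and resources, with resources assigned to task nodes) can be sketched as a small tree. This is an illustrative sketch only; the class and field names are assumptions, not taken from the TMS paper:

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    name: str
    resources: list[str] = field(default_factory=list)   # design resources assigned to this node
    subtasks: list["TaskNode"] = field(default_factory=list)

    def decompose(self, *names: str) -> list["TaskNode"]:
        """Break this task down into smaller subtask nodes."""
        self.subtasks = [TaskNode(n) for n in names]
        return self.subtasks

    def all_resources(self) -> list[str]:
        """Collect the resources assigned anywhere in the task tree."""
        out = list(self.resources)
        for t in self.subtasks:
            out += t.all_resources()
        return out

# Hypothetical project: one task broken into two subtask nodes,
# each holding the resources assigned to it.
root = TaskNode("gearbox design")
housing, shafts = root.decompose("housing", "shafts")
housing.resources += ["CAD seat 1", "FEA license"]
shafts.resources += ["CAD seat 2"]
assert root.all_resources() == ["CAD seat 1", "FEA license", "CAD seat 2"]
```

The tree mirrors the fragment's structure directly: decomposition produces subtask nodes, and resource assignment attaches resources to individual nodes rather than to the project as a whole.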
附件1:外文资料翻译译文 大容量存储器 由于计算机主存储器的易失性和容量的限制, 大多数的计算机都有附加的称为大容量存储系统的存储设备, 包括有磁盘、 CD 和 磁带。相对于主存储器,大的容量储存系统的优点是易失性小,容量大,低成本, 并且在许多情况下, 为了归档的需要可以把储存介质从计算机上移开。 术语联机和脱机通常分别用于描述连接于和没有连接于计算机的设备。联机意味着,设备或信息已经与计算机连接,计算机不需要人的干预,脱机意味着设备或信息与机器相连前需要人的干预,或许需要将这个设备接通电源,或许包含有该信息的介质需要插到某机械装置里。 大量储存器系统的主要缺点是他们典型地需要机械的运动因此需要较多的时间,因为主存储器的所有工作都由电子器件实现 。 1. 磁盘 今天,我们使用得最多的一种大量存储器是磁盘,在那里有薄的可以旋转的盘片,盘片上有磁介质以储存数据。盘片的上面和(或)下面安装有读/写磁头,当盘片旋转时,每个磁头都遍历一圈,它被叫作磁道,围绕着磁盘的上下两个表面。通过重新定位的读/写磁头,不同的同心圆磁道可以被访问。通常,一个磁盘存储系统由若干个安装在同一根轴上的盘片组成,盘片之间有足够的距离,使得磁头可以在盘片之间滑动。在一个磁盘中,所有的磁头是一起移动的。因此,当磁头移动到新的位置时,新的一组磁道可以存取了。每一组磁道称为一个柱面。 因为一个磁道能包含的信息可能比我们一次操作所需要得多,所以每个磁道划分成若干个弧区,称为扇区,记录在每个扇区上的信息是连续的二进制位串。传统的磁盘上每个磁道分为同样数目的扇区,而每个扇区也包含同样数目的二进制位。(所以,盘片中心的储存的二进制位的密度要比靠近盘片边缘的大)。 因此,一个磁盘存储器系统有许多个别的磁区, 每个扇区都可以作为独立的二进制位串存取,盘片表面上的磁道数目和每个磁道上的扇区数目对于不同的磁盘系统可能都不相同。磁区大小一般是不超过几个KB; 512 个字节或 1024 个字节。 磁道和扇区的位置不是磁盘的物理结构的固定部分,它是通过称为磁盘格式化或初始化形成的,它通常是由磁盘的厂家完成的,这样的盘称为格式化盘,大多数的计算机系统也能执行这一个任务。因此, 如果一个磁盘上的信息被损坏了磁盘能被再格式化,虽然这一过程会破坏所有的先前在磁盘上被记录的信息。 磁盘储存器系统的容量取决于所使用盘片的数目和所划分的磁道与扇区的密度。低容量的系统仅有一张塑料盘片组成,称为软磁盘或软盘,另一个名称是floppy disk,强调它的灵活性。 (现在直径3.5英寸的软盘封装在硬的塑料盒子里,没有继续使用老的为5.25英寸的软盘的柔软纸质包装)软盘很容易插入到相应的读写装置里,也容易读取和保存,因此,软盘通常用于信息的脱机存储设备,普通的3.5英寸软盘的容量是1.44MB,而特殊的软盘会有较高的容量,一个例子是INMEGA公司的ZIP盘,单盘容量达几百兆。 大容量的磁盘系统的容量可达几个GB,它可能有5-10个刚性的盘片,这种磁盘系统出于所用的盘片是刚性的,所以称为硬盘系统,为了使盘片可以比较快的旋转,硬盘系统里的磁头不与盘片是表面接触,而是依靠气流“浮”在上面,磁头与盘片表面的间隙非常小,甚至一颗尘粒都会造成磁头和盘片卡住,或者两者毁坏(这个现象称为划道)。因此,硬盘系统出厂前已被密封在盒子里。 评估一个磁盘系统的性能有几个指标: (1)寻道时间,读/写磁头从当前磁道移到目的磁道(依靠存取臂)所需要的时间 。 (2)旋转延迟或等待时间,读/写磁头到达所要求的磁道后,等待盘片旋转使读/写磁头位于所要存取的数据(扇区)上所需要的时间。它平均为盘片旋转一圈时间的一半。 (3)存取时间,寻道时间和等待时间之和。 (4)传输速率,数据从磁盘上读出或写入磁盘的时间。 硬盘系统的性能通常大大优于软盘,因为硬盘系统里的读/写磁头不接触盘片表面,所以盘片旋转速度达到每分种几千转,而软盘系统只有每分300转。因此,硬盘系统的传输速率通常以每秒MB数目来标称,比软盘系统大得多,因为后者仅为每秒数KB。 因为磁盘系统需要物理移动来完成它的们的操作,因此软盘系统和硬盘系统都难以与电子工业线路的速度相比。电子线路的延迟时间是以毫微秒或更小单位度量的,而磁盘系统的寻道时间,等待时间和存取时间是以毫秒度量的,因此,从磁盘系统检索信息所需要的时间与电子线路的等待时间相比是一个漫长的过程。 2. 
光盘 另一种流行的数据存储技术是光盘,盘片直径是12厘米(大约5英寸),由反射材料组成,上面有光洁的保护层。通过在它们反射层上创建反射偏差的方法在上面记录信息,这种信息可以借助激光束检测出来,因为在CD旋转时激光束监视它的反射面上的反射偏差。 CD技术原来用于音频录制,采用称为CD-DA(光盘数字音频)的记录格式,今天作为计算机数据存储器使用的CD实际上使用同样的格式。CD上的信息是存放在一条绕着CD的螺旋形的磁道上,很象老式唱片里的凹槽;与老式唱片不同的是,CD上的磁道是从里向外的,这条磁道被分成称为扇区的单元。每个扇区有自己的标识,有2KB的数据容量,相当于在音频录制时1/75的音乐。 CD上保存的信息在整个螺旋形的磁道是按照统一的线性刻度,这就意味着,螺旋形磁道靠边的环道存放的信息比靠里边的环道要多。所以,如果盘片旋转一整圈,那么激光束在扫描螺旋形磁道外边时读到的扇区个数要比里边多。因而,为了获得一致的数据传输速率,CD-DA播放器能够根据激光束在盘片上的位置调整盘片的旋转速度。但是,作为计算机数据存储器使用的大多数CD驱动器都以一种比较快的、恒定的速度旋转盘片,因此必须适应数据传输速率的变化。 这种设计思想就使得CD存储系统在对付长而连续的数据串时有最好的表现,如音乐复制。相反,当一个应用需要以随机的方法存取数据时,那么磁盘存储器所用的方法(独立的、同心的磁道)就胜过CD所用的螺旋形方法。 传统CD的容量为600~700MB。但是,较新的DVD的容量达到几个GB。DVD由多个半透明的层构成,精确聚焦的激光可以识别不同的层。这种盘能够储存冗长的多媒体演示,包括整个电影。 3. 磁带 一种比较老式的大容量存储器设备是磁带。这时,信息储存在一条细薄的的塑料带的磁介质涂层上,而塑料带则围在磁带盘上作为存储器,要存取数据时,磁带装到称为磁带驱动器的设备里,它在计算机控制下通常可以读带,写带和倒带,磁带机有大有小,从小的盒式磁带机到比较老式的大型盘式磁带机,前者称为流式磁带机,它表面上类似于立体声收录机,虽然这些磁带机的存储容量依赖于所使用的格式,但是大多数都达几个GB。 现代的流式磁带机都将磁带划分为许多段,每段的标记是格式化过程中磁化形成的,类似于磁盘驱动器。每一段含有若干条纵向相互平行的磁道,这些磁道可以独立地存取,因而可以说,磁带是由许多单独的二进制位串组成的,好比磁盘的扇区。 磁带技术的主要缺点是:在一条磁带上不同位置之间移动非常耗费时间,因为在磁带卷轴之间要移动很长的磁带,于是,磁带系统的数据存取时间比磁盘系统的长,因为对于不同的扇区,磁盘的读/写磁头只要在磁道之间作短的移动,因此,磁带不是流行的联机的数据存储设备,但是,磁带系统常使用在脱机档案数据应用中,原因是它具有容量大,可靠性高和性价比好等优势。虽然例如DVD非传统技术的进展正迅速向这磁带的最后痕迹提出挑战。 4. 
文件存储和检索 在大容量存储系统中,信息是以称为文件的大的单位储存的,一个典型的文件可以是一个全文本的资料,一张照片,一个程序或一组关于公司员工的数据,大容量存储系统的物理特性表明,这些文件是按照许多字节为单位存储的检索的,例如,磁盘上每个扇区必须作为一个连续的二进制位串进行操作,符合存储系统物理特性的数据块称为物理记录,因此存放在大容量存储系统中的文件通常包含许多物理记录。 与这种物理记录划分相对的是,一个文件通常有一种由它所表示的信息决定的自然划分,例如,一个关于公司员工信息的文件由许多单元组成,每个单元由一个员工的信息组成。这些自然产生的数据块称为逻辑记录,其次,逻辑记录通常由更小的称为字段的单元组成,例如,包含员工信息的记录大概由姓名,地址,员工标识号等字段组成。 逻辑记录的大小很少能够与大容量存储系统的物理记录相匹配,因此,可能许多个逻辑记录可以存放在一个物理记录中,也可能一个逻辑记录分成几个物理记录,因此,从大容量存储系统中存取数据时需要一定的整理工作,对于这个问题的常用解决方法是,在主存储系统里设置一个足够大的存储区域,它可以存放若干个物理记录并可以通过它重新组织数据。(以符合逻辑记录(读)或物理记录(写)的要求)也就是说,在主存储器与大容量存储系统之间传输的数据应该符合物理记录的要求。同时位于主存储器区域的数据按照逻辑记录可以被查阅。 主存储器中的这种存储区域称为缓冲区,通常,缓冲区是在一个设备向另一个设备传输数据时用来临时保存数据的,例如,现代的打印机都有自己的存储芯片,其大部分的作为缓冲区,以保存该打印机已经收到但还没有打印的那部分数据。 由此可知,主存储器,磁盘,光盘和磁带依次表示随机存取程度降低的设备,主存储器里所用的编址系统可允许快速随机地存取某个字节。磁盘只能随机存取整个扇区的数据。其次,检索一个扇区涉及寻道时间和旋转延迟,光盘也能够随机存取单个扇区,但是延迟时间比磁盘长一些,因为把读/写头定位到螺旋形磁道上并调准盘片的旋转速度需要的时间较长,最后,磁带几乎没有随机存取的机制,现代的磁带系统都在磁带上做标记,使得可以单独存取磁带上指定的段,但是磁带的物理结构决定了存取远距离的段需要花费比较多的时间。
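上文所说的"若干逻辑记录装入一个物理记录"的整理工作,可以用下面的缓冲区小例子示意(512 字节的扇区大小取自上文;员工记录的长度和格式为举例假设):

```python
SECTOR = 512          # 物理记录(扇区)大小,按上文取 512 字节

def pack(logical_records: list[bytes]) -> list[bytes]:
    """把逻辑记录依次装入缓冲区,每装满一个扇区就写出一个物理记录。"""
    buf, out = b"", []
    for rec in logical_records:
        buf += rec
        while len(buf) >= SECTOR:
            out.append(buf[:SECTOR])      # 一个装满的物理记录
            buf = buf[SECTOR:]
    if buf:
        out.append(buf.ljust(SECTOR, b"\x00"))   # 末尾不足一个扇区时补零
    return out

# 20 条 64 字节的员工逻辑记录,共 1280 字节
employees = [f"emp{i:04d},name,address".encode().ljust(64, b" ")
             for i in range(20)]
sectors = pack(employees)
assert len(sectors) == 3                  # 1280 字节需要 3 个 512 字节扇区
assert all(len(s) == SECTOR for s in sectors)
```

可以看到,8 条逻辑记录正好占满一个扇区,最后一个扇区只装了 4 条记录:逻辑记录与物理记录的边界并不对齐,这正是需要缓冲区重新组织数据的原因。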
计算机英文文献翻译

INDUSTRY PERSPECTIVE
USING A DSS TO KEEP THE COST OF GAS DOWN
Think you spend a lot on gas for your car every year? J.B. Hunt Transportation Inc. spends a lot more. J.B. Hunt moves freight around the country on its 10,000 trucks and 48,000 trailers. The company spent $250 million in 2004 on fuel. That figure was up by 40 percent over the previous year. Diesel fuel is the company's second-largest expense (drivers' wages is the largest), and the freight hauler wanted to find a way to reduce it. Part of the answer lay, as it often does, in IT. In 2000, J.B. Hunt installed a decision support system that provides drivers with help in deciding which gas station to stop at for fuel. Using satellite communications, the system beams diesel-fuel prices from all over the country straight into the cabs of the trucks. The software accesses a database with local taxes for each area of the country and then calculates for the drivers how much refueling will actually cost. J.B. Hunt doesn't require drivers to use this system, but provides incentives for those who do. The company estimates that the system saves about $1 million annually.

Decision Support System
In Chapter 3, you saw how data mining can help you make business decisions by giving you the ability to slice and dice your way through massive amounts of information. Actually, a data warehouse with data-mining tools is a form of decision support. The term decision support system, used broadly, means any computerized system that helps you make decisions. Medicine can mean the whole health care industry or it can mean cough syrup, depending on the context. Narrowly defined, a decision support system (DSS) is a highly flexible and interactive IT system that is designed to support decision making when the problem is not structured. A DSS is an alliance between you, the decision maker, and specialized support provided by IT (see Figure 4.4). IT brings speed, vast amounts of information, and sophisticated processing capabilities to help you create information useful in
making a decision. You bring know-how in the form of your experience, intuition, judgment, and knowledge of the relevant factors. IT provides great power, but you, as the decision maker, must know what kinds of questions to ask of the information and how to process the information to get those questions answered. In fact, the primary objective of a DSS is to improve your effectiveness as a decision maker by providing you with assistance that will complement your insights. This union of your know-how and IT power helps you generate business intelligence so that you can quickly respond to changes in the marketplace and manage resources in the most effective and efficient ways possible. Following are some examples of the varied applications of DSSs:
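The J.B. Hunt fuel DSS in the sidebar above, which folds each area's local taxes into the posted pump price before comparing stops, can be sketched as a toy calculation. The station names, prices, tax rates, and the 150-gallon fill are all invented for illustration:

```python
# Toy decision-support calculation: pick the cheapest refueling stop
# once local taxes are folded into the posted diesel price.

stations = [
    # (name, posted price per gallon, local tax rate)
    ("TruckStop A", 2.89, 0.070),
    ("TruckStop B", 2.85, 0.095),
    ("TruckStop C", 2.95, 0.040),
]

def true_cost(posted: float, tax: float, gallons: float = 150.0) -> float:
    """What refueling will actually cost, taxes included."""
    return round(posted * (1 + tax) * gallons, 2)

costs = {name: true_cost(p, t) for name, p, t in stations}
best = min(costs, key=costs.get)
assert best == "TruckStop C"
# The station with the lowest posted price (B) is the dearest once tax is added.
assert costs["TruckStop B"] > costs["TruckStop A"] > costs["TruckStop C"]
```

The DSS, as the passage stresses, only prepares the comparison; the driver still decides where to stop.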
Five essential elements of building a website

Although the Internet as a whole has entered a downturn, enterprises' drive to build websites has grown rather than receded. Many far-sighted enterprises, having experienced the convenience of the network, already regard building their own websites as inevitable. Now that the days of money-burning, concept-only websites have passed, it is exactly the moment for entity enterprises with real strength to get involved in the network. However, when an enterprise decides to build a website, it often does not know where to start, how much to invest, or what talent to bring in. In this uncertainty, a typical enterprise, following the model of its publicity materials and the advice of a network company or IT staff, builds a formulaic corporate website. Such a formulaic site has its own five standard elements: company introduction, contact information, product (service) introduction, message board, and forum. Its value is roughly that of an electronic edition of the company brochure, to be flipped through by interested customers when needed. Many enterprises therefore conclude that spending tens of thousands on a website that hardly anyone visits is not worthwhile.
Especially now that the network fever has faded, an ordinary enterprise will not even consider investing millions or tens of millions in a large, purely conceptual website. So how should an enterprise position the website it builds? When the decision makers move from idea to realization, what is the most decisive factor? It is neither the skill level of the network company or the IT personnel, nor the amount of funds the enterprise invests, but the ability to plan the website as a whole. Many enterprises are superstitious about IT technique, believing that hiring top IT professionals is all it takes to build a website; IT personnel, for their part, are often conceited, and market conditions kept raising their expectations. True enough, building a website is not hard. Given funds, time, and conditions, anyone can be the CEO, CFO, or COO. From ancient times to now there has never been a stage for life and business as easy and cheap as the network, which is why so many websites have appeared. But however high the technical level, technique is only a means, not an end. Without a business need, technique is useless. On the other hand, an enterprise's investment of funds does not guarantee results either. With millions of websites already in existence, it is genuinely hard for a new one to attract attention. However important a website believes its message to be, people no longer believe tears and battle cries. People go online, in the end, to find resources genuinely useful to themselves. Resources are the only foothold on which a website exists.
The amount of resources determines the value of a website; this is the truth the network industry has learned through the storms of these past years. Concepts once relied on to win eyeballs, such as click rate and popularity, have fallen out of favor, replaced by standards such as registered user count, return-visit rate, the amount of source information, and the degree of participation. Not only venture investors care about these; website builders and planners should care about them even more.

The five essential elements of website construction are: purpose, resources, technique, audience, and result. The purpose is the need; it is the first question the website owner must make clear to the designer. Purposes may be immediate or long-term, public or implicit, direct or indirect, primary or subordinate, feasible or infeasible. The purpose directly determines the aim and the creativity of the website. Creativity is the soul of a website; a website without creativity is like a hull without a soul. The concept of resources is very broad, and not simply the amount of information one can provide. Funds are the most important component of resources. As the common saying goes: plan the meal by the vegetables, cut the dress to fit the figure. For a website this is especially key. The network is like dough that can be kneaded into any shape, and it is also a bottomless pit that can burn any amount of money. Therefore the earnings model is an account the website builder should settle early: how much to invest at the very start, how much to budget each year to sustain the site, how much human resource to provide, where the short-term break-even point lies, and what the long-term earnings target is.
Content, manpower, and funds are closely related. When designing each column of the website, one must weigh the manpower and funds that can be mobilized. Once the purpose and resources are explicit, the next choice is the technical level: for example, static or dynamic pages, whether to adopt a database, how high the demands on visual design are, how fast updates must be, and how simple or complex maintenance should be. The convenience of use and the degree of participation of the website's service audience must also be considered. Finally, and most importantly: what result is the website expected to achieve, and how is that result to be achieved?

The home page is the starting point of website design. Many people even equate website design with home-page design, taking the quality of the home page as the quality of the site. There is some truth in this, so the home page's style, color layout, column design, and writing are where a website most easily provokes controversy. As the saying goes, the benevolent see benevolence and the wise see wisdom; no design will ever satisfy everyone, and the home page directly displays the designer's temperament and style. As perceptions of the website change, the style of the home page also keeps changing. But the home page must tell the customer its purpose clearly and without error; that much is certain. Many large websites now prominently display the resources they own at the top of the home page and attract customers with automatically updated, recent content. Such home pages carry dynamically updated content and thus belong to dynamic web-page technique. In general, the tastes of the customers (the audience) must be considered and the content arranged meticulously, in hopes of achieving the best result.
Home-page design has two main trends: pursuing a beautiful appearance (static) and pursuing rich content (dynamic). The former suits enterprise websites without much content; the latter suits comprehensive websites with rich content. Some functional websites, such as search engines and large databases, usually place their main function prominently at the center of the home page.

Determining the columns is the key to the website's internal structure. Basic columns such as the forum, message board, "about us," site navigation, announcements, and registration area are usually placed in secondary positions as links, while news, main functions, main content, and update notices are placed in eye-catching positions. News and updates are a website's selling points; they are among the main means of attracting return visits and are essential to any website. In the visitor's eyes, this is where the vitality of the website shows.

Website content divides into two major types: functions and information. Functions include: search engines, database indexes, site navigation, electronic commerce, communities, manuscript submission, self-help web pages (free home pages), registration, and network offices. Information includes text pages at all levels, databases, and related links. Based on the website's purpose, decide what content to highlight, and enrich and update the content through appropriate technical means and forms.

The customer (audience) community also has an important influence on website design. Facing experienced network professionals versus ordinary customers, for example, the technical implementation differs, and the two communities also understand the phrase "ease of use" differently.

What is the mark of successful website planning?
Is it click rate, popularity, registration numbers, return-visit rate, peer recognition, rich content, convenience of use, smooth operation, or the favor of investors? Not entirely any of these. A successful website should accurately understand the purpose of its establishment, fully mobilize its limited resources, fittingly use suitable technique, conveniently serve its audience, and achieve the expected result in time. Why convenience and timeliness? Because the network's greatest advantages are speed and convenience; if a website has no lead over other channels, then even achieving the expected result late is meaningless.

Building a website is not hard; what is rare is maintaining and developing it afterwards. However novel the creativity, a website's continued development still depends, at root, on resources, on keeping its total amount of information one step ahead, otherwise it will quickly be replaced. A website that becomes famous on a single creative stroke will, once that creativity is imitated, just as quickly be replaced by the copies.

So, after a website is established, how is it to be sustained and developed?

1. Persist in your own distinctive features. The features embody the purpose; persisting in the features is persisting in the purpose of establishing the site. Frequently changing features is a disaster for a website, equivalent to continually rebuilding a new site: no amount of effort accumulates, and the waste of manpower and funds is very great.

2. Concentrate the most information. Gather all related information in your own field to the maximum extent. A website is like a market: the place with the most merchandise always draws the most people.
Where conditions permit, collect and display in every way the content that enriches the website; this principle will never be outdated.

3. Keep the technique advanced. As long as conditions allow, there must be technical strength to reform and upgrade the website. The initial launch is only the beginning of construction; the site needs continuous perfection and correction, reform and improvement. Without keeping pace with, or even surpassing, peers technically, a website will very quickly fade from the stage of history.

4. Grasp the needs of customers. Customer need is the first problem to consider, and participation is the customer's most important need. Generally, customers participate by posting speeches and messages and by offering criticism and constructive opinions. Whether a website satisfies customer needs is chiefly marked by its degree of participation.

5. Consciously track the leading edge. The magic of the network is its continuous creation and transcendence. Website builders must consciously track the trends that keep pouring forth and reflect them on their own websites in time. This is not chasing sensational, shock-value effects, but the inevitable requirement of maintaining a leading position.

In short, whether at a website's initial establishment or over its long-term development, the five essential elements run through it from beginning to end. The decisive ones remain the resources: funds, talent, information, and the effort and hard work contributed by all participants.

网站建立的五要素
尽管目前网络总体进入低迷状态,但企业建设网站的势头却不退反增。
东北石油大学本科毕业设计英文文献及翻译
学院:计算机与信息技术学院  班级:计科07-1班  学号:  姓名:  指导教师:  职称:副教授

ASP.NET 技术
1. 构建 ASP.NET 页面
ASP.NET 和 .NET 框架
ASP.NET 是微软 .NET Framework 整体的一部分,后者包含一组大量的编程用的类,可以满足各种编程需要。
在下面的两节中,你将学习 ASP.NET 如何融入 .NET Framework,并了解可以在 ASP.NET 页面中使用的语言。
.NET类库假想你是微软。
假想你必须支持大量的编程语言-比如Visual Basic 、C# 和C++. 这些编程语言的很多功能具有重叠性。
举例来说,对于每一种语言,你必须包括存取文件系统、与数据库协同工作和操作字符串的方法。
此外,这些语言包含相似的编程构造。
每种语言,举例来说,都能够使用循环语句和条件语句。
即使用 Visual Basic 写的条件语句的语法与用 C++ 写的不一样,程序的功能也是相同的。
最后,大多数的编程语言有相似的数据变量类型。
以大多数的语言,你有设定字符串类型和整型数据类型的方法。
举例来说,整型数据最大值和最小值可能依赖语言的种类,但是基本的数据类型是相同的。
对于多种语言来说维持这一功能需要很大的工作量。
为什么要重复发明轮子?把这种功能为所有语言只创建一次,然后让每一种语言都使用它,岂不是更容易?
.NET 类库正是这样做的。
它含有大量的满足编程需要的类。
举例来说,.NET 类库包含用于数据库访问、文件操作、文本处理和图像生成的类。
除此之外,它包含更多特殊的类用在正则表达式和处理Web协议。
.NET framework,此外包含支持所有的基本变量数据类型的类,比如:字符串、整型、字节型、字符型和数组。
最重要的是,就写这本书的目的而言,.NET 类库包含用于构建 ASP.NET 页面的类。
不过你需要了解,当你构建 ASP.NET 页面的时候,能够访问 .NET Framework 中的任意类。
理解命名空间正如你猜测的, .NET framework是庞大的。
它包含数以千计的类(超过3,400) 。
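上文提到 .NET 类库包含数千个类,它们靠命名空间来组织。作为类比示意(这里用 Python 的模块命名空间演示同一思想,并非 .NET 本身的代码),完全限定名可以区分不同命名空间里的同名成员:

```python
# 用 Python 模块作类比:完全限定名 = 命名空间 + 成员名,
# 与 .NET 中 System.IO.File、System.Text.StringBuilder 的组织方式类似。
import ntpath      # 处理 Windows 风格路径的命名空间
import posixpath   # 处理 POSIX 风格路径的另一个命名空间

# 同名成员 join 存在于两个命名空间中,互不冲突
assert posixpath.join("a", "b") == "a/b"
assert ntpath.join("a", "b") == "a\\b"
```

正因为有命名空间,几千个类中出现重名也不会混淆:调用方用限定名指明想要的是哪一个。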
计算机专业英语论文

一、原文
New Techniques of the Computer Network

Abstract
The 21st century is an age of the information economy. Computer network technology, the representative technology of this age, will keep developing rapidly and creatively, and will go deep into people's work, life and study. Therefore, mastering this technology appears all the more important. Here I mainly introduce the practical applications of a few new network techniques.

Keywords: Internet, Digital Certificates, Digital Wallets, Grid Storage

1. Foreword
Internet turns 36, still a work in progress
Thirty-six years after computer scientists at UCLA linked two bulky computers using a 15-foot gray cable, testing a new way for exchanging data over networks, what would ultimately become the Internet remains a work in progress.
University researchers are experimenting with ways to increase its capacity and speed. Programmers are trying to imbue Web pages with intelligence. And work is underway to re-engineer the network to reduce spam (junk mail) and security troubles.
All the while threats loom: critics warn that commercial, legal and political pressures could hinder the types of innovations that made the Internet what it is today.
Stephen Crocker and Vinton Cerf were among the graduate students who joined UCLA professor Len Kleinrock in an engineering lab on Sept. 2, 1969, as bits of meaningless test data flowed silently between the two computers. By January, three other "nodes" joined the fledgling network.
Then came e-mail a few years later, a core communications protocol called TCP/IP in the late 70s, the domain name system in the 80s and the World Wide Web (now the second most popular application behind e-mail) in 1990. The Internet expanded beyond its initial military and educational domain into businesses and homes around the world.
Today, Crocker continues work on the Internet, designing better tools for collaboration.
And as security chairman for the Internet's key oversight body, he is trying to defend the core addressing system from outside threats.
He acknowledges the Internet he helped build is far from finished, and changes are in store to meet growing demands for multimedia. Network providers now make only "best efforts" at delivering data packets, and Crocker said better guarantees are needed to prevent the skips and stutters now common with video.
Cerf, now at MCI Inc., said he wished he could have designed the Internet with security built in. Microsoft Corp., Yahoo Inc. and America Online Inc., among others, are currently trying to retrofit the network so e-mail senders can be authenticated, a way to cut down on junk messages sent using spoofed addresses.
Many features being developed today wouldn't have been possible at birth given the slower computing speeds and narrower Internet pipes, or bandwidth, Cerf said.

2. Digital Certificates
Digital certificates are data files used to establish the identity of people and electronic assets on the Internet. They allow for secure, encrypted online communication and are often used to protect online transactions.
Digital certificates are issued by a trusted third party known as a certification authority (CA). The CA validates the identity of a certificate holder and "signs" the certificate to attest that it hasn't been forged or altered in any way.

New Uses for Digital Certificates
Digital certificates are now being used to provide security and validation for wireless connections, and hardware manufacturers are one of the latest groups to use them. Not long ago, VeriSign Inc.
announced its Cable Modem Authentication Services, which allow hardware manufacturers to embed digital certificates into cable modems to help prevent the pirating of broadband services through device cloning.
Using VeriSign software, hardware makers can generate cryptographic keys and corresponding digital certificates that manufacturers or cable service providers can use to automatically identify individual modems. This "last-mile" authentication not only protects the value of existing content and services but also positions cable system operators to bring a broad new range of content, applications and value-added services to market.
When a CA digitally signs a certificate, its owner can use it as an electronic passport to prove his identity. It can be presented to Web sites, networks or individuals that require secure access.
Identifying information embedded in the certificate includes the holder's name and e-mail address, the name of the CA, a serial number and any activation or expiration data for the certificate. When the CA verifies a user's identity, the certificate uses the holder's public encryption key to protect this data.
Certificates that a Web server uses to confirm the authenticity of a Web site for a user's browser also employ public keys. When a user wants to send confidential information to a Web server, such as a credit-card number for an online transaction, the browser will access the public key in the server's digital certificate to verify its identity.

Role of Public-Key Cryptography
The public key is one half of a pair of keys used in public-key cryptography, which provides the foundation for digital certificates.
Public-key cryptography uses matched public and private keys for encryption and decryption. These keys have a numerical value that's used by an algorithm to scramble information and make it readable only to users with the corresponding decryption key.
Others use a person's public key to encrypt information meant only for that person.
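The matched-key round trip just described (the public key encrypts; only the matching private key decrypts) can be demonstrated with a deliberately tiny, insecure textbook RSA example. The small primes are for illustration only; real certificates use keys hundreds of digits long:

```python
# Toy RSA: public key (e, n) encrypts, private key (d, n) decrypts.
p, q = 61, 53                 # small primes, illustration only
n = p * q                     # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

def encrypt(m: int) -> int:   # anyone may do this with the public key
    return pow(m, e, n)

def decrypt(c: int) -> int:   # only the private-key holder can do this
    return pow(c, d, n)

message = 65
ciphertext = encrypt(message)
assert ciphertext != message
assert decrypt(ciphertext) == message   # the round trip recovers the plaintext
```

The asymmetry is the point: publishing (e, n) lets anyone scramble a message for the holder, while recovering it requires d, which never leaves the holder's hands.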
When he receives the information, he uses his corresponding private key, which is kept secret, to decrypt the data. A person's public key can be distributed without damaging the private key. A Web server using a digital certificate can use its private key to make sure that only it can decrypt confidential information sent to it over the Internet.
The Web server's certificate is validated by a self-signed CA certificate that identifies the issuing CA. CA certificates are preinstalled on most major Web browsers, including Microsoft Internet Explorer and Netscape Navigator.
The CA certificate tells users whether they can trust the Web server certificate when it's presented to the browser. If the validity of the Web server certificate is affirmed, the certificate's public key is used to secure information for the server using Secure Sockets Layer (SSL) technology. Digital certificates are used by the SSL security protocol to create a secure "pipe" between two parties that seek confidential communication. SSL is used in most major Web browsers and commercial Web servers.

3.
Digital Wallets
A digital wallet is software that enables users to pay for goods on the Web. It holds credit-card numbers and other personal information such as a shipping address. Once entered, the data automatically populates order fields at merchant sites.
When using a digital wallet, consumers don't need to fill out order forms on each site when they purchase an item because the information has already been stored and is automatically updated and entered into the order fields across merchant sites. Consumers also benefit when using digital wallets because their information is encrypted or protected by a private software code. And merchants benefit by receiving protection against fraud.
Digital wallets are available to consumers free of charge, and they're fairly easy to obtain. For example, when a consumer makes a purchase at a merchant site that's set up to handle server-side digital wallets, he types his name and payment and shipping information into the merchant's own form. At the end of the purchase, the consumer is asked to sign up for a wallet of his choice by entering a user name and password for future purchases. Users can also acquire wallets at a wallet vendor's site.
Although a wallet is free for consumers, vendors charge merchants for wallets.
Digital wallets come in two main types: client-side and server-side. Within those divisions are wallets that work only on specific merchant sites and those that are merchant-agnostic.
Client-based digital wallets, the older of the two types, are falling by the wayside, according to analysts, because they require users to download and install software. A user downloads the wallet application and inputs payment and mailing information. At that point, the information is secured and encrypted on the user's hard drive. The user retains control of his credit card and personal information locally.
With a server-based wallet, a user fills out his personal information, and a cookie is automatically
downloaded. (A cookie is a text file that contains information about the user.) In this scenario, the consumer information resides on the server of a financial institution or a digital wallet vendor rather than on the user's PC.
Server-side wallets provide assurance against merchant fraud because they use certificates to verify the identity of all parties. When a party makes a transaction, it presents its certificate to the other parties involved. A certificate is an attachment to an electronic message used to verify the identity of the party and to provide the receiver with the means to encode a reply.
Furthermore, the cardholder's sensitive data is typically housed at a financial institution, so there's an extra sense of security because financial environments generally provide the highest degree of security.
But even though wallets provide easy shopping online, adoption hasn't been widespread.
Standards are pivotal to the success of digital wallets.
Last month, major vendors, including Microsoft Corp., Sun Microsystems Inc. and America Online Inc., announced their endorsement of a new standard called ECML, or E-Commerce Modeling Language, to give Web merchants a standardized way to collect electronic data for shipping, billing and payment.

4. Grid Storage
Definition: Grid storage, analogous to grid computing, is a new model for deploying and managing storage distributed across multiple systems and networks, making efficient use of available storage capacity without requiring a large, centralized switching system.
A grid is, in fact, a meshed network in which no single centralized switch or hub controls routing. Grids offer almost unlimited scalability in size and performance because they aren't constrained by the need for ever-larger central switches.
Grid networks thus reduce component costs and produce a reliable and resilient structure.
Applying the grid concept to a computer network lets us harness available but unused resources by dynamically allocating and deallocating capacity, bandwidth and processing among numerous distributed computers. A computing grid can span locations, organizations, machine architectures and software boundaries, offering power, collaboration and information access to connected users. Universities and research facilities are using grids to build what amounts to supercomputer capability from PCs, Macintoshes and Linux boxes.
After grid computing came into being, it was only a matter of time before a similar model would emerge for making use of distributed data storage. Most storage networks are built in star configurations, where all servers and storage devices are connected to a single central switch. In contrast, grid topology is built with a network of interconnected smaller switches that can scale as bandwidth increases and continue to deliver improved reliability and higher performance and connectivity.
Based on current and proposed products, it appears that a grid storage system should include the following:
Modular storage arrays: These systems are connected across a storage network using serial ATA disks.
The systems can be block-oriented storage arrays or network-attached storage gateways and servers.

Common virtualization layer: Storage must be organized as a single logical pool of resources available to users.

Data redundancy and availability: Multiple copies of data should exist across nodes in the grid, creating redundant data access and availability in case of a component failure.

Common management: A single level of management across all nodes should cover the areas of data security, mobility and migration, capacity on demand, and provisioning.

Simplified platform/management architecture: Because common management is so important, the tasks involved in administration should be organized in modular fashion, allowing the autodiscovery of new nodes in the grid and automating volume and file management.

Three Basic Benefits

Applying grid topology to a storage network provides several benefits, including the following:

Reliability. A well-designed grid network is extremely resilient. Rather than providing just two paths between any two nodes, the grid offers multiple paths between each storage node. This makes it easy to service and replace components in case of failure, with minimal impact on system availability or downtime.

Performance. The same factors that lead to reliability also can improve performance. Not requiring a centralized switch with many ports eliminates a potential performance bottleneck, and applying load-balancing techniques to the multiple paths available offers consistent performance for the entire network.

Scalability. It's easy to expand a grid network using inexpensive switches with low port counts to accommodate additional servers for increased performance, bandwidth and capacity. In essence, grid storage is a way to scale out rather than up, using relatively inexpensive storage building blocks.

IV. Translation: New Technologies in Computer Networks

Abstract: The 21st century is the era of the information economy. As a signature technology of this era, computer network technology will develop at a very rapid pace, constantly and creatively finding its way deep into people's work, study, and daily life.
Graduation Project (Thesis) Foreign Literature Translation (for undergraduate students)

Title: PLC based control system for the music fountain
Student name: _ ___ Student ID: 060108011117
Department: Information Faculty  Major and year: 06 Automation Class 1
Supervisor: ___ Title or degree: Teaching Assistant
20__ (year) __ (month) __ (day)

Foreign literature translation (into Chinese, about 1,000 characters): [The main sources read must number no fewer than five; after the translation, append the source information, including author, book title (or paper title), publisher (or journal name), publication date (or issue number), and page numbers.
Provide the translated foreign-language source as an attachment (for printed sources, include photocopies of the cover, back cover, table of contents, and the translated portion; for websites, include the URL and the original text).]

Excerpted English original:

The Central Processing Unit (CPU) is the brain of a PLC controller. The CPU itself is usually a microcontroller. Formerly these were 8-bit microcontrollers such as the 8051; now they are 16- and 32-bit microcontrollers. An unspoken rule is that you'll find mostly Hitachi and Fujitsu microcontrollers in PLC controllers by Japanese makers, Siemens in European controllers, and Motorola microcontrollers in American ones. The CPU also takes care of communication, interconnectedness among the other parts of the PLC controller, program execution, memory operation, and overseeing inputs and setting outputs. PLC controllers have complex routines for memory checkup to ensure that PLC memory has not been damaged (memory checkup is done for safety reasons). Generally speaking, the CPU unit performs a great number of check-ups of the PLC controller itself so that eventual errors are discovered early. You can simply look at any PLC controller and see that there are several indicators in the form of light diodes for error signaling.

System memory (today mostly implemented in FLASH technology) is used by a PLC for the process control system. Aside from the operating system, it also contains the user program, translated from a ladder diagram into binary form. FLASH memory contents can be changed only when the user program is changed. Earlier PLC controllers had EPROM memory instead of FLASH memory, which had to be erased with a UV lamp and programmed on a programmer. With the use of FLASH technology this process was greatly shortened. Reprogramming the program memory is done through a serial cable in the application development software.

User memory is divided into blocks having special functions. Some parts of the memory are used for storing input and output status.
The real status of an input is stored as either "1" or "0" in a specific memory bit; each input or output has one corresponding bit in memory. Other parts of memory are used to store the contents of variables used in the user program. For example, a timer value or a counter value would be stored in this part of the memory.

A PLC controller can be reprogrammed through a computer (the usual way), but also through manual programmers (consoles). In practice, this means that every PLC controller can be programmed through a computer if you have the software needed for programming. Today's portable computers are ideal for reprogramming a PLC controller in the factory itself. This is of great importance to industry. Once a system is corrected, it is also important to read the correct program back into the PLC. It is also good to check from time to time whether the program in a PLC has changed. This helps to avoid hazardous situations in factory rooms (some automakers have established communication networks that regularly check the programs in PLC controllers to ensure execution only of good programs). Almost every program for programming a PLC controller possesses various useful options such as: forced switching on and off of the system inputs/outputs (I/O lines), program follow-up in real time, and documenting a diagram. This documenting is necessary to understand and define failures and malfunctions. The programmer can add remarks, names of input or output devices, and comments that can be useful when finding errors or maintaining the system. Adding comments and remarks enables any technician (and not just the person who developed the system) to understand a ladder diagram right away. Comments and remarks can even quote the precise part numbers if replacements are needed. This speeds up the repair of any problems that come up due to bad parts.
The old way was such that the person who developed a system had protection on the program, so nobody aside from that person could understand how it was done. A correctly documented ladder diagram allows any technician to understand thoroughly how the system functions.

The electrical supply is used to bring electrical energy to the central processing unit. Most PLC controllers work at either 24 VDC or 220 VAC. On some PLC controllers you'll find the electrical supply as a separate module; those are usually the bigger PLC controllers, while small and medium series already contain the supply module. The user has to determine how much current to draw from the I/O module to ensure that the electrical supply provides the appropriate amount of current. Different types of modules use different amounts of electrical current. This electrical supply is usually not used to power external inputs or outputs. The user has to provide separate supplies for the PLC controller inputs, because that way you can ensure a so-called "pure" supply for the PLC controller. By a pure supply we mean one that the industrial environment cannot affect damagingly. Some of the smaller PLC controllers supply their inputs with voltage from a small supply source already incorporated into the PLC.

Chinese translation (excerpt): Structurally, PLCs are divided into two types: fixed and modular (module-based).
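The input/output image memory described above, where each physical input or output corresponds to exactly one memory bit, can be sketched as a toy model. The class below is an illustrative assumption (invented for this article, not any vendor's actual memory layout), with one hard-coded "rung" of logic to show how a scan cycle updates the output image from the input image:

```java
import java.util.BitSet;

// Toy model of a PLC's I/O image table: each input or output maps to
// one bit in memory, stored as 1 (high) or 0 (low).
public class IoImage {
    private final BitSet inputs = new BitSet(16);   // input image bits
    private final BitSet outputs = new BitSet(16);  // output image bits

    // The scan cycle samples a physical input and latches it as 1 or 0.
    public void latchInput(int bit, boolean high) { inputs.set(bit, high); }

    public boolean input(int bit)  { return inputs.get(bit); }
    public boolean output(int bit) { return outputs.get(bit); }

    // One invented rung of ladder logic: output 0 is energized when
    // input 0 is high AND input 1 is low (e.g. start pressed, stop not).
    public void scan() {
        outputs.set(0, inputs.get(0) && !inputs.get(1));
    }
}
```

With input 0 latched high and input 1 low, a scan turns output bit 0 on; latching input 1 high and scanning again turns it back off, mirroring how a real scan cycle re-evaluates the output image each pass.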
Foreign original:

JSP Application Frameworks
Brian Wright, Michael Freedman

What are application frameworks:

A framework is a reusable, semi-complete application that can be specialized to produce custom applications [Johnson]. Like people, software applications are more alike than they are different. They run on the same computers, expect input from the same devices, output to the same displays, and save data to the same hard disks. Developers working on conventional desktop applications are accustomed to toolkits and development environments that leverage the sameness between applications. Application frameworks build on this common ground to provide developers with a reusable structure that can serve as the foundation for their own products.

A framework provides developers with a set of backbone components that have the following characteristics:

1. They are known to work well in other applications.
2. They are ready to use with the next project.
3. They can also be used by other teams in the organization.

Frameworks are the classic build-versus-buy proposition. If you build it, you will understand it when you are done, but how long will it be before you can roll your own? If you buy it, you will have to climb the learning curve, and how long is that going to take? There is no right answer here, but most observers would agree that frameworks such as Struts provide a significant return on investment compared to starting from scratch, especially for larger projects.

Other types of frameworks:

The idea of a framework applies not only to applications but to application components as well. Throughout this article, we introduce other types of frameworks that you can use with Struts. These include the Lucene search engine, the Scaffold toolkit, the Struts validator, and the Tiles tag library.
Like application frameworks, these tools provide semi-complete versions of a subsystem that can be specialized to provide a custom component.

Some frameworks have been linked to a proprietary development environment. This is not the case with Struts or any of the other frameworks shown in this book. You can use any development environment with Struts: VisualAge for Java, JBuilder, Eclipse, Emacs, and TextPad are all popular choices among Struts developers. If you can use it with Java, you can use it with Struts.

Enabling technologies:

Applications developed with Struts are based on a number of enabling technologies. These components are not specific to Struts and underlie every Java web application. One reason that developers use frameworks like Struts is to hide the nasty details behind acronyms like HTTP, CGI, and JSP. As a Struts developer, you don't need to be an alphabet-soup guru, but a working knowledge of these base technologies can help you devise creative solutions to tricky problems.

Hypertext Transfer Protocol (HTTP):

When mediating talks between nations, diplomats often follow a formal protocol. Diplomatic protocols are designed to avoid misunderstandings and to keep negotiations from breaking down. In a similar vein, when computers need to talk, they also follow a formal protocol. The protocol defines how data is transmitted and how to decode it once it arrives. Web applications use the Hypertext Transfer Protocol (HTTP) to move data between the browser running on your computer and the application running on the server.

Many server applications communicate using protocols other than HTTP. Some of these maintain an ongoing connection between the computers. The application server knows exactly who is connected at all times and can tell when a connection is dropped. Because they know the state of each connection and the identity of each person using it, these are known as stateful protocols.

By contrast, HTTP is known as a stateless protocol.
An HTTP server will accept any request from any client and will always provide some type of response, even if the response is just to say no. Without the overhead of negotiating and retaining a connection, stateless protocols can handle a large volume of requests. This is one reason why the Internet has been able to scale to millions of computers.

Another reason HTTP has become the universal standard is its simplicity. An HTTP request looks like an ordinary text document. This has made it easy for applications to make HTTP requests. You can even send an HTTP request by hand using a standard utility such as Telnet. When the HTTP response comes back, it is also in plain text that developers can read.

The first line in the HTTP request contains the method, followed by the location of the requested resource and the version of HTTP. Zero or more HTTP request headers follow the initial line. The HTTP headers provide additional information to the server. This can include the browser type and version, acceptable document types, and the browser's cookies, just to name a few. Of the seven request methods, GET and POST are by far the most popular.

Once the server has received and serviced the request, it will issue an HTTP response. The first line in the response is called the status line and carries the HTTP protocol version, a numeric status, and a brief description of the status. Following the status line, the server will return a set of HTTP response headers that work in a way similar to the request headers.

As we mentioned, HTTP does not preserve state information between requests. The server logs the request, sends the response, and goes blissfully on to the next request. While simple and efficient, a stateless protocol is problematic for dynamic applications that need to keep track of their users. (Ignorance is not always bliss.) Cookies and URL rewriting are two common ways to keep track of users between requests.
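As a concrete illustration of the format just described, a minimal GET exchange might look like the following (the host, path, and header values are invented for the example): the request line carries the method, resource, and HTTP version, headers follow, and the response opens with a status line.

```
GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/4.0 (compatible)
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 50

<html><body><p>One line of HTML.</p></body></html>
```

Typing the first four lines into a Telnet session connected to port 80 is exactly the "by hand" request mentioned above.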
A cookie is a special packet of information on the user's computer. URL rewriting stores a special reference in the page address that a Java server can use to track users. Neither approach is seamless, and using either means extra work when developing a web application.

On its own, a standard HTTP web server does not traffic in dynamic content. It mainly uses the request to locate a file and then returns that file in the response. The file is typically formatted using Hypertext Markup Language (HTML) [W3C, HTML] that the web browser can format and display. The HTML page often includes hypertext links to other web pages and may display any number of other goodies, such as images and videos. The user clicks a link to make another request, and the process begins anew.

Standard web servers handle static content and images quite well but need a helping hand to provide users with a customized, dynamic response.

DEFINITION: Static content on the Web comes directly from text or data files, like HTML or JPEG files. These files might be changed from time to time, but they are not altered automatically when requested by a web browser. Dynamic content, on the other hand, is generated on the fly, typically in response to an individualized request from a browser.

Common Gateway Interface (CGI):

The first widely used standard for producing dynamic content was the Common Gateway Interface (CGI). CGI uses standard operating system features, such as environment variables and standard input and output, to create a bridge, or gateway, between the web server and other applications on the host machine. The other applications can look at the request sent to them by the web server and create a customized response.

When a web server receives a request that's intended for a CGI program, it runs that program and provides the program with information from the incoming request. The CGI program runs and sends its output back to the server.
The web server then relays the response to the browser.

CGI defines a set of conventions regarding what information it will pass as environment variables and how it expects standard input and output to be used. Like HTTP, CGI is flexible and easy to implement, and a great number of CGI-aware programs have been written.

The main drawback to CGI is that it must run a new copy of the CGI-aware program for each request. This is a relatively expensive process that can bog down high-volume sites where thousands of requests are serviced per minute. Another drawback is that CGI programs tend to be platform dependent. A CGI program written for one operating system may not run on another.

Java servlets:

Sun's Java Servlet platform directly addresses the two main drawbacks of CGI programs. First, servlets offer better performance and utilization of resources than conventional CGI programs. Second, the write-once, run-anywhere nature of Java means that servlets are portable between operating systems that have a Java Virtual Machine (JVM).

A servlet looks and feels like a miniature web server. It receives a request and renders a response. But, unlike conventional web servers, the servlet application programming interface (API) is specifically designed to help Java developers create dynamic applications.

The servlet itself is simply a Java class that has been compiled into byte code, like any other Java object. The servlet has access to a rich API of HTTP-specific services, but it is still just another Java object running in an application and can leverage all your other Java assets.

To give conventional web servers access to servlets, the servlets are plugged into containers. The servlet container is attached to the web server. Each servlet can declare what URL patterns it would like to handle.
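The pattern-based dispatch just described can be sketched outside a real servlet container. The tiny interface and map-based router below are illustrative stand-ins invented for this sketch, not the actual Servlet API (which uses web.xml or annotations for the registrations):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a servlet container's dispatch table: each "servlet"
// registers the URL pattern it wants to handle, and the container
// routes an incoming path to the first matching registration.
public class MiniContainer {
    interface Handler { String handle(String path); }

    private final Map<String, Handler> registrations = new LinkedHashMap<>();

    // A servlet declares its URL pattern at registration time, e.g. "/shop/*".
    public void register(String pattern, Handler servlet) {
        registrations.put(pattern, servlet);
    }

    // Route a request path: a pattern ending in "/*" matches a prefix,
    // anything else must match exactly; unmatched paths get a 404.
    public String service(String path) {
        for (Map.Entry<String, Handler> e : registrations.entrySet()) {
            String p = e.getKey();
            boolean match = p.endsWith("/*")
                ? path.startsWith(p.substring(0, p.length() - 1))
                : path.equals(p);
            if (match) return e.getValue().handle(path);
        }
        return "404";
    }
}
```

A real container does the same lookup before invoking the matched servlet's service method on a worker thread.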
When a request matching a registered pattern arrives, the web server passes the request to the container, and the container invokes the servlet.

But unlike CGI programs, a new servlet is not created for each request. Once the container instantiates the servlet, it just creates a new thread for each request. Java threads are much less expensive than the server processes used by CGI programs. Once the servlet has been created, using it for additional requests incurs very little overhead. Servlet developers can use the init() method to hold references to expensive resources, such as database connections or EJB home interfaces, so that they can be shared between requests. Acquiring resources like these can take several seconds, which is longer than many surfers are willing to wait.

The other edge of the sword is that, since servlets are multithreaded, servlet developers must take special care to be sure their servlets are thread-safe. To learn more about servlet programming, we recommend Java Servlets by Example, by Alan R. Williamson [Williamson]. The definitive source for servlet information is the Java Servlet Specification [Sun, JST].

JavaServer Pages:

While Java servlets are a big step up from CGI programs, they are not a panacea. To generate the response, developers are still stuck with using println statements to render the HTML. Code that looks like:

out.println("<P>One line of HTML.</P>");
out.println("<P>Another line of HTML.</P>");

is all too common in servlets that generate the HTTP response. There are libraries that can help you generate HTML, but as applications grow more complex, Java developers end up being cast into the role of HTML page designers.

Meanwhile, given the choice, most project managers prefer to divide development teams into specialized groups. They like HTML designers to be working on the presentation while Java engineers sweat the business logic.
Using servlets alone encourages mixing markup with business logic, making it difficult for team members to specialize.

To solve this problem, Sun turned to the idea of using server pages to combine scripting and templating technologies into a single component. To build JavaServer Pages, developers start by creating HTML pages in the same old way, using the same old HTML syntax. To bring dynamic content into the page, the developer can also place JSP scripting elements on the page. Scripting elements are tags that encapsulate logic that is recognized by the JSP. You can easily pick out scripting elements on JSP pages by looking for code that begins with <% and ends with %>.

To be seen as a JSP page, the file just needs to be saved with an extension of .jsp. When a client requests the JSP page, the container translates the page into a source code file for a Java servlet and compiles the source into a Java class file, just as you would do if you were writing a servlet from scratch. At runtime, the container can also check the last-modified date of the JSP file against the class file. If the JSP file has changed since it was last compiled, the container will retranslate and rebuild the page all over again.

Project managers can now assign the presentation layer to HTML developers, who then pass on their work to Java developers to complete the business-logic portion. The important thing to remember is that a JSP page is really just a servlet. Anything you can do with a servlet, you can do with a JSP.

JavaBeans:

JavaBeans are Java classes that conform to a set of design patterns that make them easier to use with development tools and other components.

DEFINITION: A JavaBean is a reusable software component written in Java. To qualify as a JavaBean, the class must be concrete and public, and have a no-argument constructor. JavaBeans expose internal fields as properties by providing public methods that follow a consistent design pattern.
Knowing that the property names follow this pattern, other Java classes are able to use introspection to discover and manipulate JavaBean properties.

The JavaBean design patterns provide access to the bean's internal state through two flavors of methods: accessors are used to read a JavaBean's state; mutators are used to change a JavaBean's state.

Mutators are always prefixed with the lowercase token set, followed by the property name. The first character in the property name must be uppercase. The return value is always void; mutators only change property values, they do not retrieve them. The mutator for a simple property takes only one parameter in its signature, which can be of any type. Mutators are often nicknamed setters after their prefix. The mutator method signature for a weight property of the type Double would be:

public void setWeight(Double weight)

A similar design pattern is used to create the accessor method signature. Accessor methods are always prefixed with the lowercase token get, followed by the property name. The first character in the property name must be uppercase. The return value will match the method parameter in the corresponding mutator. Accessors for simple properties cannot accept parameters in their method signature. Not surprisingly, accessors are often called getters. The accessor method signature for our weight property is:

public Double getWeight()

If the accessor returns a logical value, there is a variant pattern. Instead of using the lowercase token get, a logical property can use the prefix is, followed by the property name. The first character in the property name must be uppercase. The return value will always be a logical value, either boolean or Boolean. Logical accessors cannot accept parameters in their method signature. The boolean accessor method signature for an on property would be:

public boolean isOn()

The canonical method signatures play an important role when working with JavaBeans.
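Putting the signatures above together, a complete bean for the weight and on properties might look like the sketch below (the Scale class name is invented for this example):

```java
// A minimal JavaBean: concrete, public, with a no-argument constructor.
// The weight property has a set/get pair; the boolean on property uses
// the "is" variant for its accessor.
public class Scale {
    private Double weight;
    private boolean on;

    public Scale() {}  // required no-argument constructor

    // Mutator ("setter"): void return, one parameter, prefix "set".
    public void setWeight(Double weight) { this.weight = weight; }

    // Accessor ("getter"): no parameters, prefix "get".
    public Double getWeight() { return weight; }

    // Logical accessor: boolean properties may use the "is" prefix.
    public boolean isOn() { return on; }

    public void setOn(boolean on) { this.on = on; }
}
```

Because the method names follow the canonical pattern, a tool can discover the weight property reflectively, for example via Scale.class.getMethod("getWeight"), without any configuration.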
Other components are able to use the Java Reflection API to discover a JavaBean's properties by looking for methods prefixed by set, is, or get. If a component finds such a signature on a JavaBean, it knows that the method can be used to access or change the bean's properties.

Sun introduced JavaBeans to work with GUI components, but they are now used with every aspect of Java development, including web applications. When Sun engineers developed the JSP tag extension classes, they designed them to work with JavaBeans. The dynamic data for a page can be passed as a JavaBean, and the JSP tag can then use the bean's properties to customize the output.

For more on JavaBeans, we highly recommend The Awesome Power of JavaBeans, by Lawrence H. Rodrigues [Rodrigues]. The definitive source for JavaBean information is the JavaBeans Specification [Sun, JBS].

Model 2:

The 0.92 release of the Servlet/JSP Specification described Model 2 as an architecture that uses servlets and JSP pages together in the same application. The term Model 2 disappeared from later releases, but it remains in popular use among Java web developers.

Under Model 2, servlets handle the data access and navigational flow, while JSP pages handle the presentation. Model 2 lets Java engineers and HTML developers each work on their own part of the application. A change in one part of a Model 2 application does not mandate a change to another part of the application. HTML developers can often change the look and feel of an application without changing how the back-office servlets work.

The Struts framework is based on the Model 2 architecture. It provides a controller servlet to handle the navigational flow and special classes to help with the data access. A substantial custom tag library is bundled with the framework to make Struts easy to use with JSP pages.

Summary:

In this article, we introduced Struts as an application framework.
We examined the technology behind HTTP, the Common Gateway Interface, Java servlets, JSPs, and JavaBeans. We also looked at the Model 2 application architecture to see how it is used to combine servlets and JSPs in the same application. Now that you have had a taste of what it is like to develop a web application with Struts, in chapter 2 we dig deeper into the theory and practice behind the Struts architecture.

Translation: JSP Application Frameworks
Brian Wright, Michael Freedman

What are application frameworks: A framework is a reusable, semi-complete application that can be specialized to produce custom applications.
Graduation Project (Thesis) Foreign Literature Translation

Chinese title of the literature: Fingerprinting the Operating System
English title of the literature:
Source of the literature:
Publication date of the literature:
Department: Major: Class: Name: Student ID: Supervisor:
Translation date: 2017.02.14

Abstract: This paper presents a method for classifying protocol fingerprints. It uses frames to describe the fingerprints in order to build a frame system, then obtains host information and matches it against that system to identify the type of operating system running on the remote host.
Experimental results show that this method can identify operating systems effectively, and that it acts more stealthily than other systems such as nmap and xprobe.
Key words: Transmission Control Protocol/Internet Protocol (TCP/IP), fingerprint, operating system identification

Identifying the operating system of a remote host is an important field.
Knowing the host's operating system makes it possible to analyze and obtain information such as its memory management and CPU type.
This information is very important for computer network attack and defense.
Identification is mainly accomplished through TCP/IP fingerprinting.
Nearly all operating systems customize their own protocol stacks while following the RFCs.
As a result, every protocol stack differs in its implementation details.
These differing details are the fingerprints that make identifying the operating system possible.
Nmap and Queso use fingerprints at the transport layer.
They send crafted packets to the target, analyze the returned packets, and look for a matching fingerprint in the fingerprint database to obtain the result.
The information in the fingerprint database is affected by the specified probe messages, so it is hard to distinguish similar operating systems (for example, Windows 98/2000/XP). Xprobe mainly uses the ICMP protocol, employing five kinds of ICMP packets to identify the operating system.
It can give the probability that each possible candidate is indeed the actual operating system.
Its main shortcoming is that it depends excessively on the ICMP protocol.
SYNSCAN uses the fingerprints of some typical fields when it communicates with the target host over an application protocol.
Its fingerprint database covers only a limited set of fields.
Ring and Ttbit identify the operating system using the performance characteristics of TCP/IP. Because such characteristics are greatly affected by the network environment, the results are often inexact.
The literature [7] analyzes the behavior in messages obtained through interception (for example, the number of SYN requests, or how a closed port responds to a connection request).
Although this approach is effective, it can distinguish only a few specific operating systems. All of the systems above lack a way to describe the operating system's fingerprint integrally, so their identification relies on only part of TCP/IP. This paper introduces a new method to solve these problems: it describes the operating system fingerprint uniformly, obtains messages using certain techniques, and finally identifies the operating system.
Section II presents the basic concepts of the method; Section III uses frame technology to describe and match protocol fingerprints; Section IV gives an algorithm implementing the method; Section V validates the method's effectiveness through experiments and analyzes the results; finally, Section VI concludes the paper and discusses future work.
The identification procedure obtains messages, extracts the fingerprint, and matches it against the records in the fingerprint database in order to determine the operating system type.
This section defines the means of obtaining messages and the actions and states of communication, and classifies the fingerprints.
This work prepares for the next section, which describes how to build a frame system to identify fingerprints.
Conclusion: In this paper, we have presented a method for identifying the operating system of a remote host.
The method uses frame technology to express fingerprints; a Probe and a Monitor together obtain messages, and the information extracted from the messages is matched against the fingerprint database to identify the operating system.
Experiments show that, compared with nmap and xprobe, this method can accurately identify the operating system of a remote host.
In the future, we plan to collect more fingerprints for each kind of operating system and make the algorithm more intelligent, in order to improve identification accuracy.
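The matching step described above, comparing the features extracted from captured messages against the records in the fingerprint database, can be sketched as a toy scorer. The feature names and OS records below are invented examples for illustration, not real fingerprint data or the paper's actual frame representation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy fingerprint matcher: each OS record maps feature names to the
// values observed in that OS's protocol stack. A probe result is
// identified by counting how many features agree with each record
// and returning the best-scoring OS.
public class FingerprintMatcher {
    private final Map<String, Map<String, String>> database = new HashMap<>();

    public void addRecord(String os, Map<String, String> features) {
        database.put(os, features);
    }

    public String identify(Map<String, String> observed) {
        String best = "unknown";
        int bestScore = 0;
        for (Map.Entry<String, Map<String, String>> rec : database.entrySet()) {
            int score = 0;
            for (Map.Entry<String, String> f : rec.getValue().entrySet()) {
                if (f.getValue().equals(observed.get(f.getKey()))) score++;
            }
            if (score > bestScore) { bestScore = score; best = rec.getKey(); }
        }
        return best;
    }
}
```

Real tools such as nmap use far richer feature sets and weighting, but the principle, scoring observed stack behavior against stored records, is the same.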
This paper presents a method that classifies protocol fingerprints, using frames to describe the fingerprints in order to create a frame system; information obtained from the host is matched against the system to identify the type of OS on the remote host. Experimental results show that this method can identify the OS effectively, and its action is stealthier than that of other systems such as nmap and xprobe.

Key words: TCP/IP, fingerprint, OS

It is an important field to identify what OS runs on a remote host. Knowing the OS makes it possible to analyze and acquire information such as the memory management and the kind of CPU. This information is important for computer network attack and computer network defense.

The main way to identify the OS is through its TCP/IP fingerprint. Nearly all kinds of OS customize their own protocol stacks while following the RFCs. This causes every protocol stack to differ in some implementation details. These details are known as the fingerprint, which makes it possible to identify the OS.

Nmap and Queso [1] use the fingerprint at the transport layer. They send particular packets to the target and analyze the returned packets, matching against the fingerprint warehouse to get the result. The information in the warehouse is affected by the specified probe messages. It is hard to distinguish similar OSes (e.g., Windows 98/2000/XP).

Xprobe [2] mainly uses ICMP, making use of five kinds of ICMP packets to identify the OS. It can give the probability of every possible candidate being the actual OS. Its main shortcoming is that it depends excessively on the ICMP protocol.

SYNSCAN [3] uses the fingerprints of some typical fields when it communicates with the target host over an application protocol. Its fingerprint warehouse covers limited types of fields.

Ring and Ttbit [5][6] identify the OS using the performance characteristics of TCP/IP. This kind of characteristic is greatly affected by the network environment.
The results are therefore often inexact.

The literature [7] analyzes the behavior in messages acquired through interception (e.g., the number of SYN requests, or how a closed port responds to a connection request). Although this approach is effective, it distinguishes only a few given OSes.

All of these systems lack a way to describe the fingerprint of an OS integrally, which causes the identification to depend on only part of TCP/IP. This paper proposes a new method to resolve the problem: it describes the fingerprint of the OS uniformly, acquires the messages with some techniques, and identifies the OS at last.

The rest of the paper is organized as follows: Section II presents the basic concepts of the method. Section III presents how to describe and match the protocol fingerprint using frame technology. Section IV presents an algorithm to implement the method, and Section V uses experiments to validate its effectiveness and analyzes the results. Finally, Section VI presents the concluding remarks and possible future work.

The identification procedure is to acquire messages, extract the fingerprint, and match it against the records of the fingerprint warehouse in order to learn the OS type. This section defines the measures used to acquire messages and the actions and states of communication, and also classifies the fingerprints. This work prepares for the next section, which describes how to build a frame system for the fingerprints.

Conclusion

In this paper, we have presented a method for identifying the OS of a remote host.
The method uses frame technology to express the fingerprint, combines a Probe and a Monitor to get messages, and abstracts information from the messages to match against the fingerprint warehouse, identifying the OS at last. Experiments show that this method can exactly identify the OS of a remote host, acting more stealthily and with fewer packets than nmap and xprobe. In the future, we plan to collect more fingerprints for each kind of OS and make the algorithm more intelligent, in order to improve the precision of identification.

Whether the Japanese media intend to inspire a spirit of national self-reliance, or the American media are further promoting the "China threat" theory, in short, some foreign media have recently been proclaiming that China has become the "world factory".