Computer Science Graduation Thesis (Fingerprint Recognition / Operating Systems): Foreign Literature Translation and Original Text
Graduation Project (Thesis): Foreign Source Material, Original Text and Translation
Source: "Cloud Computing", by Michael Miller
The English original follows.

Beyond the Desktop: An Introduction to Cloud Computing

In a world that sees new technological trends bloom and fade on almost a daily basis, one new trend promises more longevity. This trend is called cloud computing, and it will change the way you use your computer and the Internet.

Cloud computing portends a major change in how we store information and run applications. Instead of running programs and data on an individual desktop computer, everything is hosted in the "cloud"—a nebulous assemblage of computers and servers accessed via the Internet. Cloud computing lets you access all your applications and documents from anywhere in the world, freeing you from the confines of the desktop and making it easier for group members in different locations to collaborate.

The emergence of cloud computing is the computing equivalent of the electricity revolution of a century ago. Before the advent of electrical utilities, every farm and business produced its own electricity from freestanding generators. After the electrical grid was created, farms and businesses shut down their generators and bought electricity from the utilities, at a much lower price (and with much greater reliability) than they could produce on their own.

Look for the same type of revolution to occur as cloud computing takes hold. The desktop-centric notion of computing that we hold today is bound to fall by the wayside as we come to expect the universal access, 24/7 reliability, and ubiquitous collaboration promised by cloud computing. It is the way of the future.

Cloud Computing: What It Is—and What It Isn't

With traditional desktop computing, you run copies of software programs on each computer you own. The documents you create are stored on the computer on which they were created.
Although documents can be accessed from other computers on the network, they can't be accessed by computers outside the network. The whole scene is PC-centric.

With cloud computing, the software programs you use aren't run from your personal computer, but are rather stored on servers accessed via the Internet. If your computer crashes, the software is still available for others to use. The same goes for the documents you create; they're stored on a collection of servers accessed via the Internet. Anyone with permission can not only access the documents, but can also edit and collaborate on those documents in real time. Unlike traditional computing, this cloud computing model isn't PC-centric, it's document-centric. Which PC you use to access a document simply isn't important.

But that's a simplification. Let's look in more detail at what cloud computing is—and, just as important, what it isn't.

What Cloud Computing Isn't

First, cloud computing isn't network computing. With network computing, applications/documents are hosted on a single company's server and accessed over the company's network. Cloud computing is a lot bigger than that. It encompasses multiple companies, multiple servers, and multiple networks. Plus, unlike network computing, cloud services and storage are accessible from anywhere in the world over an Internet connection; with network computing, access is over the company's network only.

Cloud computing also isn't traditional outsourcing, where a company farms out (subcontracts) its computing services to an outside firm. While an outsourcing firm might host a company's data or applications, those documents and programs are only accessible to the company's employees via the company's network, not to the entire world via the Internet.

So, despite superficial similarities, network computing and outsourcing are not cloud computing.

What Cloud Computing Is

Key to the definition of cloud computing is the "cloud" itself.
For our purposes, the cloud is a large group of interconnected computers. These computers can be personal computers or network servers; they can be public or private. For example, Google hosts a cloud that consists of both smallish PCs and larger servers. Google's cloud is a private one (that is, Google owns it) that is publicly accessible (by Google's users).

This cloud of computers extends beyond a single company or enterprise. The applications and data served by the cloud are available to a broad group of users, cross-enterprise and cross-platform. Access is via the Internet. Any authorized user can access these docs and apps from any computer over any Internet connection. And, to the user, the technology and infrastructure behind the cloud is invisible. It isn't apparent (and, in most cases, doesn't matter) whether cloud services are based on HTTP, HTML, XML, JavaScript, or other specific technologies.

It might help to examine how one of the pioneers of cloud computing, Google, perceives the topic. From Google's perspective, there are six key properties of cloud computing:

· Cloud computing is user-centric. Once you as a user are connected to the cloud, whatever is stored there—documents, messages, images, applications, whatever—becomes yours. In addition, not only is the data yours, but you can also share it with others. In effect, any device that accesses your data in the cloud also becomes yours.

· Cloud computing is task-centric. Instead of focusing on the application and what it can do, the focus is on what you need done and how the application can do it for you. Traditional applications—word processing, spreadsheets, email, and so on—are becoming less important than the documents they create.

· Cloud computing is powerful. Connecting hundreds or thousands of computers together in a cloud creates a wealth of computing power impossible with a single desktop PC.

· Cloud computing is accessible.
Because data is stored in the cloud, users can instantly retrieve more information from multiple repositories. You're not limited to a single source of data, as you are with a desktop PC.

· Cloud computing is intelligent. With all the various data stored on the computers in a cloud, data mining and analysis are necessary to access that information in an intelligent manner.

· Cloud computing is programmable. Many of the tasks necessary with cloud computing must be automated. For example, to protect the integrity of the data, information stored on a single computer in the cloud must be replicated on other computers in the cloud. If that one computer goes offline, the cloud's programming automatically redistributes that computer's data to a new computer in the cloud.

All these definitions behind us, what constitutes cloud computing in the real world? As you'll learn throughout this book, a raft of web-hosted, Internet-accessible, group-collaborative applications are currently available, with many more on the way. Perhaps the best and most popular examples of cloud computing applications today are the Google family of applications—Google Docs & Spreadsheets, Google Calendar, Gmail, Picasa, and the like. All of these applications are hosted on Google's servers, are accessible to any user with an Internet connection, and can be used for group collaboration from anywhere in the world.

In short, cloud computing enables a shift from the computer to the user, from applications to tasks, and from isolated data to data that can be accessed from anywhere and shared with anyone. The user no longer has to take on the task of data management; he doesn't even have to remember where the data is. All that matters is that the data is in the cloud, and thus immediately available to that user and to other authorized users.

From Collaboration to the Cloud: A Short History of Cloud Computing

Cloud computing has as its antecedents both client/server computing and peer-to-peer distributed computing.
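The "programmable" property described above, automatic re-replication when a machine goes offline, can be illustrated with a small sketch. This is a toy model, not anything from Miller's text: the node names, the replication factor, and the placement policy are all illustrative assumptions.

```python
import random

# Toy model: every object is kept on REPLICAS different nodes, and when a
# node drops out, its objects are automatically copied onto other nodes so
# the replication factor is restored.
REPLICAS = 2

class ToyCloud:
    def __init__(self, nodes):
        self.nodes = {name: set() for name in nodes}   # node -> object ids

    def put(self, obj_id):
        # Place the object on REPLICAS distinct nodes.
        for name in random.sample(sorted(self.nodes), REPLICAS):
            self.nodes[name].add(obj_id)

    def locations(self, obj_id):
        return {n for n, objs in self.nodes.items() if obj_id in objs}

    def fail(self, name):
        # The node goes offline; re-replicate its objects elsewhere.
        lost = self.nodes.pop(name)
        for obj_id in lost:
            if len(self.locations(obj_id)) < REPLICAS:
                candidates = [n for n in self.nodes
                              if obj_id not in self.nodes[n]]
                self.nodes[random.choice(candidates)].add(obj_id)

cloud = ToyCloud(["node-a", "node-b", "node-c", "node-d"])
cloud.put("doc-1")
cloud.fail(next(iter(cloud.locations("doc-1"))))   # kill one replica holder
print(len(cloud.locations("doc-1")))               # replication is restored
```

The point of the sketch is only the automation: no human operator decides where the surviving copies go.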
It's all a matter of how centralized storage facilitates collaboration and how multiple computers work together to increase computing power.

Client/Server Computing: Centralized Applications and Storage

In the antediluvian days of computing (pre-1980 or so), everything operated on the client/server model. All the software applications, all the data, and all the control resided on huge mainframe computers, otherwise known as servers. If a user wanted to access specific data or run a program, he had to connect to the mainframe, gain appropriate access, and then do his business while essentially "renting" the program or data from the server.

Users connected to the server via a computer terminal, sometimes called a workstation or client. This computer was sometimes called a dumb terminal because it didn't have a lot (if any!) memory, storage space, or processing power. It was merely a device that connected the user to and enabled him to use the mainframe computer.

Users accessed the mainframe only when granted permission, and the information technology (IT) staff weren't in the habit of handing out access casually. Even on a mainframe computer, processing power is limited—and the IT staff were the guardians of that power. Access was not immediate, nor could two users access the same data at the same time.

Beyond that, users pretty much had to take whatever the IT staff gave them—with no variations. Want to customize a report to show only a subset of the normal information? Can't do it. Want to create a new report to look at some new data? You can't do it, although the IT staff can—but on their schedule, which might be weeks from now.

The fact is, when multiple people are sharing a single computer, even if that computer is a huge mainframe, you have to wait your turn. Need to rerun a financial report? No problem—if you don't mind waiting until this afternoon, or tomorrow morning.
There isn't always immediate access in a client/server environment, and seldom is there immediate gratification.

So the client/server model, while providing similar centralized storage, differed from cloud computing in that it did not have a user-centric focus; with client/server computing, all the control rested with the mainframe—and with the guardians of that single computer. It was not a user-enabling environment.

Peer-to-Peer Computing: Sharing Resources

As you can imagine, accessing a client/server system was kind of a "hurry up and wait" experience. The server part of the system also created a huge bottleneck. All communications between computers had to go through the server first, however inefficient that might be.

The obvious need to connect one computer to another without first hitting the server led to the development of peer-to-peer (P2P) computing. P2P computing defines a network architecture in which each computer has equivalent capabilities and responsibilities. This is in contrast to the traditional client/server network architecture, in which one or more computers are dedicated to serving the others. (This relationship is sometimes characterized as a master/slave relationship, with the central server as the master and the client computer as the slave.)

P2P was an equalizing concept. In the P2P environment, every computer is a client and a server; there are no masters and slaves. By recognizing all computers on the network as peers, P2P enables direct exchange of resources and services. There is no need for a central server, because any computer can function in that capacity when called on to do so.

P2P was also a decentralizing concept. Control is decentralized, with all computers functioning as equals. Content is also dispersed among the various peer computers. No centralized server is assigned to host the available resources and services.

Perhaps the most notable implementation of P2P computing is the Internet.
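The peer relationship described above, every computer acting as both client and server, can be sketched in a few lines. This is an illustrative toy, not from Miller's text; the peer names and the shared files are assumptions.

```python
# Toy P2P network: each Peer both serves its own files (server role) and
# fetches files from its neighbors (client role). No central machine is
# involved in answering a request.

class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = dict(files)    # resources this peer serves
        self.neighbors = []

    def connect(self, other):
        # Peers link symmetrically; neither side is the "master".
        self.neighbors.append(other)
        other.neighbors.append(self)

    def fetch(self, filename):
        # Act as a client: check locally, then ask each neighbor,
        # which acts as a server for its own files.
        if filename in self.files:
            return self.files[filename]
        for peer in self.neighbors:
            if filename in peer.files:
                return peer.files[filename]
        return None

a = Peer("peer-a", {"song.mp3": "..."})
b = Peer("peer-b", {"notes.txt": "meeting notes"})
a.connect(b)
print(a.fetch("notes.txt"))   # served directly by peer-b, no central server
```

Either peer can answer the other's requests; remove any one machine and the rest of the network still functions, which is exactly the decentralization the text describes.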
Many of today's users forget (or never knew) that the Internet was initially conceived, under its original ARPAnet guise, as a peer-to-peer system that would share computing resources across the United States. The various ARPAnet sites—and there weren't many of them—were connected together not as clients and servers, but as equals.

The P2P nature of the early Internet was best exemplified by the Usenet network. Usenet, which was created back in 1979, was a network of computers (accessed via the Internet), each of which hosted the entire contents of the network. Messages were propagated between the peer computers; users connecting to any single Usenet server had access to all (or substantially all) the messages posted to each individual server. Although the users' connection to the Usenet server was of the traditional client/server nature, the relationship between the Usenet servers was definitely P2P—and presaged the cloud computing of today.

That said, not every part of the Internet is P2P in nature. With the development of the World Wide Web came a shift away from P2P back to the client/server model. On the web, each website is served up by a group of computers, and sites' visitors use client software (web browsers) to access it. Almost all content is centralized, all control is centralized, and the clients have no autonomy or control in the process.

Distributed Computing: Providing More Computing Power

One of the most important subsets of the P2P model is that of distributed computing, where idle PCs across a network or across the Internet are tapped to provide computing power for large, processor-intensive projects. It's a simple concept, all about cycle sharing between multiple computers.

A personal computer, running full-out 24 hours a day, 7 days a week, is capable of tremendous computing power. Most people don't use their computers 24/7, however, so a good portion of a computer's resources go unused.
Distributed computing uses those resources. When a computer is enlisted for a distributed computing project, software is installed on the machine to run various processing activities during those periods when the PC is typically unused. The results of that spare-time processing are periodically uploaded to the distributed computing network, and combined with similar results from other PCs in the project. The result, if enough computers are involved, simulates the processing power of much larger mainframes and supercomputers—which is necessary for some very large and complex computing projects.

For example, genetic research requires vast amounts of computing power. Left to traditional means, it might take years to solve essential mathematical problems. By connecting together thousands (or millions) of individual PCs, more power is applied to the problem, and the results are obtained that much sooner.

Distributed computing dates back to 1973, when multiple computers were networked together at the Xerox PARC labs and worm software was developed to cruise through the network looking for idle resources. A more practical application of distributed computing appeared in 1988, when researchers at the DEC (Digital Equipment Corporation) System Research Center developed software that distributed the work to factor large numbers among workstations within their laboratory. By 1990, a group of about 100 users, utilizing this software, had factored a 100-digit number. By 1995, this same effort had been expanded to the web to factor a 130-digit number.

It wasn't long before distributed computing hit the Internet. The first major Internet-based distributed computing project, launched in 1997, employed thousands of personal computers to crack encryption codes.
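The cycle-sharing scheme described above, cutting one large job into independent work units, farming each unit out to a spare machine, and combining the uploaded partial results, can be sketched as follows. The "machines" here are just names in a list, a stand-in assumption for real networked PCs, and the sum-of-squares job is purely illustrative.

```python
# A minimal sketch of cycle sharing: split a big job into independent work
# units, let each "idle PC" process one unit, then combine the partial
# results on the coordinating server.

def split_job(numbers, unit_size):
    """Cut one big task into independent work units."""
    return [numbers[i:i + unit_size] for i in range(0, len(numbers), unit_size)]

def process_unit(unit):
    """The work a single volunteer PC does in its spare cycles."""
    return sum(n * n for n in unit)

def run_project(numbers, machines, unit_size=4):
    units = split_job(numbers, unit_size)
    partials = []
    for i, unit in enumerate(units):
        machine = machines[i % len(machines)]   # round-robin over idle PCs
        partials.append(process_unit(unit))     # the "uploaded" partial result
    return sum(partials)                        # combined on the server

total = run_project(list(range(100)), ["pc-1", "pc-2", "pc-3"])
print(total)   # same answer as computing it on one machine, just spread out
```

Because the units are independent, adding more machines shortens the wall-clock time without changing the answer, which is why projects like the number-factoring efforts above scaled so well.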
Even bigger was SETI@home, launched in May 1999, which linked together millions of individual computers to search for intelligent life in outer space.

Many distributed computing projects are conducted within large enterprises, using traditional network connections to form the distributed computing network. Other, larger projects utilize the computers of everyday Internet users, with the computing typically taking place offline, and then uploaded once a day via traditional consumer Internet connections.

Collaborative Computing: Working as a Group

From the early days of client/server computing through the evolution of P2P, there has been a desire for multiple users to work simultaneously on the same computer-based project. This type of collaborative computing is the driving force behind cloud computing, but has been around for more than a decade.

Early group collaboration was enabled by the combination of several different P2P technologies. The goal was (and is) to enable multiple users to collaborate on group projects online, in real time.

To collaborate on any project, users must first be able to talk to one another. In today's environment, this means instant messaging for text-based communication, with optional audio/telephony and video capabilities for voice and picture communication. Most collaboration systems offer the complete range of audio/video options, for full-featured multiple-user video conferencing.

In addition, users must be able to share files and have multiple users work on the same document simultaneously. Real-time whiteboarding is also common, especially in corporate and education environments.

Early group collaboration systems ranged from the relatively simple (Lotus Notes and Microsoft NetMeeting) to the extremely complex (the building-block architecture of the Groove Networks system).
Most were targeted at large corporations, and limited to operation over the companies' private networks.

Cloud Computing: The Next Step in Collaboration

With the growth of the Internet, there was no need to limit group collaboration to a single enterprise's network environment. Users from multiple locations within a corporation, and from multiple organizations, desired to collaborate on projects that crossed company and geographic boundaries. To do this, projects had to be housed in the "cloud" of the Internet, and accessed from any Internet-enabled location.

The concept of cloud-based documents and services took wing with the development of large server farms, such as those run by Google and other search companies. Google already had a collection of servers that it used to power its massive search engine; why not use that same computing power to drive a collection of web-based applications—and, in the process, provide a new level of Internet-based group collaboration?

That's exactly what happened, although Google wasn't the only company offering cloud computing solutions. On the infrastructure side, IBM, Sun Microsystems, and other "big iron" providers are offering the hardware necessary to build cloud networks. On the software side, dozens of companies are developing cloud-based applications and storage services.

Today, people are using cloud services and storage to create, share, find, and organize information of all different types. Tomorrow, this functionality will be available not only to computer users, but to users of any device that connects to the Internet—mobile phones, portable music players, even automobiles and home television sets.

The Network Is the Computer: How Cloud Computing Works

Sun Microsystems's slogan is "The network is the computer," and that is as good a description as any of how cloud computing works. In essence, a network of computers functions as a single computer to serve data and applications to users over the Internet.
The network exists in the "cloud" of IP addresses that we know as the Internet, offers massive computing power and storage capability, and enables wide-scale group collaboration.

But that's the simple explanation. Let's take a look at how cloud computing works in more detail.

Understanding Cloud Architecture

The key to cloud computing is the "cloud"—a massive network of servers, or even individual PCs, interconnected in a grid. These computers run in parallel, combining the resources of each to generate supercomputing-like power.

What, exactly, is the "cloud"? Put simply, the cloud is a collection of computers and servers that are publicly accessible via the Internet. This hardware is typically owned and operated by a third party on a consolidated basis in one or more data center locations. The machines can run any combination of operating systems; it's the processing power of the machines that matters, not what their desktops look like.

As shown in Figure 1.1, individual users connect to the cloud from their own personal computers or portable devices, over the Internet. To these individual users, the cloud is seen as a single application, device, or document. The hardware in the cloud (and the operating system that manages the hardware connections) is invisible.

FIGURE 1.1 How users connect to the cloud.

This cloud architecture is deceptively simple, although it does require some intelligent management to connect all those computers together and assign task processing to multitudes of users. As you can see in Figure 1.2, it all starts with the front-end interface seen by individual users. This is how users select a task or service (either starting an application or opening a document). The user's request then gets passed to the system management, which finds the correct resources and then calls the system's appropriate provisioning services.
These services carve out the necessary resources in the cloud, launch the appropriate web application, and either create or open the requested document. After the web application is launched, the system's monitoring and metering functions track the usage of the cloud so that resources are apportioned and attributed to the proper user(s).

FIGURE 1.2 The architecture behind a cloud computing system.

As you can see, key to the notion of cloud computing is the automation of many management tasks. The system isn't a cloud if it requires human management to allocate processes to resources. What you have in that instance is merely a twenty-first-century version of old-fashioned data center-based client/server computing. For the system to attain cloud status, manual management must be replaced by automated processes.

Understanding Cloud Storage

One of the primary uses of cloud computing is for data storage. With cloud storage, data is stored on multiple third-party servers, rather than on the dedicated servers used in traditional networked data storage.

When storing data, the user sees a virtual server—that is, it appears as if the data is stored in a particular place with a specific name. But that place doesn't exist in reality. It's just a pseudonym used to reference virtual space carved out of the cloud. In reality, the user's data could be stored on any one or more of the computers used to create the cloud. The actual storage location may even differ from day to day, or even minute to minute, as the cloud dynamically manages available storage space. But even though the location is virtual, the user sees a "static" location for his data—and can actually manage his storage space as if it were connected to his own PC.

Cloud storage has both financial and security-related advantages. Financially, virtual resources in the cloud are typically cheaper than dedicated physical resources connected to a personal computer or network.
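The virtual-server idea described above, one stable name the user always sees, mapped behind the scenes to whichever physical machines currently hold the bytes, can be sketched as a toy lookup table. The path and node names are illustrative assumptions, not anything from the book.

```python
# Toy virtual store: the user addresses a stable virtual name, while a
# mapping records which physical nodes currently hold the data. The cloud
# may "migrate" the data between machines at any time without the virtual
# name ever changing.

class VirtualStore:
    def __init__(self):
        self.mapping = {}     # virtual name -> set of physical nodes
        self.blobs = {}       # virtual name -> data

    def write(self, name, data, nodes):
        self.mapping[name] = set(nodes)
        self.blobs[name] = data

    def migrate(self, name, new_nodes):
        # Physical placement changes; the user-visible name does not.
        self.mapping[name] = set(new_nodes)

    def read(self, name):
        # The user reads by virtual name, never by physical location.
        return self.blobs[name]

store = VirtualStore()
store.write("/my-drive/report.doc", b"quarterly numbers", ["node-3", "node-7"])
store.migrate("/my-drive/report.doc", ["node-1", "node-9"])
print(store.read("/my-drive/report.doc"))   # same data, new physical home
```

The user-facing name behaves like a "static" location even though the data has moved, which is exactly the pseudonym behavior the text describes.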
As for security, data stored in the cloud is protected against accidental erasure and hardware crashes, because it is duplicated across multiple physical machines; since multiple copies of the data are kept continually, the cloud continues to function as normal even if one or more machines go offline. If one machine crashes, the data is still available on other machines in the cloud.

Understanding Cloud Services

Any web-based application or service offered via cloud computing is called a cloud service. Cloud services can include anything from calendar and contact applications to word processing and presentations. Almost all large computing companies today, from Google to Amazon to Microsoft, are developing various types of cloud services.

With a cloud service, the application itself is hosted in the cloud. An individual user runs the application over the Internet, typically within a web browser. The browser accesses the cloud service and an instance of the application is opened within the browser window. Once launched, the web-based application operates and behaves like a standard desktop application. The only difference is that the application and the working documents remain on the host's cloud servers.

Cloud services offer many advantages. If the user's PC crashes, it doesn't affect either the host application or the open document; both remain unaffected in the cloud. In addition, an individual user can access his applications and documents from any location on any PC. He doesn't have to have a copy of every app and file with him when he moves from office to home to remote location. Finally, because documents are hosted in the cloud, multiple users can collaborate on the same document in real time, using any available Internet connection. Documents are no longer machine-centric. Instead, they're always available to any authorized user.

Companies in the Cloud: Cloud Computing Today

We're currently in the early days of the cloud computing revolution.
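The document-centric collaboration just described, one authoritative copy of a document living in the cloud that any authorized user can open and edit from any machine, can be sketched as a toy model. The user names and the permission check are assumptions for illustration, not details from the book.

```python
# Toy cloud document: a single authoritative copy, plus a permission list.
# An edit made from any client is immediately visible to every other
# authorized client, because everyone reads the same cloud copy.

class CloudDocument:
    def __init__(self, authorized):
        self.lines = []
        self.authorized = set(authorized)

    def append(self, user, text):
        if user not in self.authorized:
            raise PermissionError(f"{user} may not edit this document")
        self.lines.append(text)

    def view(self, user):
        if user not in self.authorized:
            raise PermissionError(f"{user} may not read this document")
        return list(self.lines)

doc = CloudDocument(authorized=["alice", "bob"])
doc.append("alice", "Draft agenda")     # edited from Alice's office PC
doc.append("bob", "Add budget item")    # edited from Bob's laptop at home
print(doc.view("alice"))                # both users see the same single copy
```

Which PC each user happens to be sitting at never appears in the model at all, which is the sense in which the document, not the machine, is the center of the system.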
Although many cloud services are available today, more and more interesting applications are still in development. That said, cloud computing today is attracting the best and biggest companies from across the computing industry, all of whom hope to establish profitable business models based in the cloud.

As discussed earlier in this chapter, perhaps the most noticeable company currently embracing the cloud computing model is Google. As you'll see throughout this book, Google offers a powerful collection of web-based applications, all served via its cloud architecture. Whether you want cloud-based word processing (Google Docs), presentation software (Google Presentations), email (Gmail), or calendar/scheduling functionality (Google Calendar), Google has an offering. And best of all, Google is adept at getting all of its web-based applications to interface with one another; its cloud services are interconnected to the user's benefit.

Other major companies are also involved in the development of cloud services. Microsoft, for example, offers its Windows Live suite of web-based applications, as well as the Live Mesh initiative that promises to link together all types of devices, data, and applications in a common cloud-based platform. Amazon has its Elastic Compute Cloud (EC2), a web service that provides cloud-based resizable computing capacity for application developers. IBM has established a Cloud Computing Center to deliver cloud services and research to clients. And numerous smaller companies have launched their own web-based applications, primarily (but not exclusively) to exploit the collaborative nature of cloud services.

As we work through this book, we'll examine many of these companies and their offerings. All you need to know for now is that there's a big future in cloud computing—and everybody's jumping on the bandwagon.

Why Cloud Computing Matters

Why is cloud computing important?
There are many implications of cloud technology, for both developers and end users.

For developers, cloud computing provides increased amounts of storage and processing power to run the applications they develop. Cloud computing also enables new ways to access information, process and analyze data, and connect people and resources from any location anywhere in the world. In essence, it takes the lid off the box; with cloud computing, developers are no longer boxed in by physical constraints.

For end users, cloud computing offers all those benefits and more. A person using a web-based application isn't physically bound to a single PC, location, or network. His applications and documents can be accessed wherever he is, whenever he wants. Gone is the fear of losing data if a computer crashes. Documents hosted in the cloud always exist, no matter what happens to the user's machine. And then there's the benefit of group collaboration. Users from around the world can collaborate on the same documents, applications, and projects, in real time. It's a whole new world of collaborative computing, all enabled by the notion of cloud computing.

And cloud computing does all this at lower costs, because the cloud enables more efficient sharing of resources than does traditional network computing. With cloud computing, hardware doesn't have to be physically adjacent to a firm's office or data center. Cloud infrastructure can be located anywhere, including and especially areas with lower real estate and electricity costs. In addition, IT departments don't have to engineer for peak-load capacity, because the peak load can be spread out among the external assets in the cloud. And, because additional cloud resources are always at the ready, companies no longer have to purchase assets for infrequent intensive computing tasks. If you need more processing power, it's always there in the cloud—and accessible on a cost-efficient basis.
English Reference and Translation

Linux: The Operating System of the Network Age

For many people, the fact that a huge workstation cluster running Linux as its main operating system produced the special effects for "Titanic" would already count as a star turn. But for Linux, that is only one piece of news among many. Recently, announcements of Linux support from the manufacturers concerned have been increasing by the day, and user enthusiasm for Linux is running higher than ever. What, then, is the appeal of this operating system, free for more than seven years now, that has won the favor of so many users and of such important software and hardware manufacturers as Oracle, Informix, HP, Sybase, Corel, Intel, Netscape, and Dell?

1. The Background and Characteristics of Linux

Linux is a kind of "free" software: users can obtain the program and its source code freely, and can use them freely, including modifying or copying them. It is a product of the network age: numerous technical staff completed its research and development together over the Internet, countless users tested it and reported its faults, and users can conveniently add expansion functions of their own. As the most outstanding piece of free software, Linux has the following characteristics:

(1) It fully follows the POSIX standard, and is a network operating system supporting all AT&T and BSD Unix features. Because it inherits Unix's outstanding design philosophy, and has a clean, robust, efficient, and stable kernel, with all the key code written by Linus Torvalds and other outstanding programmers, without any Unix code from AT&T or Berkeley, Linux is not Unix; but Linux and Unix are fully compatible.

(2) It is a true multitasking, multi-user system with built-in network support, and can link seamlessly with NetWare, Windows NT, OS/2, Unix, and the like. In comparative evaluations of networking efficiency among the various kinds of Unix, it tests as the fastest.
It simultaneously supports many kinds of file systems, such as FAT16, FAT32, NTFS, Ext2FS, and ISO9660.

(3) It can run on many kinds of hardware platforms, including processors such as Alpha, SPARC, PowerPC, and MIPS, and support for various kinds of new peripheral hardware arrives rapidly from the numerous programmers distributed around the world.

(4) Its hardware requirements are low, and it achieves very good performance on lower-end machines. What deserves particular mention is Linux's outstanding stability: its uptime is often counted in years.

2. Main Applications of Linux

At present, the application of Linux mainly includes:

(1) Internet/Intranet: This is the area where Linux is used most at present. It can offer all the Internet services, including Web servers, FTP servers, Gopher servers, SMTP/POP3 mail servers, Proxy/Cache servers, DNS servers, and so on. The Linux kernel supports IP aliasing, PPP, and IP tunneling; these functions can be used for setting up virtual hosts, virtual services, VPNs (virtual private networks), and the like. The Apache Web server, run mainly on Linux, had a market share of 49% in 1998, far exceeding the combined share of several big companies such as Microsoft and Netscape.

(2) Because Linux has outstanding networking ability, it can be used in large-scale distributed computing, for instance animation rendering, scientific calculation, and database and file servers.

(3) Because it provides a full Unix-like environment even on low-end platforms, it is applied extensively in teaching and research work at all levels of universities and colleges; the Mexican government, for example, has already announced that middle and primary schools throughout the country will deploy Linux and offer Internet service to students.

(4) Desktop and office applications.
The number of users in this area is at present still far below that of Microsoft Windows. The reason lies not merely in the fact that Linux has far fewer desktop application programs than Windows, but also in the fact that the character of free software leaves it with almost no advertising support (for example, although the functionality of StarOffice is not second to MS Office, there are actually few people who know of it).

3. Can Linux become a major operating system?

In the face of pressure from users that strengthens day by day, more and more commercial companies are porting their applications to the Linux platform. The comparatively important events of 1998 were as follows: (1) Compaq and HP decided, at the request of users, to preinstall Linux on their servers, and IBM and Dell also promised to offer customized Linux systems to users. (2) Lotus announced that the next edition of Notes would include a special-purpose Linux edition. (3) Corel ported its famous WordPerfect to Linux and released it free of charge; Corel also plans to move its other graphics-processing products to the Linux platform completely. (4) The main database producers Sybase, Informix, Oracle, CA, and IBM have already ported their own database products to Linux, or have finished beta editions; among them, Oracle and Informix also offer technical support for their products.

4. What is gratifying is that some farsighted domestic corporations have already begun to try hard to change this state of affairs. Stone Co. recently announced a huge investment to develop an Internet/Intranet solution with Linux as the platform, to launch Stone's system-integration business with this as the core, and at the same time to set up a nationwide Linux technical-support organization, taking the lead in promoting the application and development of free software in China.
In addition, domestic computer companies are also devoted to popularizing Linux-related software and hardware application systems. It can be believed that as understanding of Linux deepens, more and more domestic enterprises will join the ranks of Linux users, and more software will be ported to the Linux platform. Meanwhile, domestic universities should take Linux as the base version for upgrading their existing Unix course content, start with analyzing the source code and revising the kernel, train a large number of senior Linux talents, and improve our country's own operating system. Only by truly grasping the operating system can our country's software industry rid itself of its present passive state of sedulous imitation, of being led by the nose by others, and fundamentally create the conditions for revitalizing our country's software industry.

Chinese Translation

Linux - The Operating System of the Internet Age: Although for many people, using Linux as the main operating system to form a huge workstation cluster and complete the special effects of Titanic already counts as having stolen the show.
1. Introduction to Objects

1.1 The progress of abstraction

All programming languages provide abstractions. It can be argued that the complexity of the problems you're able to solve is directly related to the kind and quality of abstraction.
By "kind" I mean, "What is it that you are abstracting?" Assembly language is a small abstraction of the underlying machine. Many so-called "imperative" languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language.
These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve.
The programmer must establish the association between the machine model (in the "solution space," which is the place where you're modeling that problem, such as a computer) and the model of the problem that is actually being solved (in the "problem space," which is the place where the problem exists). The effort required to perform this mapping, and the fact that it is extrinsic to the programming language, produces programs that are difficult to write and expensive to maintain, and as a side effect created the entire "programming methods" industry. The alternative to modeling the machine is to model the problem you're trying to solve.
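The distinction can be made concrete with a small sketch (a hypothetical illustration in Python, not drawn from the text): the first style forces the reader to decode machine-oriented state through a private convention, while the second speaks the vocabulary of the problem space.

```python
# Machine-model style: raw storage plus an indexing convention that
# lives only in the programmer's head (solution space).
light_states = [0, 0]            # 0 = off, 1 = on; index 0 is "the hall light"

def flip(index):
    light_states[index] = 1 - light_states[index]

# Problem-model style: the program is phrased in terms of the
# problem space, so the mapping effort largely disappears.
class Light:
    """An object representing a light, the thing the problem is about."""
    def __init__(self):
        self.on = False

    def toggle(self):
        self.on = not self.on

hall = Light()
hall.toggle()                    # reads like a statement about the problem
```

Both fragments do the same work; the difference is purely where the mapping between problem and machine is carried — in the reader's head, or in the program itself.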
Chinese-English Literature Translation for Computer Majors

CNC

CNC stands for Computerized Numerical Control and has been around since the early 1970s. Prior to this, it was called NC, for numerical control. While people in most walks of life have never heard of this term, CNC has touched almost every form of manufacturing process in one way or another. If you'll be working in manufacturing, it's likely that you'll be dealing with CNC on a regular basis.

Before CNC

While there are exceptions to this statement, CNC machines typically replace (or work in conjunction with) some existing manufacturing processes. Take one of the simplest manufacturing processes, drilling holes, for example.

A drill press can of course be used to machine holes. A person can place a drill in the drill chuck that is secured in the spindle of the drill press. They can then (manually) select the desired speed for rotation (commonly by switching belt pulleys), and activate the spindle. Then they manually pull on the quill lever to drive the drill into the workpiece being machined.

As you can easily see, there is a lot of manual intervention required to use a drill press to machine holes. A person is required to do something almost every step along the way! While this manual intervention may be acceptable for manufacturing companies if only a small number of holes or workpieces must be machined, as quantities grow, so does the likelihood of fatigue due to the tediousness of the operation. And do note that we've used one of the simplest machining operations (drilling) for our example. There are more complicated machining operations that would require a much higher skill level of the person running the conventional machine tool (and increase the potential for mistakes resulting in scrap workpieces).
(We commonly refer to the style of machine that CNC is replacing as the conventional machine.)

By comparison, the CNC equivalent for a drill press (possibly a CNC machining center or CNC drilling & tapping center) can be programmed to perform this operation in a much more automatic fashion. Everything that the drill press operator was doing manually will now be done by the CNC machine, including: placing the drill in the spindle, activating the spindle, positioning the workpiece under the drill, machining the hole, and turning off the spindle.

How CNC works

As you might already have guessed, everything that an operator would be required to do with conventional machine tools is programmable with CNC machines. Once the machine is set up and running, a CNC machine is quite simple to keep running. In fact CNC operators tend to get quite bored during lengthy production runs because there is so little to do. Here are some of the specific programmable functions found on CNC machines.

Motion control

All CNC machine types share this commonality: they all have two or more programmable directions of motion called axes. An axis of motion can be linear (along a straight line) or rotary (along a circular path). One of the first specifications that implies a CNC machine's complexity is how many axes it has. Generally speaking, the more axes, the more complex the machine.

The axes of any CNC machine are required for the purpose of causing the motions needed for the manufacturing process. In the drilling example, these axes would position the tool over the hole to be machined (in two axes) and machine the hole (with the third axis). Axes are named with letters. Common linear axis names are X, Y, and Z; common rotary axis names are A, B, and C. These names are related to the machine's coordinate system.

Programmable accessories

A CNC machine wouldn't be very helpful if all it could do was move the workpiece in two or more axes. Almost all CNC machines are programmable in several other ways.
The specific CNC machine type has a lot to do with its appropriate programmable accessories. Again, any required function will be programmable on full-blown CNC machine tools. Here are some examples for one machine type (machining centers).

Automatic tool changer

Most machining centers can hold many tools in a tool magazine. When required, the needed tool can be automatically placed in the spindle for machining.

Spindle speed and activation

The spindle speed (in revolutions per minute) can be easily specified, and the spindle can be turned on in a forward or reverse direction. It can also, of course, be turned off.

Coolant

Many machining operations require coolant for lubrication and cooling purposes. Coolant can be turned on and off from within the machine cycle.

The CNC program

Think of giving any series of step-by-step instructions. A CNC program is nothing more than another kind of instruction set. It's written in sentence-like format, and the control will execute it in sequential order, step by step. A special series of CNC words are used to communicate what the machine is intended to do. CNC words begin with letter addresses (like F for feedrate, S for spindle speed, and X, Y, and Z for axis motion). When placed together in a logical method, a group of CNC words makes up a command that resembles a sentence.

The CNC control

The CNC control will interpret a CNC program and activate the series of commands in sequential order. As it reads the program, the CNC control will activate the appropriate machine functions, cause axis motion, and in general, follow the instructions given in the program.

Along with interpreting the CNC program, the CNC control has several other purposes. All current-model CNC controls allow programs to be modified (edited) if mistakes are found. The CNC control allows special verification functions (like dry run) to confirm the correctness of the CNC program.
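The sentence-like structure of a CNC command can be illustrated with a minimal word parser (a Python sketch under simplifying assumptions; real controls have dialect-specific rules the text does not cover):

```python
def parse_cnc_command(line):
    """Split a CNC command such as 'G01 X1.5 Y2.0 F10.0' into
    (letter_address, value) word pairs, in the order written."""
    words = []
    for token in line.split():
        letter, number = token[0].upper(), token[1:]
        words.append((letter, float(number)))
    return words

# Each word is a letter address followed by a value, and the whole
# group reads like a sentence: move linearly to X1.5 Y2.0 at feedrate 10.
command = parse_cnc_command("G01 X1.5 Y2.0 F10.0")
```

A control works through such commands one at a time, exactly as the text describes: read a command, activate the named functions, then move to the next.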
The CNC control allows certain important operator inputs, like tool length values, to be specified separately from the program. In general, the CNC control allows all functions of the machine to be manipulated.

What is a CAM system?

For simple applications (like drilling holes), the CNC program can be developed manually. That is, a programmer will sit down to write the program armed only with pencil, paper, and calculator. Again, for simple applications, this may be the very best way to develop CNC programs.

As applications get more complicated, and especially when new programs are required on a regular basis, writing programs manually becomes much more difficult. To simplify the programming process, a computer aided manufacturing (CAM) system can be used. A CAM system is a software program that runs on a computer (commonly a PC) that helps the CNC programmer with the programming process. Generally speaking, a CAM system will take the tediousness and drudgery out of programming.

In many companies the CAM system will work with the computer aided design (CAD) drawing developed by the company's design engineering department. This eliminates the need for redefining the workpiece configuration to the CAM system. The CNC programmer will simply specify the machining operations to be performed, and the CAM system will create the CNC program (much like the one a manual programmer would have written) automatically.

What is a DNC system?

Once the program is developed (either manually or with a CAM system), it must be loaded into the CNC control. Though the setup person could type the program right into the control, this would be like using the CNC machine as a very expensive typewriter. If the CNC program is developed with the help of a CAM system, then it is already in the form of a text file. If the program is written manually, it can be typed into any computer using a common word processor (though most companies use a special CNC text editor for this purpose).
Either way, the program is in the form of a text file that can be transferred right into the CNC machine. A distributive numerical control (DNC) system is used for this purpose. A DNC system is nothing more than a computer that is networked with one or more CNC machines. Until only recently, a rather crude serial communications protocol (RS-232C) had to be used for transferring programs. Newer controls have more current communications capabilities and can be networked in more conventional ways (Ethernet, etc.). Regardless of the method, the CNC program must of course be loaded into the CNC machine before it can be run.

When numerical control is performed under computer supervision, it is called Computer Numerical Control (CNC). Computers are the control units of CNC machines. They are built in or linked to the machines via communications channels. When a programmer inputs information into the program by tape or other media, the computer calculates all the data necessary to get the job done. Today's systems have computers controlling the data, so they are called computer numerically controlled machines. For both NC and CNC systems, the working principles are the same; only the way in which the execution is controlled is different. Normally, the newer systems are faster, more powerful, and more versatile units.

The Construction of CNC Machines

CNC machine tools are complex assemblies. However, in general, any CNC machine tool consists of the following units: computers, control systems, drive motors, and tool changers. Given this construction, CNC machines work in the following manner:

(1) The machine language, a programming language of binary notation used internally by computers, is not written directly by the CNC programmer.

(2) When the operator starts the execution cycle, the computer translates binary codes into electronic pulses that are automatically sent to the machine's power units.
The control units compare the number of pulses sent and received.

(3) When the motors receive each pulse, they automatically transform the pulses into rotations that drive the spindle and lead screw, causing the spindle rotation and slide or table movement. The part on the milling machine table or the tool in the lathe turret is driven to the position specified by the program.

1. Computers

As with all computers, the CNC machine computer works on the binary principle, using only two characters, 1 and 0, for information processing, with precise time impulses from the circuit. There are two states: a state with voltage, 1, and a state without voltage, 0. Series of ones and zeroes, the only states that the computer distinguishes, are called machine language, and machine language is the only language the computer understands. When creating the program, the programmer does not care about the machine language. He or she simply uses a list of codes and keys in the meaningful information. Special built-in software compiles the program into the machine language, and the machine moves the tool by its servomotors.

However, the programmability of the machine depends on whether there is a computer in the machine's control. If there is a minicomputer, then when programming, say, a radius (which is a rather simple task), the computer will calculate all the points on the tool path. On a machine without a minicomputer, this may prove to be a tedious task, since the programmer must calculate all the points of intersection on the tool path. Modern CNC machines use 32-bit processors in their computers, which allow fast and accurate processing of information.

2. Control systems

There are two types of control systems on NC/CNC machines: the open loop and the closed loop. The type of control loop used determines the overall accuracy of the machine. The open-loop control system does not provide positioning feedback to the control unit.
The movement pulses are sent out by the control and received by a special type of servomotor called a stepper motor. The number of pulses that the control sends to the stepper motor controls the amount of rotation of the motor. The stepper motor then proceeds with the next movement command. Since this control system only counts pulses and cannot identify discrepancies in positioning, the machine will continue this inaccuracy until somebody finds the error.

The open-loop control can be used in applications in which there is no change in load conditions, such as the NC drilling machine. The advantage of the open-loop control system is that it is less expensive, since it does not require the additional hardware and electronics needed for positioning feedback. The disadvantage is the difficulty of detecting a positioning error.

In the closed-loop control system, the electronic movement pulses are sent from the control to the servomotor, enabling the motor to rotate with each pulse. The movements are detected and counted by a feedback device called a transducer. With each step of movement, a transducer sends a signal back to the control, which compares the current position of the driven axis with the programmed position. When the number of pulses sent and received matches, the control starts sending out pulses for the next movement.

Closed-loop systems are very accurate. Most have automatic compensation for error, since the feedback device indicates the error and the control makes the necessary adjustments to bring the slide back to position. They use AC, DC, or hydraulic servomotors.

Position measurement in NC machines can be accomplished through direct or indirect methods. In direct measuring systems, a sensing device reads a graduated scale on the machine table or slide for linear movement.
This system is more accurate because the scale is built into the machine, and backlash (the play between two adjacent mating gear teeth) in the mechanisms is not significant. In indirect measuring systems, rotary encoders or resolvers convert rotary movement to translational movement. In this system, backlash can significantly affect measurement accuracy. Position feedback mechanisms utilize various sensors based mainly on magnetic and photoelectric principles.

3. Drive motors

The drive motors control the machine slide movement on NC/CNC equipment. They come in four basic types: stepper motors, DC servomotors, AC servomotors, and fluid servomotors.

Stepper motors convert a digital pulse generated by the microcomputer unit (MCU) into a small step rotation. Stepper motors have a certain number of steps that they can travel. The number of pulses that the MCU sends to the stepper motor controls the amount of rotation of the motor. Stepper motors are mostly used in applications where low torque is required. Stepper motors are used in open-loop control systems, while AC, DC, or hydraulic servomotors are used in closed-loop control systems.

Direct current (DC) servomotors are variable-speed motors that rotate in response to the applied voltage. They are used to drive a lead screw and gear mechanism. DC servomotors provide higher torque output than stepper motors.

Alternating current (AC) servomotors are controlled by varying the voltage frequency to control speed. They can develop more power than a DC servomotor. They are also used to drive a lead screw and gear mechanism.

Fluid or hydraulic servomotors are also variable-speed motors. They are able to produce more power (or, in the case of pneumatic motors, more speed) than electric servomotors. The hydraulic pump provides energy to valves that are controlled by the MCU.

4. Tool changers

Most of the time, several different cutting tools are used to produce a part.
The tools must be replaced quickly for the next machining operation. For this reason, the majority of NC/CNC machine tools are equipped with automatic tool changers, such as magazines on machining centers and turrets on turning centers. Typically, an automatic tool changer grips the tool in the spindle, pulls it out, and replaces it with another tool. On most machines with automatic tool changers, the turret or magazine can rotate in either direction, forward or reverse.

Tool changers may be equipped for either random or sequential selection. In random tool selection, there is no specific pattern of tool selection. On the machining center, when the program calls for the tool, it is automatically indexed into waiting position, where it can be retrieved by the tool-handling device. On the turning center, the turret automatically rotates, bringing the tool into position.

While the specific intention and application for CNC machines vary from one machine type to another, all forms of CNC have common benefits. Here are but a few of the more important benefits offered by CNC equipment.

The first benefit offered by all forms of CNC machine tools is improved automation. The operator intervention related to producing workpieces can be reduced or eliminated. Many CNC machines can run unattended during their entire machining cycle, freeing the operator to do other tasks. This gives the CNC user several side benefits, including reduced operator fatigue, fewer mistakes caused by human error, and consistent and predictable machining time for each workpiece. Since the machine will be running under program control, the skill level required of the CNC operator (related to basic machining practice) is also reduced as compared to a machinist producing workpieces with conventional machine tools.

The second major benefit of CNC technology is consistent and accurate workpieces. Today's CNC machines boast almost unbelievable accuracy and repeatability specifications.
This means that once a program is verified, two, ten, or one thousand identical workpieces can be easily produced with precision and consistency.

A third benefit offered by most forms of CNC machine tools is flexibility. Since these machines are run from programs, running a different workpiece is almost as easy as loading a different program. Once a program has been verified and executed for one production run, it can be easily recalled the next time the workpiece is to be run.

This leads to yet another benefit: fast changeovers. Since these machines are very easy to set up and run, and since programs can be easily loaded, they allow very short setup times. This is imperative with today's just-in-time (JIT) production requirements.

Motion control - the heart of CNC

The most basic function of any CNC machine is automatic, precise, and consistent motion control. Rather than applying completely mechanical devices to cause motion, as is required on most conventional machine tools, CNC machines allow motion control in a revolutionary manner. All forms of CNC equipment have two or more directions of motion, called axes. These axes can be precisely and automatically positioned along their lengths of travel. The two most common axis types are linear (driven along a straight path) and rotary (driven along a circular path).

Instead of causing motion by turning cranks and handwheels as is required on conventional machine tools, CNC machines allow motions to be commanded through programmed commands. Generally speaking, the motion rate (feedrate) is programmable with almost all CNC machine tools. A CNC command executed within the control tells the drive motor to rotate a precise number of times. The rotation of the drive motor in turn rotates the ball screw, and the ball screw drives the linear axis (slide). A feedback device (linear scale) on the slide allows the control to confirm that the commanded number of rotations has taken place.

Chinese Translation

CNC Technology: CNC stands for Computerized Numerical Control, which has attracted people's attention since the early 1970s.
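The command-to-motion chain just described (pulses → motor rotation → ball screw → slide) can be sketched numerically. The step count and screw pitch below are illustrative assumptions, not values from the text:

```python
STEPS_PER_REV = 200       # assumed stepper resolution (1.8 degrees per step)
SCREW_PITCH_MM = 5.0      # assumed slide travel per ball-screw revolution

def pulses_for_move(distance_mm):
    """Pulses the control must send so the ball screw turns enough
    revolutions to move the slide the requested distance."""
    revolutions = distance_mm / SCREW_PITCH_MM
    return round(revolutions * STEPS_PER_REV)

# A 12.5 mm move needs 2.5 screw revolutions, i.e. 500 pulses; in a
# closed-loop system the feedback scale would then confirm that the
# commanded number of steps actually occurred.
pulses = pulses_for_move(12.5)
```

The same arithmetic, run in reverse against the feedback count, is what lets a closed-loop control detect and compensate for a positioning error.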
Title: Programming Overlay Networks with Overlay Sockets

The emergence of application-layer overlay networks has inspired the development of new network services and applications. Research on overlay networks has focused on the design of protocols to maintain and forward data in an overlay network; however, less attention has been given to the software development process of building application programs in such an environment. Clearly, the complexity of overlay network protocols calls for suitable application programming interfaces (APIs) and abstractions that do not require detailed knowledge of the overlay protocol and thereby simplify the task of the application programmer. In this paper, we present the concept of an overlay socket as a new programming abstraction that serves as the endpoint of communication in an overlay network. The overlay socket provides a socket-based API that is independent of the chosen overlay topology and can be configured to work for different overlay topologies. The overlay socket can support application data transfer over TCP, UDP, or other transport protocols. This paper describes the design of the overlay socket and discusses API and configuration options.

1 Introduction

Application-layer overlay networks [5, 9, 13, 17] provide flexible platforms for developing new network services [1, 10, 11, 14, 18-20] without requiring changes to the network-layer infrastructure. Members of an overlay network, which can be hosts, routers, servers, or applications, organize themselves to form a logical network topology, and communicate only with their respective neighbors in the overlay topology. A member of an overlay network sends and receives application data, and also forwards data intended for other members. This paper addresses application development in overlay networks.
We use the term overlay network programming to refer to the software development process of building application programs that communicate with one another in an application-layer overlay network. (This work is supported in part by the National Science Foundation.) The diversity and complexity of building and maintaining overlay networks make it impractical to assume that application developers can be concerned with the complexity of managing the participation of an application in a specific overlay network topology.

We present a software module, called the overlay socket, that intends to simplify the task of overlay network programming. The design of the overlay socket pursues the following set of objectives: First, the application programming interface (API) of the overlay socket does not require that an application programmer have knowledge of the overlay network topology. Second, the overlay socket is designed to accommodate different overlay network topologies; switching to a different overlay network topology is done by modifying parameters in a configuration file. Third, the overlay socket, which operates at the application layer, can accommodate different types of transport-layer protocols. This is accomplished by using network adapters that interface to the underlying transport-layer network and perform encapsulation and de-encapsulation of messages exchanged by the overlay socket. Currently available network adapters are TCP, UDP, and UDP multicast. Fourth, the overlay socket provides mechanisms for bootstrapping new overlay networks.

In this paper, we provide an overview of the overlay socket design and discuss overlay network programming with the overlay socket. The overlay socket has been implemented in Java as part of the HyperCast 2.0 software distribution [12]. The software has been used for various overlay applications, and has been tested in both local-area and wide-area settings.
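The network-adapter objective can be sketched as follows. The class and method names are invented for illustration (the actual implementation is in Java as part of HyperCast, and its API may differ), but the structure shows the idea: the overlay socket depends only on an adapter interface, so the transport can be swapped without touching overlay code.

```python
from abc import ABC, abstractmethod

class NetworkAdapter(ABC):
    """Interface between an overlay socket and the transport network;
    concrete adapters encapsulate and de-encapsulate messages."""
    @abstractmethod
    def send(self, physical_addr, payload: bytes) -> None: ...

class UDPAdapter(NetworkAdapter):
    """Stand-in for a UDP adapter; records sends instead of opening a
    real socket so the sketch stays self-contained."""
    def __init__(self):
        self.sent = []
    def send(self, physical_addr, payload: bytes) -> None:
        self.sent.append((physical_addr, payload))

class OverlaySocket:
    """Forwards application data through whichever adapter it was
    configured with; overlay logic never names TCP or UDP directly."""
    def __init__(self, adapter: NetworkAdapter):
        self.adapter = adapter
    def send_to_neighbor(self, neighbor_addr, data: bytes) -> None:
        self.adapter.send(neighbor_addr, data)

sock = OverlaySocket(UDPAdapter())
sock.send_to_neighbor(("10.0.0.2", 8500), b"hello overlay")
```

Substituting a TCP or UDP-multicast adapter changes only the object passed to the socket's constructor, which is exactly the decoupling the third design objective asks for.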
The HyperCast 2.0 software implements the overlay topologies described in [15] and [16]. This paper highlights important issues of the overlay socket; additional information can be found in the design documentation available from [12]. Several studies before us have addressed overlay network programming issues. Even early overlay network proposals, such as Yoid [9], Scribe [4], and Scattercast [6], have presented APIs that aspire to achieve independence of the API from the overlay network topology used. Particularly, Yoid and Scattercast use a socket-like API; however, these APIs do not address issues that arise when the same API is used by different overlay network topologies. Several works on application-layer multicast overlays integrate the application program with the software responsible for maintaining the overlay network, without explicitly providing general-purpose APIs. These include Narada [5], Overcast [13], ALMI [17], and NICE [2]. A recent study [8] has proposed a common API for the class of so-called structured overlays, which includes Chord [19], CAN [18], Bayeux [20], and other overlays that were originally motivated by distributed hash tables. Our work has a different emphasis than [8], since we assume a scenario where an application programmer must work with several, possibly fundamentally different, overlay network topologies and different transmission modes (UDP, TCP), and, therefore, needs mechanisms that make it easy to change the configuration of the underlying overlay network.

Fig. 1. The overlay network is a collection of overlay sockets.
Fig. 2. Data forwarding in overlay networks: (a) multicast; (b) unicast.

The rest of the paper is organized as follows.
In Section 2 we introduce concepts, abstractions, and terminology needed for the discussion of the overlay socket. In Section 3 we present the design of the overlay socket and discuss its components. In Section 4 we show how to write programs using the overlay socket. We present brief conclusions in Section 5.

2 Basic Concepts

An overlay socket is an endpoint for communication in an overlay network, and an overlay network is seen as a collection of overlay sockets that self-organize using an overlay protocol (see Figure 1). An overlay socket offers to an application programmer a Berkeley socket-style API [3] for sending and receiving data over an overlay network. Each overlay socket executes an overlay protocol that is responsible for maintaining the membership of the socket in the overlay network topology. Each overlay socket has a logical address and a physical address in the overlay network. The logical address is dependent on the type of overlay protocol used. In the overlay protocols currently implemented in HyperCast 2.0, the logical addresses are 32-bit integers or (x, y) coordinates, where x and y are positive 32-bit integers. The physical address is a transport-layer address where overlay sockets receive messages from the overlay network. On the Internet, the physical address is an IP address and a TCP or UDP port number. Application programs that use overlay sockets only work with logical addresses, and do not see physical addresses of overlay nodes. When an overlay socket is created, the socket is configured with a set of configuration parameters, called attributes. The application program can obtain the attributes from a configuration file, or it downloads the attributes from a server. The configuration file specifies the type of overlay protocol and the type of transport protocol to be used, but also more detailed information such as the size of internal buffers and the values of protocol-specific timers.
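The split between logical and physical addresses described above can be illustrated with a small sketch. The class and field names here are illustrative assumptions only, not HyperCast's actual types:

```java
// Illustrative sketch only; these are not HyperCast's actual classes.
public class AddressDemo {
    // A logical address in a coordinate-based overlay: two positive 32-bit integers.
    static final class CoordAddress {
        final long x, y;                       // long so unsigned 32-bit values fit
        CoordAddress(long x, long y) { this.x = x; this.y = y; }
        @Override public String toString() { return "(" + x + "," + y + ")"; }
    }

    // A physical (transport-layer) address: an IP address plus a TCP or UDP port.
    static final class PhysicalAddress {
        final String host; final int port;
        PhysicalAddress(String host, int port) { this.host = host; this.port = port; }
        @Override public String toString() { return host + ":" + port; }
    }

    public static void main(String[] args) {
        // The application works only with the logical address ...
        CoordAddress logical = new CoordAddress(17, 42);
        // ... while the socket internally binds it to a transport endpoint.
        PhysicalAddress physical = new PhysicalAddress("128.143.71.50", 8081);
        System.out.println(logical + " -> " + physical);
    }
}
```

The point of the separation is that application code never handles the transport endpoint directly, so the overlay can be rewired without touching application logic.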
The most important attribute is the overlay identifier (overlay ID), which is used as a global identifier for an overlay network and which can be used as a key to access the other attributes of the overlay network. Each new overlay ID corresponds to the creation of a new overlay network. Overlay sockets exchange two types of messages, protocol messages and application messages. Protocol messages are the messages of the overlay protocol that maintain the overlay topology. Application messages contain application data that is encapsulated in an overlay message header. An application message uses logical addresses in the header to identify the source and, for unicast, the destination of the message. If an overlay socket receives an application message from one of its neighbors in the overlay network, it determines if the message must be forwarded to other overlay sockets, and if the message needs to be passed to the local application. The transmission modes currently supported by the overlay sockets are unicast and multicast. In multicast, all members in the overlay network are receivers. In both unicast and multicast, the common abstraction for data forwarding is that of passing data in spanning trees that are embedded in the overlay topology. For example, a multicast message is transmitted downstream in a spanning tree that has the sender of the multicast message as the root (see Figure 2(a)). When an overlay socket receives a multicast message, it forwards the message to all of its downstream neighbors (children) in the tree, and passes the message to the local application program. A unicast message is transmitted upstream in a tree with the receiver of the message as the root (see Figure 2(b)). An overlay socket that receives a unicast message forwards the message to its upstream neighbor (parent) in the tree that has the destination as the root.
An overlay socket makes forwarding decisions locally, using only the logical addresses of its neighbors and the logical address of the root of the tree. Hence, there is a requirement that each overlay socket can locally compute its parent and its children in a tree with respect to a root node. This requirement is satisfied by many overlay network topologies, including [15, 16, 18–20].

3 The Components of an Overlay Socket

An overlay socket consists of a collection of components that are configured when the overlay socket is created, using the supplied set of attributes. These components include the overlay protocol, which helps to build and maintain the overlay network topology, a component that processes application data, and interfaces to a transport-layer network. The main components of an overlay socket, as illustrated in Figure 3, are as follows:

The overlay node implements an overlay protocol that establishes and maintains the overlay network topology. The overlay node sends and receives overlay protocol messages, and maintains a set of timers. The overlay node is the only component of an overlay socket that is aware of the overlay topology. In the HyperCast 2.0 software, there are overlay nodes that build a logical hypercube [15] and a logical Delaunay triangulation [16].

Fig. 3. Components of an overlay socket.

The forwarding engine performs the functions of an application-layer router that sends, receives, and forwards formatted application-layer messages in the overlay network. The forwarding engine communicates with the overlay node to query next-hop routing information for application messages.
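The requirement that each socket can locally compute its parent and children with respect to any root can be illustrated with a toy embedding. This is a hedged sketch, not one of HyperCast's actual topologies: number the N members 0 through N-1 and, for each root, form a complete binary tree over the labels (id - root) mod N, so every node needs only N, the root, and its own logical address.

```java
// Toy topology sketch: a complete binary tree over labels (id - root) mod n.
// Every member can compute its parent and children purely locally.
import java.util.ArrayList;
import java.util.List;

public class TreeEmbedding {
    static int label(int id, int root, int n)   { return Math.floorMod(id - root, n); }
    static int unlabel(int lbl, int root, int n) { return Math.floorMod(lbl + root, n); }

    // Parent of `id` in the spanning tree rooted at `root` (-1 for the root itself).
    static int parent(int id, int root, int n) {
        int l = label(id, root, n);
        return l == 0 ? -1 : unlabel((l - 1) / 2, root, n);
    }

    // Children of `id` in the spanning tree rooted at `root`.
    static List<Integer> children(int id, int root, int n) {
        int l = label(id, root, n);
        List<Integer> kids = new ArrayList<>();
        for (int c : new int[]{2 * l + 1, 2 * l + 2})
            if (c < n) kids.add(unlabel(c, root, n));
        return kids;
    }

    public static void main(String[] args) {
        int n = 7;
        // Multicast from node 3: node 3 forwards down the tree rooted at itself.
        System.out.println("children of 3 (root 3): " + children(3, 3, n)); // [4, 5]
        // Unicast toward node 3: node 5 forwards up the tree rooted at 3.
        System.out.println("parent of 5 (root 3): " + parent(5, 3, n));     // 3
    }
}
```

A multicast send repeatedly applies children() down the tree rooted at the sender; a unicast send repeatedly applies parent() up the tree rooted at the destination, exactly the two forwarding patterns of Figure 2.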
The forwarding decision is made using the logical addresses of the overlay nodes. Each overlay socket has two network adapters that each provide an interface to transport-layer protocols, such as TCP or UDP. The node adapter serves as the interface for sending and receiving overlay protocol messages, and the socket adapter serves as the interface for application messages. Each adapter has a transport-level address, which, in the case of the Internet, consists of an IP address and a UDP or TCP port number. Currently, there are three different types of adapters, for TCP, UDP, and UDP multicast. Using two adapters completely separates the handling of the messages that maintain the overlay protocol from the messages that transport application data. The application receive buffer and application transmit buffer can temporarily store messages that, respectively, have been received by the socket but not yet been delivered to the application, or that have been released by the application program but not yet been transmitted by the socket. The application transmit buffer can play a role when messages cannot be transmitted due to rate control or congestion control constraints. The application transmit buffer is not implemented in the HyperCast 2.0 software. Each overlay socket has two external interfaces. The application programming interface (API) of the socket offers application programs the ability to join and leave existing overlays, to send data to other members of the overlay network, and to receive data from the overlay network. The statistics interface of the overlay socket provides access to status information of components of the overlay socket, and is used for monitoring and management of an overlay socket. Note in Figure 3 that some components of the overlay socket also have interfaces, which are accessed by other components of the overlay socket. The overlay manager is a component external to the overlay socket (and not shown in Figure 3).
It is responsible for configuring an overlay socket when the socket is created. The overlay manager reads a configuration file that stores the attributes of an overlay socket and, if it is specified in the configuration file, may access attributes from a server, and then initiates the instantiation of a new overlay socket.

4 Overlay Network Programming

An application developer does not need to be familiar with the details of the components of an overlay socket as described in the previous section. The developer is exposed only to the API of the overlay socket and to a file with configuration parameters. The configuration file is a text file which stores all attributes needed to configure an overlay socket. The configuration file is modified whenever a change is needed to the transport protocol, the overlay protocol, or some other parameters of the overlay socket. In the following, we summarize only the main features of the API, and we refer to [12] for detailed information on the overlay socket API.

4.1 Overlay Socket API

Since the overlay topology and the forwarding of application-layer data are transparent to the application program, the API for overlay network programming can be made simple. Applications need to be able to create a new overlay network, join and leave an existing overlay network, and send data to and receive data from other members in the overlay network. The API of the overlay socket is message-based, and intentionally stays close to the familiar Berkeley socket API [3]. Since space considerations do not permit a description of the full API, we sketch the API with the help of a simplified example. Figure 4 shows the fragment of a Java program that uses an overlay socket. An application program configures and creates an overlay socket with the help of an overlay manager (om). The overlay manager reads configuration parameters for the overlay socket from a configuration file (hypercast.prop), which can look similar to the one shown in Figure 5.
The application program reads the overlay ID from the file with om.getDefaultProperty("OverlayID"), and creates a configuration object (config) for an overlay socket with the given overlay ID.

// Generate the configuration object
OverlayManager om = new OverlayManager("hypercast.prop");
String MyOverlay = om.getDefaultProperty("OverlayID");
OverlaySocketConfig config = om.getOverlaySocketConfig(MyOverlay);
// Create an overlay socket
OL_Socket socket = config.createOverlaySocket(callback);
// Join an overlay
socket.joinGroup();
// Create a message (data is a byte[], length an int)
OL_Message msg = socket.createMessage(data, length);
// Send the message to all members in the overlay network
socket.sendToAll(msg);
// Receive a message from the socket
OL_Message received = socket.receive();

Fig. 4. Program with overlay sockets.

# OVERLAY Server:
OverlayServer =
# OVERLAY ID:
OverlayID = 1234
KeyAttributes = Socket,Node,SocketAdapter
# SOCKET:
Socket = HCast2-0
HCAST2-0.TTL = 255
HCAST2-0.ReceiveBufferSize = 200
# SOCKET ADAPTER:
SocketAdapter = TCP
SocketAdapter.TCP.MaximumPacketLength = 16384
# NODE:
Node = DT2-0
DT2-0.SleepTime = 400
# NODE ADAPTER:
NodeAdapter = NodeAdptUDPServer
NodeAdapter.UDP.MaximumPacketLength = 8192
NodeAdapter.UDPServer.UdpServer0 = 128.143.71.50:8081

Fig. 5. Configuration file (simplified).

The configuration object also loads all configuration information from the configuration file, and then creates the overlay socket (config.createOverlaySocket). Once the overlay socket is created, the socket joins the overlay network (socket.joinGroup). When a socket wants to multicast a message, it instantiates a new message (socket.createMessage) and transmits the message using the sendToAll method.
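The attribute file in Figure 5 uses a simple name = value format. As a hedged sketch (HyperCast's own configuration loader may differ), attributes in that format can be read with java.util.Properties:

```java
// A minimal sketch of reading attributes like those in Figure 5 with
// java.util.Properties; HyperCast's own configuration loader may differ.
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class ConfigDemo {
    static final String SAMPLE =
        "OverlayID = 1234\n" +
        "KeyAttributes = Socket,Node,SocketAdapter\n" +
        "Node = DT2-0\n" +
        "SocketAdapter = TCP\n" +
        "SocketAdapter.TCP.MaximumPacketLength = 16384\n";

    // Parses "name = value" lines, the same format the configuration file uses.
    static Properties loadSample() {
        Properties attrs = new Properties();
        try {
            attrs.load(new StringReader(SAMPLE));
        } catch (IOException e) {          // cannot happen for an in-memory string
            throw new UncheckedIOException(e);
        }
        return attrs;
    }

    public static void main(String[] args) {
        Properties attrs = loadSample();
        // The overlay ID identifies the overlay network ...
        System.out.println("OverlayID = " + attrs.getProperty("OverlayID"));
        // ... other attributes select protocol components and their parameters.
        System.out.println("Node = " + attrs.getProperty("Node"));
        int maxLen = Integer.parseInt(attrs.getProperty("SocketAdapter.TCP.MaximumPacketLength"));
        System.out.println("max packet length = " + maxLen);
    }
}
```

Because Properties ignores surrounding whitespace and treats # lines as comments, the same loader handles the commented sections of Figure 5 unchanged.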
Other transmission options are sendToParent, sendToChildren, sendToNeighbors, and sendToNode, which, respectively, send a message to the upstream neighbor with respect to a given root (see Figure 2), to the downstream neighbors, to all neighbors, or to a particular node with a given logical address.

4.2 Overlay Network Properties Management

As seen, the properties of an overlay socket are configured by setting attributes in a configuration file. The overlay manager in an application process uses the attributes to create a new overlay socket. By modifying the attributes in the configuration file, an application programmer can configure the overlay protocol or transport protocol that is used by the overlay socket. Changes to the file must be made before the socket is created. Figure 5 shows a (simplified) example of a configuration file. Each line of the configuration file assigns a value to an attribute. The complete list of attributes and the range of values is documented in [12]. Without explaining all entries in Figure 5, the file sets, among others, the overlay ID to '1234', selects version 2.0 of the DT protocol as the overlay protocol ('Node=DT2-0'), and sets the transport protocol of the socket adapter to TCP ('SocketAdapter=TCP'). Each overlay network is associated with a set of attributes that characterize the properties of the overlay sockets that participate in the overlay network. As mentioned earlier, the most important attribute is the overlay ID, which is used to identify an overlay network, and which can be used as a key to access all other attributes of an overlay network. The overlay ID should be a globally unique identifier. A new overlay network is created by generating a new overlay ID and associating a set of attributes that specify the properties of the overlay sockets in the overlay network. To join an overlay network, an overlay socket must know the overlay ID and the set of attributes for this overlay ID.
This information can be obtained from a configuration file, as shown in Figure 5. All attributes have a name and a value, both of which are strings. For example, the overlay protocol of an overlay socket can be determined by an attribute with name Node. If the attribute is set to Node=DT2-0, then the overlay node in the overlay socket runs the DT (version 2) overlay protocol. The overlay socket distinguishes between two types of attributes: key attributes and configurable attributes. Key attributes are specific to an overlay network with a given overlay ID. Key attributes are selected when the overlay ID is created for an overlay network, and cannot be modified afterwards. Overlay sockets that participate in an overlay network must have identical key attributes, but can have different configurable attributes. The attributes OverlayID and KeyAttributes are key attributes by default in all overlay networks. Configurable attributes specify parameters of an overlay socket which are not considered essential for establishing communication between overlay sockets in the same overlay network, and which are considered 'tunable'.

5 Conclusions

We discussed the design of an overlay socket which attempts to simplify the task of overlay network programming. The overlay socket serves as an end point of communication in the overlay network. The overlay socket can be used for various overlay topologies and supports different transport protocols. The overlay socket supports a simple API for joining and leaving an overlay network, and for sending and receiving data to and from other sockets in the overlay network. The main advantage of the overlay socket is that it is relatively easy to change the configuration of the overlay network. An implementation of the overlay socket is distributed with the HyperCast 2.0 software. The software has been extensively tested.
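One way to picture the rule that key attributes must match exactly while configurable attributes may differ is the following sketch. The method name and the attribute maps are hypothetical, not part of HyperCast's API:

```java
// Hedged sketch: enforcing that sockets in one overlay agree on key
// attributes, while configurable ("tunable") attributes may differ.
import java.util.Map;

public class KeyAttributeCheck {
    // Returns true if `candidate` may join the overlay described by `network`:
    // every key attribute must match exactly.
    static boolean canJoin(Map<String, String> network, Map<String, String> candidate,
                           String[] keyAttributes) {
        for (String key : keyAttributes)
            if (!java.util.Objects.equals(network.get(key), candidate.get(key)))
                return false;
        return true;
    }

    public static void main(String[] args) {
        String[] keys = {"OverlayID", "Node", "SocketAdapter"};
        Map<String, String> net = Map.of("OverlayID", "1234", "Node", "DT2-0",
                                         "SocketAdapter", "TCP", "HCAST2-0.TTL", "255");
        Map<String, String> ok  = Map.of("OverlayID", "1234", "Node", "DT2-0",
                                         "SocketAdapter", "TCP", "HCAST2-0.TTL", "64");
        Map<String, String> bad = Map.of("OverlayID", "1234", "Node", "HC2-0",
                                         "SocketAdapter", "TCP");
        System.out.println(canJoin(net, ok, keys));  // TTL differs, but TTL is tunable
        System.out.println(canJoin(net, bad, keys)); // Node differs: another protocol
    }
}
```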
A variety of applications, such as a distributed whiteboard and a video streaming application, have been developed with the overlay sockets.

Acknowledgement. In addition to the authors of this article, the contributors include Bhupinder Sethi, Tyler Beam, Burton Filstrup, Mike Nahas, Dongwen Wang, Konrad Lorincz, Jean Ablutz, Haiyong Wang, Weisheng Si, Huafeng Lu, and Guangyu Dong.

The emergence of application-layer overlay networks has promoted the development of new network services and applications.
Foreign-Language Translation for a Computer Science Graduation Project

The Phases of Developing the System

With the development of society, interpersonal relationship management has become more and more demanding. How to improve relationship management, reduce management costs, and raise service levels and personal competitiveness is a matter of concern for every supervisor. More and more supervisors believe that implementing computerized, scientific-style management solves this problem. Management information systems (MIS) are information systems, typically computer-based, that are used within an organization. WordNet describes an information system as "a system consisting of the network of all communication channels used within an organization". Generally speaking, MIS involves the following parts:

1 Conduct a Preliminary Investigation

(1) What is the objective of the first phase of the SDLC? (Note: SDLC means Systems Development Life Cycle.)

The objectives of phase 1, preliminary investigation, are to conduct a preliminary analysis, propose alternative solutions, describe the costs and benefits of each solution, and submit a preliminary plan with recommendations. The problems are briefly identified and a few solutions are suggested. This phase is often called a feasibility study.

(2) Conduct the preliminary analysis

In this step, you need to find out what the organization's objectives are and to explore the nature and scope of the problems under study.

Determine the organization's objectives: Even if a problem pertains to only a small segment of the organization, you cannot study it in isolation. You need to find out what the overall objectives of the organization are and how groups and departments within the organization interact. Then you need to examine the problem in that context.

Determine the nature and scope of the problems: You may already have a sense of the nature and scope of a problem. However, with a fuller understanding of the goals of the organization, you can now take a closer look at the specifics.
Is too much time being wasted on paperwork? On waiting for materials? On nonessential tasks? How pervasive is the problem within the organization? Outside of it? Which people are most affected? And so on. Your reading and your interviews should give you a sense of the character of the problem.

(3) Propose alternative solutions

In delving into the organization's objectives and the specific problems, you may have already discovered some solutions. Other possible solutions may be generated by interviewing people inside the organization, clients or customers, suppliers, and consultants, and by studying what competitors are doing. With this data, you then have three choices. You can leave the system as is, improve it, or develop a new system.

Leave the system as is: Often, especially with paper-based or non-technological systems, the problem really isn't bad enough to justify the measures and expenditures required to get rid of it.

Improve the system: Sometimes changing a few key elements in the system (upgrading to a new computer or new software, or doing a bit of employee retraining, for example) will do the trick. Modifications might be introduced over several months, if the problem is not serious.

Develop a new system: If the existing system is truly harmful to the organization, radical changes may be warranted. A new system would not mean just tinkering around the edges or introducing some new piece of hardware or software. It could mean changes in every part and at every level.

(4) Describe costs and benefits

Whichever of the three alternatives is chosen, it will have costs and benefits. In this step, you need to indicate what these are. The changes, or absence of changes, will have a price tag, of course, and you need to indicate what it is. Greater costs may result in greater benefits, which, in turn, may offer savings. The benefits may be both tangible, such as cost savings, and intangible, such as worker satisfaction.
A process may be speeded up, streamlined through the elimination of unnecessary steps, or combined with other processes. Input errors or redundant output may be reduced. Systems and subsystems may be better integrated. Users may be happier with the system. Customers or suppliers may interact more efficiently with the system. Security may be improved. Costs may be cut.

(5) Submit a preliminary plan

Now you need to wrap up all your findings in a written report, submitted to the executives (probably top managers) who are in a position to decide in which direction to proceed (make no changes, change a little, or change a lot) and how much money to allow the project. You should describe the potential solutions, costs, and benefits and indicate your recommendations. If management approves the feasibility study, then the systems analysis phase can begin.

2 Do a Detailed Analysis of the System

(1) What tools are used in the second phase of the SDLC to analyze data?

The objectives of phase 2, systems analysis, are to gather data, analyze the data, and write a report. The present system is studied in depth, and new requirements are specified. Systems analysis describes what a system is already doing and what it should do to meet the needs of users. Systems design, the next phase, specifies how the system will accommodate the objective. In this second phase of the SDLC, you will follow the course prescribed by management on the basis of your phase 1 feasibility report. We are assuming that you have been directed to perform phase 2: to do a careful analysis of the existing system, in order to understand how the new system you propose would differ. This analysis will also consider how people's positions and tasks will have to change if the new system is put into effect.
In general, it involves a detailed study of:

The information needs of the organization and all users;
The activities, resources, and products of any present information systems;
The information systems capabilities required to meet the established information needs and user needs.

(2) Gather data

In gathering data, systems analysts use a handful of tools, most of them not terribly technical. They include written documents, interviews, questionnaires, observation, and sampling.

Written documents: A great deal of what you need is probably available in the form of written documents, and so on. Documents are a good place to start because they tell you how things are or are supposed to be. These tools will also provide leads on people and areas to pursue further.

Interviews: Interviews with managers, workers, clients, suppliers, and competitors will also give you insights. Interviews may be structured or unstructured.

Questionnaires: Questionnaires are useful for getting information from large groups of people when you can't get around to interviewing everyone. Questionnaires may also yield more information if respondents can be anonymous. In addition, this tool is convenient, is inexpensive, and yields a lot of data. However, people may not return their forms, results can be ambiguous, and with anonymous questionnaires you'll have no opportunity to follow up.

Observation: No doubt you've sat in a coffee shop or on a park bench and simply watched people. Observation can be a tool for analysis, too. Through observation you can see how people interact with one another and how paper moves through an organization. Observation can be non-participant or participant. If you are a non-participant observer, you watch without taking part in the activity; if you are a participant observer, you may gain more insights by experiencing the conflicts and responsibilities of the people you are working with.

(3) Analyze the data

Once the data is gathered, you need to come to grips with it and analyze it.
Many analytical tools, or modeling tools, are available. Modeling tools enable a systems analyst to present graphic representations of a system. Examples are CASE tools, data flow diagrams, systems flowcharts, connectivity diagrams, grid charts, decision tables, and object-oriented analysis. For example, in analyzing the current system and preparing data flow diagrams, the systems analyst must also prepare a data dictionary, which is then used and expanded during all remaining phases of the SDLC. A data dictionary defines all the elements that make up the data flow. Among other things, it records what each data element is by name, how long it is, and where it is used, as well as any numerical values assigned to it. This information is usually entered into a data dictionary software program.

The Phase: Design the System

(4) At the conclusion of the third phase of the SDLC, what should have been created?

The objectives of phase 3, systems design, are to do a preliminary design and then a detail design, and to write a report. In this third phase of the SDLC, you will essentially create a rough draft and then a detail draft of the proposed information system.

(5) Do a preliminary design

A preliminary design describes the general functional capabilities of a proposed information system. It reviews the system requirements and then considers major components of the system. Usually several alternative systems are considered, and the costs and the benefits of each are evaluated.

Some tools that may be used in the preliminary design and the detail design are the following:

CASE tools: These are software programs that automate various activities of the SDLC in several phases. The screen shown here is from a banking system tool; it shows a model for an ATM transaction. The purchaser of the CASE tool would enter details relative to the particular situation.
This technology is intended to speed up the process of developing systems and to improve the quality of the resulting systems.

Project management software: It consists of programs used to plan, schedule, and control the people, costs, and resources required to complete a project on time.

3 A detail design

A detail design describes how a proposed information system will deliver the general capabilities described in the preliminary design. The detail design usually considers the following parts of the system, in this order: output requirements, input requirements, storage requirements, processing and networking requirements, and system controls and backup.

(1) Output requirements: The first thing to determine is what you want the system to produce. In this first step, the systems analyst determines what media to use and the appearance or format of the output, such as headings, columns, and menus.

(2) Input requirements: Once you know the output, you can determine the inputs. Here, too, you must define the type of input, such as keyboard or source data entry. You must determine in what form data will be input and how it will be checked for accuracy. You also need to figure out what volume of data the system can be allowed to take in.

(3) Storage requirements: Using the data dictionary as a guide, you need to define the files and databases in the information system. How will the files be organized? What kind of storage devices will be used? How will they interface with other storage devices inside and outside of the organization? What will be the volume of database activity?

(4) Processing and networking requirements: What kind of computer or computers will be used to handle the processing? What kind of operating system and applications software will be used? Will the computer or computers be tied to others in a network? Exactly what operations will be performed on the input data to achieve the desired output information? What kinds of user interface are desired?

(5) System controls and backup: Finally, you need to think about matters of security, privacy, and data accuracy.
You need to prevent unauthorized users from breaking into the system, for example, and snooping in private files. You need to devise auditing procedures and to set up specifications for testing the new system. Finally, you need to institute automatic ways of backing up information and storing it elsewhere in case the system fails or is destroyed.

4 Develop/Acquire the System

(1) What general tasks do systems analysts perform in the fourth phase of the SDLC?

In systems development/acquisition, the systems analysts or others in the organization acquire the software, acquire the hardware, and then test the system. This phase begins once management has accepted the report containing the design and has "green-lighted" the way to development. Depending on the size of the project, this phase will probably involve substantial expenditures of money and time. However, at the end you should have a workable system.

(2) Acquire software

During the design stage, the systems analyst may have had to address what is called the "make-or-buy" decision; if not, that decision certainly cannot be avoided now. In the make-or-buy decision, you decide whether you have to create a program (have it custom-written) or buy it. Sometimes programmers decide they can buy an existing software package and modify it rather than write it from scratch. If you decide to create a new program, then the question is whether to use the organization's own staff programmers or to hire outside contract programmers. Whichever way you go, the task could take months.

(3) Acquire hardware

Once the software has been chosen, the hardware to run it must be acquired or upgraded. It's possible you will not need to obtain any new hardware. It's also possible that the new hardware will cost millions of dollars and involve many items and devices.
The organization may prefer to lease rather than buy some equipment, especially since chip capability has traditionally doubled about every 18 months.

(4) Test the system

With the software and hardware acquired, you can now start testing the system in two stages: first unit testing and then system testing. If CASE tools have been used throughout the SDLC, testing is minimized because any automatically generated program code is more likely to be error-free.

5 Implement the System

(1) What tasks are typically performed in the fifth phase of the SDLC?

Whether the new information system involves a few handheld computers, an elaborate telecommunications network, or expensive mainframes, phase 5, systems implementation, will involve some close coordination to make the system not just workable but successful, and people are trained to use it.

6 Maintain the System

(1) What two tools are often used in the maintenance phase of the SDLC?

Phase 6, systems maintenance, adjusts and improves the system by having system audits and periodic evaluations and by making changes based on new conditions. Even with the conversion accomplished and the users trained, the system won't just run itself. There is a sixth, never-ending phase in which the information system must be monitored to ensure that it is effective. Maintenance includes not only keeping the machinery running but also updating and upgrading the system to keep pace with new products, services, customers, government regulations, and other requirements.

Appendix 2: English-Chinese Translation: The Phases of Developing the System

With the development of society, the role of personal relationship management in daily life is evident. How to strengthen personal management capability, reduce management costs, and improve service levels and personal competitiveness is one of the important problems troubling every supervisor.
English Literature and Translation (Computer Science)

The increasing complexity of design resources in a net-based collaborative design environment calls for a common management scheme. Design resources can be organized in association with design activities: a task is formed by a set of activities and resources linked by logical relations. This paper proposes unified management of all design resources and activities via a Task Management System (TMS), which is designed to break down tasks and assign resources to task nodes.

2 Task Management System (TMS)

TMS is a system designed to manage the tasks and resources involved in a design project. It decomposes tasks into smaller subtasks, enabling unified management of all design resources and activities. TMS assigns resources to task nodes.

3 Collaborative Design

Collaborative design is a process in which multiple participants work toward a common goal. In a net-based collaborative design environment, TMS provides unified management for all design resources and activities.
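A minimal sketch of the task breakdown and resource assignment that the TMS described above performs: tasks decompose into subtask nodes, and design resources attach to nodes. All class and resource names here are illustrative assumptions, not the paper's API.

```python
class TaskNode:
    """One node in a TMS task tree."""

    def __init__(self, name):
        self.name = name
        self.subtasks = []
        self.resources = []   # design resources assigned to this node

    def add_subtask(self, node):
        self.subtasks.append(node)
        return node

    def assign(self, resource):
        self.resources.append(resource)

    def all_resources(self):
        """Collect resources over the whole subtree (the unified view)."""
        found = list(self.resources)
        for sub in self.subtasks:
            found.extend(sub.all_resources())
        return found

# Break a (hypothetical) design task into subtasks and assign resources.
root = TaskNode("design gearbox")
housing = root.add_subtask(TaskNode("design housing"))
housing.assign("CAD model library")
shaft = root.add_subtask(TaskNode("design shaft"))
shaft.assign("FEA solver")
```

Walking the tree with `all_resources()` gives the unified view of every resource in the project, which is the management property the paper attributes to TMS.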
Attachment 1: Translation of Foreign Literature

Mass Storage

Because of the volatility and limited capacity of a computer's main memory, most computers have additional storage devices called mass storage systems, including magnetic disks, CDs, and magnetic tape. Compared with main memory, the advantages of mass storage systems are lower volatility, large capacity, low cost and, in many cases, the ability to remove the storage medium from the machine for archival purposes.

The terms online and offline are commonly used to describe devices that are, respectively, connected to or detached from a computer. Online means the device or information is connected to the computer and ready for use without human intervention; offline means human intervention is required before the device or information can be used, perhaps by turning on the device or by inserting the medium containing the information into some mechanism.

The major disadvantage of mass storage systems is that they typically require mechanical motion and therefore need more time, since main memory does all its work electronically.

1. Magnetic Disks

The most common mass storage device in use today is the magnetic disk, in which thin spinning platters coated with magnetic material are used to hold data. Read/write heads are placed above and/or below the platters so that, as a platter spins, each head traverses a circle called a track around the platter's upper or lower surface. By repositioning the read/write heads, different concentric tracks can be accessed. Often a disk storage system consists of several platters mounted on a common spindle, with enough space between them for the heads to slip between the platters. All the heads move in unison, so when the heads move to a new position, a new set of tracks becomes accessible. Each such set of tracks is called a cylinder.

Since a track can contain more information than we would normally want to manipulate at one time, each track is divided into small arcs called sectors, on which information is recorded as a continuous string of bits. On traditional disks, every track contains the same number of sectors, and each sector contains the same number of bits. (Thus the bits are stored more densely on tracks near the center of the platter than on those near the edge.) A disk storage system therefore consists of many individual sectors, each of which can be accessed as an independent string of bits. The number of tracks per surface and the number of sectors per track vary widely from one disk system to another. Sector sizes are generally no more than a few KB; 512 bytes or 1024 bytes are common.

The locations of tracks and sectors are not a permanent part of a disk's physical structure; they are established through a process called disk formatting, or initializing. This is usually performed by the disk's manufacturer, resulting in what is known as a formatted disk, although most computer systems can also perform this task. Thus, if the information on a disk is damaged, the disk can be reformatted, though this process destroys all the information previously recorded on it.

The capacity of a disk storage system depends on the number of platters used and the density with which the tracks and sectors are placed. Lower-capacity systems consist of a single plastic platter, known as a diskette or, to emphasize its flexibility, a floppy disk. (Today's 3.5-inch diskettes are housed in rigid plastic cases rather than the soft paper sleeves of the older 5.25-inch diskettes.) Diskettes are easily inserted into and removed from their read/write units and are easy to store, so they are often used for offline storage of information. An ordinary 3.5-inch diskette has a capacity of 1.44 MB, but special floppy disks have higher capacities; one example is Iomega's Zip disk, which holds several hundred MB per disk.

High-capacity disk systems, capable of holding several GB, may have five to ten rigid platters; because the platters used are rigid, they are known as hard disk systems. To allow faster rotation speeds, the heads in a hard disk system do not touch the platter surface but instead "float" on a cushion of air. The gap between head and surface is so small that even a single particle of dust could become jammed between them, destroying both (a phenomenon known as a head crash). Thus hard disk systems are sealed in cases at the factory.

Several measures are used to evaluate a disk system's performance: (1) seek time, the time required to move the read/write heads from one track to the destination track (via the access arm); (2) rotation delay, or latency time, the time the read/write head, having reached the required track, waits for the platter's rotation to bring the desired data (sector) under it, which on average is half the time required for the platter to make a complete rotation; (3) access time, the sum of seek time and rotation delay; and (4) transfer rate, the rate at which data can be read from or written to the disk.

Hard disk systems generally far outperform floppy systems. Because the read/write heads in a hard disk system do not touch the platter surface, rotation speeds of several thousand revolutions per minute are possible, whereas floppy disk systems rotate at only about 300 rpm. Consequently, hard disk transfer rates, usually stated in MB per second, are much greater than those of floppy systems, which reach only a few KB per second.

Since disk systems require physical motion for their operation, neither floppy nor hard disk systems can compete with the speed of electronic circuitry. Delay times within electronic circuitry are measured in nanoseconds or less, whereas the seek, latency, and access times of disk systems are measured in milliseconds. Thus the time required to retrieve information from a disk system is an eternity compared with the waiting time of electronic circuitry.
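The access-time formula above (seek time plus an average rotational latency of half a revolution) is easy to sketch. The 7200 rpm speed and 9 ms seek time below are illustrative figures, not values from the text.

```python
def avg_rotational_latency_ms(rpm):
    """Average wait is half a revolution, converted to milliseconds."""
    return (60_000 / rpm) / 2

def avg_access_time_ms(seek_ms, rpm):
    """Access time = seek time + average rotational latency."""
    return seek_ms + avg_rotational_latency_ms(rpm)

# Illustrative hard-disk figures: 7200 rpm spindle, 9 ms average seek.
latency = avg_rotational_latency_ms(7200)   # about 4.17 ms
access = avg_access_time_ms(9.0, 7200)      # about 13.17 ms
```

The same functions make the floppy/hard-disk gap concrete: at 300 rpm the average latency alone is 100 ms, roughly 24 times the 7200 rpm figure.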
2. Optical Discs

Another popular data storage technology is the optical disc, or CD: a platter 12 centimeters (approximately 5 inches) in diameter made of reflective material covered with a clear protective coating. Information is recorded by creating variations in the reflective layer, and this information can be detected by a laser beam that monitors variations in the reflective surface as the CD spins.

CD technology was originally applied to audio recording using a format known as CD-DA (compact disc digital audio), and the CDs used today for computer data storage use essentially the same format. Information on a CD is recorded on a single track that spirals around the disc, much like the groove in an old-fashioned record; unlike old records, however, the track on a CD spirals from the inside out. This track is divided into units called sectors. Each sector has its own identifying markings and a data capacity of 2 KB, which corresponds to 1/75 of a second of music in the case of audio recording.

Information is stored on a CD at a uniform linear density over the entire spiral track, which means the outer loops of the spiral hold more information than the inner ones. Hence, if the disc spins one complete revolution, a laser beam scanning the outer portion of the spiral track will read more sectors than one scanning the inner portion. To obtain a uniform data transfer rate, CD-DA players adjust the disc's rotation speed according to the laser beam's position on the disc. However, most CD drives used for computer data storage spin the disc at a faster, constant speed and must therefore accommodate variations in the data transfer rate.

As a consequence of this design, CD storage systems perform best when dealing with long, continuous strings of data, as in music reproduction. In contrast, when an application requires random access to data, the approach used in magnetic disk storage (individual, concentric tracks) outperforms the spiral approach used in CDs.

Traditional CDs have capacities of 600 to 700 MB. Newer DVDs, however, have capacities of several GB. A DVD is constructed of multiple semi-transparent layers that a precisely focused laser can distinguish. Such discs can store lengthy multimedia presentations, including entire motion pictures.

3. Magnetic Tape

An older mass storage device is magnetic tape. Here, information is recorded on the magnetic coating of a thin plastic tape that is wound on reels for storage. To access the data, the tape is mounted in a device called a tape drive that can, under the computer's control, typically read, write, and rewind the tape. Tape drives range in size from small cartridge units, called streaming tape units, which superficially resemble stereo cassette recorders, to older, large reel-to-reel units. Although the storage capacity of these devices depends on the format used, most hold many GB.

Modern streaming tape units divide the tape into segments, each of which is magnetically marked during a formatting process similar to that of a disk drive. Each segment contains several tracks that run lengthwise along the tape, parallel to one another. These tracks can be accessed independently, so the tape, in effect, consists of many individual strings of bits, analogous to a disk's sectors.

The major disadvantage of tape technology is that moving between different positions on a tape can be very time-consuming owing to the significant amount of tape that must be moved between the reels. Thus tape systems have much longer data access times than magnetic disk systems, in which different sectors can be reached by short movements of the read/write head between tracks. As a result, tape is not a popular medium for online data storage. Instead, tape systems are commonly used for offline archival data storage, where their high capacity, reliability, and cost-efficiency are advantageous, although advances in non-traditional technologies such as DVDs are rapidly challenging tape's last remaining niche.
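The CD-DA figures quoted above (2 KB per sector, 75 sectors per second of audio) are enough to reproduce the stated 600 to 700 MB capacity range. The 74-minute playing time below is a common disc length used here only as a worked example.

```python
SECTOR_BYTES = 2 * 1024      # each CD sector holds 2 KB of data
SECTORS_PER_SECOND = 75      # one sector = 1/75 second of CD-DA audio

def cd_capacity_bytes(minutes):
    """Data capacity of a CD-DA-format disc with the given playing time."""
    return minutes * 60 * SECTORS_PER_SECOND * SECTOR_BYTES

cap = cd_capacity_bytes(74)        # a standard 74-minute disc
cap_mb = cap / (1024 * 1024)       # falls inside the 600-700 MB range
```

Working the arithmetic through: 74 minutes is 333,000 sectors, or 681,984,000 bytes, about 650 MB, consistent with the range in the text.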
4. File Storage and Retrieval

Information in mass storage systems is stored in large units called files. A typical file may be a complete text document, a photograph, a program, or a collection of data about a company's employees. The physical properties of mass storage devices dictate that these files be stored and retrieved in multi-byte units; for example, each sector on a magnetic disk must be manipulated as one continuous string of bits. A block of data conforming to the physical characteristics of a storage device is called a physical record. Thus a file stored in a mass storage system typically consists of many physical records.

In contrast to this physical division, a file usually has natural divisions determined by the information it represents. For example, a file containing information about a company's employees consists of multiple units, each consisting of the information about one employee. These naturally occurring blocks of data are called logical records. Logical records, in turn, often consist of smaller units called fields; for example, a record containing an employee's information probably consists of fields such as name, address, and employee identification number.

The size of a logical record rarely matches the physical record size of the mass storage system. As a result, several logical records may reside within a single physical record, or one logical record may be split between several physical records. Consequently, a certain amount of unscrambling is required when retrieving data from mass storage systems. A common solution is to set aside an area of main memory large enough to hold several physical records and to use it to reorganize the data (to conform to logical records when reading, or to physical records when writing). That is, data transferred between main memory and the mass storage system are transferred in units that conform to physical records, while data residing in the main memory area can be referenced by logical record.

An area of main memory used in this manner is called a buffer. In general, a buffer is a storage area used to hold data temporarily while it is being transferred from one device to another. For example, modern printers contain their own memory circuitry, much of which serves as a buffer holding the portions of data the printer has received but not yet printed.

In summary, main memory, magnetic disk, CD, and magnetic tape exhibit decreasing degrees of random access. The addressing system used in main memory allows rapid random access to any individual byte. Magnetic disks can randomly access data only in entire-sector units, and retrieving a sector involves seek time and rotation delay. CDs also allow random access to individual sectors, but with longer delays than magnetic disks, because positioning the read/write head over the spiral track and adjusting the disc's rotation speed take more time. Finally, magnetic tape offers little in the way of random access. Modern tape systems do place markings on the tape so that individual segments can be accessed independently, but the physical structure of tape means that accessing distant segments requires considerable time.
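The blocking scheme described above, packing fixed-size logical records into sector-sized physical records through a buffer, can be sketched directly. The 512-byte sector and 128-byte employee-record sizes are illustrative assumptions.

```python
SECTOR_SIZE = 512        # physical record: one disk sector (assumed size)
RECORD_SIZE = 128        # hypothetical fixed-length employee record

def block_records(records):
    """Pack fixed-size logical records into sector-sized physical records."""
    per_sector = SECTOR_SIZE // RECORD_SIZE
    sectors = []
    for i in range(0, len(records), per_sector):
        chunk = b"".join(records[i:i + per_sector])
        # Pad the final sector out to the full physical-record size.
        sectors.append(chunk.ljust(SECTOR_SIZE, b"\x00"))
    return sectors

# Five 128-byte logical records fit into two 512-byte physical records.
employees = [f"emp{i:03d}".encode().ljust(RECORD_SIZE, b" ")
             for i in range(5)]
physical = block_records(employees)
```

Reading reverses the process: whole sectors are transferred into the buffer, and individual logical records are then sliced out at 128-byte offsets.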
Computer English Literature Translation

INDUSTRY PERSPECTIVE

USING A DSS TO KEEP THE COST OF GAS DOWN

Think you spend a lot on gas for your car every year? J.B. Hunt Transportation Inc. spends a lot more. J.B. Hunt moves freight around the country on its 10,000 trucks and 48,000 trailers. The company spent $250 million in 2004 on fuel. That figure was up by 40 percent over the previous year. Diesel fuel is the company's second-largest expense (drivers' wages is the largest), and the freight hauler wanted to find a way to reduce it. Part of the answer lay, as it often does, in IT. In 2000, J.B. Hunt installed a decision support system that provides drivers with help in deciding which gas station to stop at for fuel. Using satellite communications, the system beams diesel-fuel prices from all over the country straight into the cabs of the trucks. The software accesses a database with local taxes for each area of the country and then calculates for the drivers how much refueling will actually cost. J.B. Hunt doesn't require drivers to use this system, but provides incentives for those who do. The company estimates that the system saves about $1 million annually.

Decision Support Systems

In Chapter 3, you saw how data mining can help you make business decisions by giving you the ability to slice and dice your way through massive amounts of information. Actually, a data warehouse with data-mining tools is a form of decision support. The term decision support system, used broadly, means any computerized system that helps you make decisions. ("Medicine" can mean the whole health care industry or it can mean cough syrup, depending on the context.) Narrowly defined, a decision support system (DSS) is a highly flexible and interactive IT system that is designed to support decision making when the problem is not structured. A DSS is an alliance between you, the decision maker, and specialized support provided by IT (see Figure 4.4). IT brings speed, vast amounts of information, and sophisticated processing capabilities to help you create information useful in
making a decision. You bring know-how in the form of your experience, intuition, judgment, and knowledge of the relevant factors. IT provides great power, but you, as the decision maker, must know what kinds of questions to ask of the information and how to process the information to get those questions answered. In fact, the primary objective of a DSS is to improve your effectiveness as a decision maker by providing you with assistance that will complement your insights. This union of your know-how and IT power helps you generate business intelligence so that you can quickly respond to changes in the marketplace and manage resources in the most effective and efficient ways possible. Following are some examples of the varied applications of DSSs:
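One such application is the J.B. Hunt refueling aid described above. A toy version of its core calculation follows: given posted pump prices and local tax rates, rank stations by the true cost of a fill-up. The station names, prices, and tax rates are invented for illustration.

```python
def refuel_cost(price_per_gallon, local_tax_rate, gallons):
    """True cost of refueling once local taxes are applied."""
    return price_per_gallon * (1 + local_tax_rate) * gallons

def cheapest_station(stations, gallons):
    """stations: list of (name, pump_price, local_tax_rate) tuples."""
    return min(stations, key=lambda s: refuel_cost(s[1], s[2], gallons))

# Hypothetical stations along a route; the lowest pump price is NOT
# the cheapest once local taxes are factored in.
stations = [
    ("Tulsa OK",   2.89, 0.05),
    ("Joplin MO",  2.95, 0.02),
    ("Wichita KS", 2.80, 0.08),
]
best = cheapest_station(stations, 150)
```

Note the DSS character of the example: the computation supports the driver's decision but does not dictate it, just as J.B. Hunt leaves the system optional.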
Graduation Project (Thesis) Foreign Literature Translation

Chinese title of the literature: Fingerprint Identification of Operating Systems
English title of the literature:
Source of the literature:
Publication date of the literature:
School (Department):    Major:    Class:    Name:    Student ID:    Advisor:    Date of translation: 2017.02.14

Abstract: This paper proposes a method that classifies protocol fingerprints, using frames to describe the fingerprints in order to build a frame system; information obtained from a host is matched against this system to identify the type of operating system running on the remote host. Experimental results show that the method identifies operating systems effectively and acts more stealthily than other systems such as nmap and xprobe.

Key words: TCP/IP; fingerprint; OS

Identifying the operating system of a remote host is an important field. Knowing the host's OS makes it possible to analyze and acquire information such as its memory management and CPU type. This information is very important for both computer network attack and computer network defense.

Identification is mainly accomplished through TCP/IP fingerprinting. Nearly all operating systems customize their own protocol stacks while following the RFCs. As a result, every protocol stack differs in its implementation details. These differing details are the fingerprints that make it possible to identify the OS.

Nmap and Queso use fingerprints at the transport layer. They send special packets to the target and analyze the returned packets, searching the fingerprint warehouse for a matching fingerprint in order to obtain the result. The information in the warehouse is affected by the specific probe messages, and similar operating systems (e.g. Windows 98/2000/XP) are hard to distinguish.

Xprobe mainly uses the ICMP protocol, employing five kinds of ICMP packets to identify the OS. It can give, over all possible candidates, the probability that each is indeed the target's operating system. Its main shortcoming is its excessive dependence on the ICMP protocol.

SYNSCAN uses some typical fields' fingerprints for identification when communicating with the target host over an application protocol. Its fingerprint warehouse covers only a limited set of fields.

Ring and Ttbit identify the OS using the performance characteristics of TCP/IP. Because such characteristics are strongly affected by the network environment, the results are often inconclusive.

The literature analyzes behavior in intercepted messages (e.g. the number of SYN requests, or how a closed port responds to a connection request). Although this approach is effective, it distinguishes only a few specific operating systems.

All the systems above lack a way to describe an OS fingerprint in its entirety, so their identification procedures rely on only part of TCP/IP. This paper introduces a new method to solve these problems: it describes OS fingerprints uniformly, acquires messages using several techniques, and finally identifies the OS.

Section II presents the basic concepts of the method. Section III uses frame technology to describe and match protocol fingerprints. Section IV gives an algorithm implementing the method. Section V validates its effectiveness through experiments and analyzes the results. Finally, Section VI concludes the paper and outlines directions for future work.

The identification procedure is to acquire messages, extract the fingerprints, and match them against the records in the fingerprint warehouse in order to determine the OS type. This section defines the measures used to acquire messages and the actions and states of communication, and also classifies the fingerprints. This work prepares for the next section, which shows how to build a frame system for identifying fingerprints.

In this paper, we have presented a method for identifying the operating system of a remote host. The method uses frame technology to describe fingerprints, uses a Probe and a Monitor to obtain messages, and extracts information from the messages to match against the fingerprint warehouse, finally identifying the OS. Experiments show that, compared with nmap and xprobe, this method can accurately identify the operating system of a remote host.

In the future, we plan to collect more fingerprints for each kind of operating system and make the algorithm more intelligent, in order to improve the precision of identification.
This paper presents a method that classifies protocol fingerprints (a protocol being the set of rules governing communication and data transfer between computers), uses frames to describe the fingerprints in order to create a frame system, and matches information obtained from the host against the system to identify the type of OS on the remote host. Experimental results show that this method can identify the OS effectively and acts more stealthily than other systems such as nmap and xprobe.

Key words: TCP/IP; Fingerprint; OS

It is an important field to identify what OS runs on a remote host. Knowing the OS makes it possible to analyze and acquire information such as the memory management and the kind of CPU. This information is important for computer network attack and computer network defense.

The main way to identify the OS is through TCP/IP fingerprinting. Nearly all operating systems customize their own protocol stacks while following the RFCs. This causes every protocol stack to differ in some implementation details. These details are known as fingerprints, and they make it possible to identify the OS.

Nmap and Queso [1] use fingerprints at the transport layer. They send particular packets to the target and analyze the returned packets, matching against the fingerprints in the fingerprint warehouse in order to get the result. The information in the warehouse is affected by the specified probe messages, and it is hard to distinguish similar operating systems (e.g. Windows 98/2000/XP).

Xprobe [2] mainly uses ICMP, making use of five kinds of ICMP packets to identify the OS. It can give the probability of each possible candidate being the actual OS. Its main shortcoming is that it depends excessively on the ICMP protocol.

SYNSCAN [3] uses some typical fields' fingerprints for identification when it communicates with the target host over an application protocol. Its fingerprint warehouse has limited types of fields.

Ring and Ttbit [5][6] identify the OS using the performance characteristics of TCP/IP. This kind of characteristic is greatly affected by the network environment.
The result is therefore often inexact.

The literature [7] analyzes the behavior in messages acquired through interception (e.g. the number of SYN requests, or how a closed port responds to a connection request). Although this approach is effective, it distinguishes only a few given operating systems.

All the systems above lack a way to describe the fingerprint of an OS in its entirety, which causes the identification procedure to depend on only a part of TCP/IP. This paper proposes a new method to resolve the problem: it describes the fingerprint of the OS uniformly, acquires messages by several techniques, and finally identifies the OS.

The rest of the paper is organized as follows. Section II presents the basic concepts of the method. Section III presents how to describe and match protocol fingerprints using frame technology. Section IV presents an algorithm to implement the method, and Section V uses experiments to validate its effectiveness and analyzes the results. Finally, Section VI presents the concluding remarks and possible future work.

The identification procedure is to acquire messages, extract the fingerprints, and match them against the records of the fingerprint warehouse in order to determine the OS type. This section defines the measures used to acquire messages and the actions and states of communication, and also classifies the fingerprints. This work prepares for the next section, which shows how to build a frame system describing the fingerprints.

Conclusion

In this paper, we have presented a method for identifying the OS of a remote host.
The method uses frame technology to express the fingerprints, uses a Probe and a Monitor to obtain messages, and abstracts information from the messages to match against the fingerprint warehouse, finally identifying the OS. Experiments show that this method can exactly identify the OS of a remote host, more stealthily and with fewer packets than nmap and xprobe.

In the future, we plan to collect more fingerprints for each kind of OS and make the algorithm more intelligent, in order to improve the precision of identification.
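The matching step the paper describes, comparing features extracted from probe replies against records in a fingerprint warehouse, can be sketched as a simple scoring match. The feature names (TTL, TCP window size, DF bit) and the warehouse values below are illustrative assumptions, not the paper's actual fingerprint database.

```python
# Hypothetical fingerprint warehouse: per-OS expected TCP/IP features.
WAREHOUSE = {
    "Windows 2000": {"ttl": 128, "window": 16384, "df": 1},
    "Linux 2.4":    {"ttl": 64,  "window": 5840,  "df": 1},
    "FreeBSD 4.x":  {"ttl": 64,  "window": 57344, "df": 1},
}

def match_os(observed):
    """Return (os_name, score): the record agreeing on the most features."""
    def score(entry):
        return sum(1 for k, v in entry.items() if observed.get(k) == v)
    best = max(WAREHOUSE, key=lambda name: score(WAREHOUSE[name]))
    return best, score(WAREHOUSE[best])

# Features extracted from (hypothetical) probe replies.
probe_result = {"ttl": 64, "window": 5840, "df": 1}
guess = match_os(probe_result)
```

A real implementation would carry many more features per record (the paper's frames) and weight them, but the warehouse-lookup structure is the same.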