Foreign-Language Translations on Databases
Translated excerpt: Indexes. Source: Thomas Kyte, Expert Oracle Database Architecture, 2nd Edition.
When should you use a B*Tree index? I do not put blind faith in rules of thumb (every rule has its exceptions), so I have no rule of thumb to offer for when a B*Tree index should be used.
To demonstrate why I cannot provide one, here are two equally valid rules:
• Use a B*Tree index if you want to retrieve, via the index, only a small fraction of the rows in the table.
• Use a B*Tree index if you are going to process many rows of a table and the index can be used in place of the table.
These rules seem to offer conflicting advice, but in reality they do not: they simply cover two extremely different cases.
Given the advice above, there are two ways to use an index:
• As the means to access rows in a table.
You read the index to get to a particular row in the table.
Here you want to access only a small percentage of the rows in the table.
• As the means to answer a query.
The index contains enough information to answer the entire query, so we do not have to go to the table at all.
The index is used as a thinner version of the table.
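To make this second case concrete, here is a minimal SQL sketch. The index name and the assumption that T has OWNER and STATUS columns follow the excerpt's running example; treat it as illustrative rather than as the author's own listing:

-- Hypothetical covering index: every column the query touches is
-- in the index, so Oracle never has to visit the table itself.
create index t_owner_status_idx on t (owner, status);

-- This query can be answered entirely from the index (for example
-- via an index fast full scan); no TABLE ACCESS step is needed.
select owner, status
  from t
 where owner = USER;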
There are other ways as well; for example, we may use an index to retrieve all of the rows of a table, including columns that are not in the index itself.
That seems to contradict both of the rules just presented.
The case where this makes sense is a truly interactive application.
In such an application you retrieve some of the rows and display them, then fetch some more, and so on.
Here the query is optimized for initial response time rather than for overall query throughput.
The first case (using the index to access a small percentage of the rows in the table) implies that if you have a table T (consistent with the table T used earlier) and you get an execution plan like this for a query:

ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select owner, status
  2  from t
  3  where owner = USER;

Execution Plan
----------------------------------------------------------
Plan hash value: 1049179052

------------------------------------------------------------------
| Id | Operation                    | Name       | Rows | Bytes |
------------------------------------------------------------------
|  0 | SELECT STATEMENT             |            | 2120 | 23320 |
|  1 |  TABLE ACCESS BY INDEX ROWID | T          | 2120 | 23320 |
|* 2 |   INDEX RANGE SCAN           | DESC_T_IDX |    8 |       |
------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access(SYS_OP_DESCEND("OWNER")=SYS_OP_DESCEND(USER@!))
       filter(SYS_OP_UNDESCEND(SYS_OP_DESCEND("OWNER"))=USER@!)

then you should be accessing a very small percentage of the rows in the table.
Introduction to Databases

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.
Database records and files must be organized to allow retrieval of the information. Early systems were arranged sequentially (i.e., alphabetically, numerically, or chronologically); the development of direct-access storage devices made possible random access to data via indexes. Queries are the main way users retrieve database information. Typically, the user provides a string of characters, and the computer searches the database for a corresponding sequence and provides the source materials in which those characters appear. A user can request, for example, all records in which the content of the field for a person's last name is the word Smith.
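That request maps directly onto a query language such as SQL; a minimal sketch, with a hypothetical person table and last_name field:

-- Retrieve every record whose last-name field is 'Smith'
select *
  from person
 where last_name = 'Smith';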
Database management moves into the Grid

Database management software (DBMS) has been the backbone of enterprise computing for the past many years. The market is growing bigger in terms of size, and will continue to gain prominence in 2004. With the consolidation, standardisation and centralisation of IT systems underway in most organisations, the demand for highly scalable and reliable database systems is on the rise.

According to reliable industry estimates, the Indian database market is currently at about $100 million, and the top three players put together have a market share of more than 70 percent. IDC expects the information and data management software segment to grow at a compounded annual growth rate (CAGR) of 17 percent till 2006. "There will be independent solutions like business intelligence that are largely going to drive the use and adoption of databases," says Tarun Malik, product marketing manager, Microsoft India.

The importance of having a database and data warehouses for various specific applications will also be a factor of growth to drive the market. Early adopters of sophisticated database management and business intelligence tools would be large computing verticals like the government, the banking, financial services and insurance (BFSI) sector, telecom, IT services, manufacturing and the retail sector.

Current status

Four or five years ago DBMS was just like a data store, with medium and large companies only looking at it as a tool for storing data. Then around three years ago it really moved into what is called the relational database space. This is where the concept of applications on databases came into the picture.

In terms of users there has been a shift from mere database administrators to developers to data warehouse managers, and also towards business intelligence usage that involves a whole lot of people and not just CIOs. This means users have also evolved with the evolution of the product, its usage and market. Till the time it was a data store, database administrators could have managed it. But when it became a data warehouse, CIOs and skilled technical experts got involved.

That is why DBMS is now an integral and crucial part of the overall IT policy of large enterprises. The importance of DBMS has come to the fore especially after the adoption of ERP and CRM solutions. If you look at the top of the pyramid, for the top few IT spenders, DBMS has become as important as network infrastructure. "As a matter of fact, that is why it is also driving the platform strategy of vendors," says Malik. However, the trend is still evolving in the SME space.

One can now see a very strong momentum in the marketplace. As data continues to grow exponentially, one witnesses the type of information changing from record-oriented to content-oriented data. Databases have become content or information repositories. Handling that and supporting applications is not only transaction-oriented but analysis-oriented. Mixed content is going to be a way in which databases differentiate themselves. There is the trend to push more analytics into the database, with abilities like data mining in real time to support new applications. XML will be important as users now store and build content repositories to represent that kind of content.
In terms of the topology of database performance, the ability to get performance, scalability and high availability in different environments is also gaining importance. Another clear trend in the database space is towards building infrastructure that is robust, secure and low-cost. That is why almost all vendors are looking at offering unlimited scalability and reliability on low-cost computers.

Drivers

Apart from the increasing adoption of databases in different verticals, the return on investment (RoI) and functionality of databases are also fuelling the growth of DBMS in the country. Consumers, especially after the dot-com debacle, have started looking at spending less and deriving more RoI from new technology, products and software. Any vendor who relates his offering to RoI would be a successful vendor.

Open Source

No one has so far dumped a clustered Oracle 9i database and replaced it with a free, open source database downloaded from the Web and running on a bunch of Intel-based Linux/free OS servers. But a growing number of users are pioneering these freely available databases. These users say that open source databases are reaching a stage where they can become the latest addition to their inventory of open source tools, including the Linux operating system, the Apache Web server and the Tomcat Java servlet engine. According to these users, the main attractions of an open source database are:

• Very fast performance, especially in read-only applications.
• No or nominal licensing costs.
• Low administrative and operational costs.

As to the back-end servers, users are still ingrained with Oracle or DB2, which has a fair amount of support for Linux. It is a typical pattern in companies that are experimenting with open source databases. High-volume database updates, which are the essence of transaction-processing applications, remain anchored on products such as Oracle's 9i and IBM's DB2 Universal Database, and increasingly Microsoft's SQL Server. But there are a host of new application areas that don't require the complex and equally expensive features of conventional databases.

The MySQL open source database from MySQL AB has spread from being used by a few groups to the core infrastructure of the Internet portal. MySQL is a core piece of the content-generation system for many large users. Open source databases are typically available for free or for a nominal charge and include the complete source code. Finally, in accordance with the terms of the GNU General Public License (GPL), users typically have the freedom to change any part of the source code and use it without charge as long as they publish the change. Once published, the change can be used by anyone. An alternative arrangement is the Berkeley Software Distribution (BSD) licence; developers can use, copy, modify, and distribute this software free of cost.

There is an array of open source databases. Firebird, based on Borland's venerable InterBase database, is one of the few that have the support and blessings of vendors and a well-organised community of coders. MySQL is also proving to be popular among open source communities. Every time a new programming language comes out, the first thing that developers usually do is add database connectivity to MySQL. PostgreSQL is the most matured of the open databases, and maintains an extensive Web presence for its developer community. It is backed by a Canadian company that offers applications along with support services.
Red Hat bases its product offerings on PostgreSQL. The open databases are often storehouses of innovation. MySQL has an architecture with a core relational manager that can be used by different kinds of plug-in data handlers. These open databases tend to be far simpler than their conventional counterparts in all these areas. They also have low operational overheads.

A common criticism of open source databases is that they don't support transactions, or don't do so as well as commercial products. For example, MySQL has a fast database for content storage, but it is still immature in terms of transaction processing at the back-end. However, immaturity in some areas of an open database might not be a problem if the software has what you need in other areas, or has a credible track record of delivering new features on a regular basis.

Conclusion

The database segment will continue to grow as businesses rely more and more on information as a source of competitive advantage. However, the market has definitely evolved over the years, though it has not yet reached high maturity levels. As the SME segment has started adopting the technology, experts opine that there is going to be huge momentum in the market. The Indian SME market is no longer just a PC market; rather, it has become a well-networked and well-connected segment, which is why it has also started using servers. On the enterprise side one will witness a lot of momentum around solutions like application integration, business intelligence and reporting services. It is expected that three factors are going to drive the Indian DBMS market in this fiscal: solutions, RoI and functionality. With vendors focusing on these aspects, one expects the market to experience good growth this fiscal.

Oracle India

Oracle feels that by adopting Grid computing (the recently announced 10g enablement) with databases like Oracle 9i, organisations can reduce the cost of IT by running it on low-cost commodity hardware. Oracle has the ability to deliver all elements of the information architecture. On one hand are the development tools, database and application servers; on the other hand is the comprehensive suite of applications in the Oracle E-Business Suite. Moreover, being based on open standards, customers can adopt a hybrid model, which has a mix of legacy and customised applications, and offers a stepping-stone for organisations to move into an infrastructure with a common data model.

In terms of technology, Oracle's focus is on the components of the Oracle 10g infrastructure software. Oracle Database and Oracle Application Server provide a powerful deployment platform for enterprise applications, from companies with a turnover of Rs 10 crore to the largest corporates. It has immense applicability in BFSI, manufacturing, telecom, and the government sector. It also has one of the most secure database technologies. Currently, a number of state governments are implementing Oracle-based solutions. Oracle has already launched the next release of its infrastructure software: Oracle 10g. Oracle 10g is the infrastructure software for Grid computing, which lets the user combine the power of multiple low-cost computers to work as a single powerful and reliable computer.

Apart from enabling Grid computing, Oracle Database 10g includes new self-management and tuning capabilities that empower a DBA to focus on higher value-added jobs rather than the day-to-day management of a database.
It allows database administrators to work with the consumers of technology to determine service level agreements and use policy-based database management capability to manage the system. With the release of the Oracle 10g infrastructure software, Oracle hopes to further increase its market share in India.

Microsoft

Microsoft is very aggressively growing its base for SQL Server 2000. It promises to meet the demands of customers' data management systems. The company has also gained strength with the promise of ease of manageability and better RoI. Again, as a corporation, the kind of support Microsoft offers to its consumers is unmatched. It involves its customers in the development of its new products. For example, development of the next version of SQL Server 2000, called 'Yukon', has involved not only Microsoft partners but also prime customers worldwide. The kind of investment that Microsoft puts into R&D is huge.

In the days to come, Microsoft will be focusing more on business value to consumers. The consumer understands the business value of a solution, be it business intelligence or application integration. To increase its focus on the mid-tier and the SME market, the company is also going to enhance its channels. Microsoft is also looking at evolving its product, with its new version coming up by the end of this calendar year.

Bettering RoI is at the top of Microsoft's agenda. It believes that the biggest RoI is going to come through the deployment of the solution, which is going to help drive the customer's business. Microsoft, all across its server lines, is known for ease of use and manageability. The company recently released Reporting Services for SQL Server 2000, at no additional cost. Last year it had introduced a 64-bit version of SQL Server at no additional cost. The kind of rich product functionality that the company is bringing in will clearly help users in realising better RoI. Microsoft will continue to focus on segments like government, BFSI, telecom, IT services, manufacturing and retail.

Sybase-SAP alliance

In a move to provide customers with greater choice, SAP has started offering its business applications for small companies on Sybase's database platform, in addition to Microsoft's SQL Server database. Under the agreement, SAP and Sybase will integrate SAP's 'Business One' product suite for small and mid-size businesses (SMEs) into Sybase's Adaptive Server Enterprise (ASE) database system. Previously, SAP's Business One application was available on Microsoft's SQL Server database only. SAP will market its combined offering with Sybase through its partner distribution channels. Both SAP and Sybase will dedicate marketing, alliance and training resources to the partnership. In addition, SAP and Sybase plan to develop and market Sybase mobile solutions for Business One customers.

Source: /flk.aspx?id=191779&fn=OA00338786.mht&url=http%3a%2f%2fwww.expresscomp %2f20040329%2fdms01.shtml
On Database Deadlocks and Blocking

The database itself provides a lock management mechanism, but in one sense the database is a "puppet" of its client applications, mainly because the client has complete control over how locks are acquired on the server. The client largely controls how queries are submitted and how they are processed, so if the application is not designed carefully enough, deadlocks and blocking in the database become a routine occurrence.

Listed below are some application patterns that easily lead to blocking.

1. The client cancels a query without rolling back. Querying is the most frequent operation in most applications, but a user who queries the back-end database through the front-end client will sometimes cancel the query for one reason or another, for example because the query window appears to hang or responds too slowly. However, when the client cancels the query without issuing a rollback statement, the locks already taken on the tables involved remain, because the user's request had already been sent to the server. So even after the user cancels the query, all locks acquired within the transaction are retained, and other users who then need to query the same table will be blocked.

2. The client does not fetch all the rows of its result set. Usually, after a query is sent to the server, the front-end application should fetch all the result rows promptly. If it does not, locks may remain on the tables and block other users. Since the application has already submitted the SQL statement to the server, it must fetch all the result rows. If the application does not follow this principle (for example, because of an oversight in its configuration), the resulting blocking cannot be fundamentally resolved.

3. Queries take too long to execute. Some queries are inherently time-consuming: if a query is badly designed, or the tables and the number of records it touches are large, its execution time grows. And if a user's Update or Delete operation involves many rows, it acquires many locks; whether or not those locks eventually escalate to a table lock, they can block other queries. So as a rule, do not mix long-running decision-support queries with online transaction processing queries.

When the database runs into blocking, we usually need to examine the SQL statements the application submits, as well as connection management, result-set processing, and other related application behavior. To avoid blocking caused by lock conflicts, the author offers the following suggestions.

Suggestion 1: fetch all query results promptly once the query completes. Some applications, in order to improve perceived response speed, selectively fetch only the records they need at the moment. This "smart" approach looks reasonable, but it causes more waste: if the results are not fetched promptly, the locks cannot be released, and blocking occurs when others query the same data. The author therefore suggests that applications be designed to fetch queried records promptly.
Query efficiency can be improved through other means, such as adding query conditions or running queries in the background. A sensible cache at the application level can also improve query efficiency very significantly.

Suggestion 2: do not ask the user for input in the middle of a transaction. Although letting the user participate during a transaction can improve interactivity, database administrators tend to advise against it: if the user types input while a transaction is executing, the transaction's execution time is extended. However smart people are, their response speed is no match for a computer's, so letting users participate during execution lengthens the time other sessions spend waiting on the transaction. Unless there is a special need, do not prompt the user for parameters during a transaction's execution; any parameters a transaction must have are best supplied beforehand, for example passed in as variables.

Suggestion 3: keep transactions as brief as possible. The author believes the database administrator should simplify where possible: when a job requires many SQL statements to complete, decompose the task, breaking it down into several brief transactions. Suppose a product information table holds two million records, and a management task requires changing one million five hundred thousand of them in one pass. Done as a single transaction, this takes a long time; if cascading updates are involved, even longer. This is where keeping transactions brief helps. The product table may have a product-type field, so instead of updating everything at once, we can use the product category field to control the iteration over the records. The update for each category then consumes far less time. The operation requires more steps, but it effectively avoids blocking and improves database performance. (A sketch of this chunk-by-category approach appears at the end of this excerpt.)

Suggestion 4: avoid combining subqueries with list boxes. In application design a list box can genuinely improve the speed and accuracy of user input, but if the front-end application has no caching mechanism, it often causes blocking. In an order management system, for example, sales representatives may need to be entered frequently, so for convenience the sales representative field is often designed as a list box. Every time input is needed, the front-end application queries the back end for all sales representative information (if the application has no cache). On the one hand, the subquery is naturally slow; on the other, the list box adds its own time to the query. Together they can lengthen the application's query so much that other users' queries (say, a system administrator maintaining customer information) are blocked. So use subqueries sparingly in application design, and avoid combining a subquery with a list box altogether. If it cannot be avoided, implement a caching mechanism in the application.
That way, when the application needs sales representative information, it takes it from the application cache rather than querying the database every time. A "re-query" function can also be designed into the list box: when user information changes, say when the system administrator adds a new sales representative, the application's cached data does not yet include the new content, so the user runs the re-query function to make the front end read the information from the database again. This design reduces the execution time of the list box and its subquery, and effectively avoids blocking.

Suggestion 5: provide a rollback when a query is cancelled. The front-end application should be designed to let users change their minds and cancel a query. A user who queries all product information, for example, may find the response time unbearably long and decide to cancel. In this case the application needs a cancel button the user can click at any time during the query, and the button's event handler must include a rollback command so that the database server can promptly release the locks on the records or tables involved.

It is also best to have a lock or query timeout mechanism, because a runaway query can consume a large amount of host resources and even crash the client. With such a mechanism, the database server automatically unlocks the related objects once the query times out. This, too, is something the database administrator needs to negotiate with the application developers.

In addition, explicitly controlling database connections for concurrent users, load-testing the application at the expected full load, using pooled connections, and setting query and lock timeouts for each query are all methods that can effectively avoid blocking from lock conflicts. When database administrators find symptoms of blocking, they can look for solutions along these lines.

From the above analysis we can see that SQL Server's locks are a double-edged sword: while they protect the consistency of database data, they also have some negative effects on the database. Reducing those negative effects to a minimum is the database administrator's task. Following the advice above in application design can effectively resolve blocking caused by locks and improve database performance. Clearly, fundamentally solving blocking problems requires database administrators and application developers to work together.
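Two of the suggestions above (brief per-category transactions and a lock timeout) can be sketched in T-SQL. This is a minimal illustration, not the author's code; the products table, its product_type and price columns, and the 5-second timeout are assumptions:

-- Fail fast instead of blocking indefinitely: any statement that
-- waits on a lock longer than 5 seconds returns error 1222.
SET LOCK_TIMEOUT 5000;

-- Break one huge update into brief per-category transactions so
-- locks are held only for the duration of each small batch.
DECLARE @category INT;
DECLARE category_cursor CURSOR FOR
    SELECT DISTINCT product_type FROM products;
OPEN category_cursor;
FETCH NEXT FROM category_cursor INTO @category;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRANSACTION;
    UPDATE products
       SET price = price * 1.10        -- the actual change is immaterial
     WHERE product_type = @category;
    COMMIT TRANSACTION;                -- release this batch's locks now
    FETCH NEXT FROM category_cursor INTO @category;
END;
CLOSE category_cursor;
DEALLOCATE category_cursor;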
CUSTOMER TARGETING

The earliest determinant of success in the development of a profitable card scheme will lie in the quality of applicants that are attracted by the marketing effort. Not only must there be sufficient creditworthy applicants to avoid fruitless and expensive application processing, but it is critical that the overall mix of new accounts meets the standard necessary to ensure ultimate profitability. For example, the marketing initiatives may attract a sufficient volume of applicants that are assessed as above the scorecard cut-off, but the proportion of acceptances in the upper bands may be insufficient to deliver the level of profit and the lower bad debt required to achieve the financial objectives of the scheme. This chapter considers the range of data sources available to support the development of a credit card scheme and the tools that can be applied to maximize the flow of applications from the required categories.

Data availability

The data that makes up the ingredients from which marketing campaigns can be constructed can come from many diverse sources. Typically, it will fall into four categories:

1. the national or regional register of voters;
2. the national or regional register of court judgments that records the outcome of creditor-debtor litigation;
3. any national or regional pooled information showing the credit history of clients of the participating lenders; and
4. commercially compiled data, including data culled from name and address lists, survey results and other market analysis data, e.g. neighborhood and lifestyle categorization through geo-demographic information systems.

The availability and quality of this data will vary from country to country and bureau to bureau. Availability is not only governed by the extent to which the responsible agency has undertaken to record it, but also by the feasibility of accessing the data and the extent (if any) to which local consumer legislation or other considerations (e.g. religious principles) will allow it to be used. Other limitations on the use of available data may lie in the simple impossibility or expense of accessing the information sources, perhaps because necessary consumer consent for divulgence has been withheld or because the records are not yet stored electronically. The local credit information bureaux will be able to provide guidance on all of these matters, as will many local trade or professional associations or the relevant government departments.

Data segmentation and analysis

The following remarks deal with the ways in which lawfully obtained data may then be processed and analyzed in order to maximize its value as the basis of a marketing prospect list. Examples of the types and uses of data that will play a role in the credit decision area are discussed later in the chapter, within the context of application processing. The key categories into which prospects may be segmented include lifestyle, propensity to purchase specific products (financial or otherwise) and levels of risk. The leading international information bureaux will be able to provide segmentation systems that are able to correlate each of these data categories to provide meaningful prospect lists in rank order. Additionally, many bureaux will have the capability to further enhance the strength and value of the data.
Through the selective purchasing of data from bona fide market sources, and by overlaying generic factors deduced from the analysis of the broad mass of industry information that routinely passes through their systems, the best international operators are now able to offer marketing and credit information support that can add significantly to the quality of new applicants. The importance of the role and standard of this data in influencing the quality of the target population for mailings, etc. should not be underestimated. Information that is dated or inaccurate may not only lead a marketer and the organization into embarrassment and damage their reputations, but it will also open the credit card scheme to applicants from outside the target sector or, worse still, applicants outside the lender's view of an acceptable credit risk. From this, it follows that you should seek to use an information bureau whose business principles and operating practices comply with the highest levels of both competence and integrity.

Developing the prospect database

This is the process by which the raw data streams are brought together and subjected to progressive refinement, with the output representing the refined base from which prospecting can begin in earnest. Wide experience, often across many different markets and countries, in the sourcing, handling and analysis of data inevitably improves the quality of the ideas and systems that a bureau can offer for the development of the prospect database. In summary, the typical shape of the service available from the very best bureaux will support a process that runs as follows:

1. collect and consolidate all data to be screened for inclusion;
2. merge the various streams;
3. sort and classify the data by market and credit categories;
4. screen the data using predetermined marketing and credit criteria; and
5. consolidate and output the refined list.

Bureaux will charge for the use of their expertise and systems. Therefore, consideration should be given to the volumes of data that are to be processed and the costs involved at each stage. The most cost-effective approach to constructing prospect databases undertakes only the lowest-cost screening processes in the earlier stages; the more expensive screening processes are not employed until the mass of the data has been reduced by earlier filtering. It is impossible to be prescriptive about the range and levels of service that are available, but reference to one of the major bureaux operating in the region could certainly be a good starting point.

Campaign management and analysis

Again, this is an area where excellent support is available from the best-of-breed bureaux. They will provide both the operational support and the software capabilities to mount, monitor and analyse your marketing campaign, should you so wish. Their depth of experience and capabilities in the credit sector will often open up income-to-cost possibilities from the solicitation exercise that would not otherwise be available to the new entrant.

The First Important Applications of DBMSs

Data items include names and addresses of customers, accounts, loans and their balances, and the connection between customers and their accounts and loans, e.g., who has signature authority over which accounts.
Queries for account balances are common, but far more common are modifications representing a single payment from, or deposit to, an account. As with the airline reservation system, we expect that many tellers and customers (through ATM machines) will be querying and modifying the bank's data at once. It is vital that simultaneous accesses to an account not cause the effect of an ATM transaction to be lost. Failures cannot be tolerated. For example, once the money has been ejected from an ATM machine, the bank must record the debit, even if the power immediately fails. On the other hand, it is not permissible for the bank to record the debit and then not deliver the money because the power fails. The proper way to handle this operation is far from obvious and can be regarded as one of the significant achievements in DBMS architecture.

Database systems then changed significantly. Codd proposed that database systems should present the user with a view of data organized as tables called relations. Behind the scenes, there might be a complex data structure that allowed rapid response to a variety of queries. But unlike the user of earlier database systems, the user of a relational system would not be concerned with the storage structure. Queries could be expressed in a very high-level language, which greatly increased the efficiency of database programmers. Relations are tables. Their columns are headed by attributes.

Client-Server Architecture

Many varieties of modern software use a client-server architecture, in which requests by one process (the client) are sent to another process (the server) for execution. Database systems are no exception, and it is common to divide the work of the components into a server process and one or more client processes. In the simplest client/server architecture, the entire DBMS is a server, except for the query interfaces that interact with the user and send queries or other commands across to the server. For example, relational systems generally use the SQL language for representing requests from the client to the server. The database server then sends the answer, in the form of a table or relation, back to the client. The relationship between client and server can get more complex, especially when answers are extremely large. We shall have more to say about this matter in Section 1.3.3. There is also a trend to put more work in the client, since the server will be a bottleneck if there are many simultaneous database users.
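The ATM requirement described earlier in this excerpt (record the debit and dispense the money as one all-or-nothing unit) is exactly what transactions provide. A minimal SQL sketch, with hypothetical accounts and atm_log tables:

-- Debit the account and log the withdrawal atomically: either both
-- changes become permanent at COMMIT, or neither survives a failure.
BEGIN TRANSACTION;

UPDATE accounts
   SET balance = balance - 100
 WHERE account_id = 12345;

INSERT INTO atm_log (account_id, amount, logged_at)
VALUES (12345, -100, CURRENT_TIMESTAMP);

COMMIT;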
An Introduction to Foreign Abstracting Databases

1. Ei Compendex (The Engineering Index)

EI (The Engineering Index) is a comprehensive retrieval tool for the engineering and technology fields, edited and published by the Engineering Information center in the United States. It focuses on abstract and index information for the applied sciences and engineering, covering disciplines such as nuclear technology, bioengineering, transportation, chemical and process engineering, lighting and optical technology, agricultural engineering and food technology, computing and data processing, applied physics, electronics and communications, control engineering, civil engineering, mechanical engineering, and materials engineering. Ei draws its data from 5,100 engineering journals, conference proceedings, and technical reports, adding roughly 250,000 documents per year. In August 2000, Ei launched the new Engineering Information Village-2 (EI Village2, or EV2) edition, which improved the abstract entry format and for the first time included cited references in the Compendex database. On EI Village2, users can retrieve literature from 1884 to the present. EI Village2 offers three search modes: Easy Search, Quick Search, and Expert Search. Search results can be displayed in Citation, Abstract, or Detailed record format, and the Full-Text Links at the bottom left of the abstract page lead to full-text databases such as Elsevier and Springer. EI Village2 reports engineering and technology literature worldwide in a timely manner, providing researchers and engineers with specialized, practical, and up-to-date literature information services.

2. SCI (Science Citation Index)

The Science Citation Index (SCI) is recognized by the international academic community as one of the "four authoritative retrieval tools", covering more than 170 disciplines across the natural sciences, engineering and technology, biomedicine, the social sciences, and the arts and humanities. The database currently comes in two editions, Expanded and Core. As of December 2006, SCI Expanded indexed 6,613 journals and the SCI Core edition indexed 3,766 journals. SCI data is updated weekly. The SCI retrieval system offers four search modes: General Search, Cited Ref Search, Structure Search, and Advanced Search.
A Little English for "Online Database"

The English for 在线数据库 is: online databases. The word "online" is an adjective meaning connected to a network. Example sentences:

Time flies when you are online.
I follow you online.
The syllabus is already available online.
They have become a juggernaut in online advertising, pictures, video and online games.
A sock puppet is an online identity used for purposes of deception within an online community.

What "database" means: n. an organized body of data stored in a computer. Example usages:

dump database command
Are you sure you want to compress the current picture database?
The latter must be presented in the database.
The database is updated monthly.
The information is stored on a large database.
Database Terms in English

Database

A database is an organized collection of data that is stored electronically. It enables users to access, manipulate, and analyze data efficiently and effectively. Databases can be used for a wide range of applications, from simple record-keeping to complex data analysis and decision-making.

Relational Database

A relational database is a database that is organized around tables, which are related to each other through common fields. Each table contains records, which are represented by rows, and fields, which are represented by columns. The relationships between the tables are based on common fields, such as a customer ID or an order ID.

SQL

Structured Query Language (SQL) is a programming language that is used to manage and manipulate data in a relational database. It is used to create, modify, and delete data and to retrieve data from the database. SQL is widely used in business and industry to manage and analyze data.

Data Types

In a relational database, each field has a data type, which defines the kind of data that can be stored in that field. Common data types include text, numeric, date/time, and Boolean. Other data types, such as binary data or images, may also be used in some databases.

Primary Key

A primary key is a field or set of fields in a relational database that uniquely identifies each record in a table. The primary key is used to enforce data integrity, ensuring that each record is unique and that records can be related properly between tables.

Foreign Key

A foreign key is a field or set of fields in a table that refers to the primary key of another table. The foreign key is used to establish relationships between tables and to maintain data integrity by enforcing referential integrity constraints.

ERD

An entity-relationship diagram (ERD) is a graphical representation of the tables and relationships in a relational database. It is used to model the data and to design the database schema.

Normalization

Normalization is the process of organizing data in a database to reduce redundancy and improve data integrity. It involves breaking down tables into smaller, more specific tables and establishing relationships between them to eliminate duplicate data and ensure that data is consistent across the database.

Indexes

Indexes are used to improve the performance of queries in a database by providing faster access to data. An index is a data structure that is created on one or more fields in a table, allowing the database to quickly locate records that match certain criteria.

Triggers

Triggers are automated procedures that are executed in response to certain database events, such as the insertion, deletion, or modification of data. Triggers can be used to enforce business rules or to automate certain database tasks.

Transactions

A transaction is a sequence of database operations that must be executed as a single, atomic unit. Transactions are used to ensure data integrity and to provide a consistent view of the database to all users.

Backup and Recovery

Backup and recovery are critical components of database management. Regular database backups are essential for protecting data against loss or corruption, while recovery procedures are used to restore data in the event of a disaster or other catastrophic event.

Concurrency Control

Concurrency control is the process of managing simultaneous access to a database by multiple users or applications. It ensures that transactions are executed in a correct and consistent manner, while also maintaining data integrity and preventing conflicts or errors.
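Several of the terms above (primary key, foreign key, index, transaction) fit in a few lines of standard SQL. The customers and orders tables are hypothetical, and exact syntax varies slightly between products:

-- Primary key: uniquely identifies each customer row.
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

-- Foreign key: every order must reference an existing customer,
-- which enforces referential integrity between the two tables.
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customers (customer_id),
    order_date  DATE
);

-- Index: speeds up lookups of a customer's orders.
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- Transaction: the two inserts succeed or fail as one atomic unit.
BEGIN TRANSACTION;
INSERT INTO customers (customer_id, name) VALUES (1, 'Smith');
INSERT INTO orders (order_id, customer_id, order_date)
VALUES (100, 1, '2024-01-01');
COMMIT;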
Database space organization

Spatial data management has been an active area of research in the database field for two decades, with much of the research being focused on developing data structures for storing and indexing spatial data. However, no commercial database system provides facilities for directly defining and storing spatial data, and formulating queries based on search conditions on spatial data. We believe the following are the relevant issues on which near-term research should be focused (in order of decreasing importance and urgency).

First, relational query optimization techniques need to be extended to deal with spatial queries, that is, queries that contain search conditions on spatial predicates.

Second, more work needs to be done on experimental validation of the relative performance of some of the more promising data structures and indexing structures proposed thus far, with consideration of a much broader set of operations than the few operations that have typically been used in the limited performance studies conducted thus far.

Third, it is difficult to build into a single database system multiple data structures for spatial indexing, and all the spatial operators that are useful for a wide variety of spatial applications; as such, it is desirable to build a database system so that it will be as easy as possible to extend the system with additional data structures and spatial operators.

If the DBMS provides a way to interactively interrogate and update the database, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multiuser organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

Software for personal computers which performs some of the DBMS functions has been very popular. Personal computers were intended for use by individuals for personal information storage and processing. These machines have also been used extensively by small enterprises and professionals like doctors, accountants, engineers, lawyers and so on. By the nature of their intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, they can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the only aspect of a database system that is important here is data independence. Data independence, as stated earlier, means that application programs and user queries need not be concerned with the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage. The user can store, access and manipulate data at a high level (close to the application) and be totally shielded from the low-level (close to the machine) details of data organization. We will not discuss details of specific PC DBMS software packages here. Let us summarize the strengths and weaknesses of personal computer database software systems: the most obvious positive factor is the user friendliness of the software.
A user with no prior computer background would be able to use the system to store personal and professional data, retrieve it and perform related processing. The user should, of course, satisfy himself about the quality of the software and its freedom from errors (bugs) so that investments in data are protected. For the programmer implementing applications with them, the advantage lies in the support for application development, in terms of input screen generation, output report generation, etc., offered by these systems. The main negative point concerns the absence of data protection features. Unless encrypted, data can be accessed by whoever has access to the machine. Data can be destroyed through mistakes or malicious intent. The second weakness of many of the PC-based systems is that of performance. If data volumes grow to a few thousands of records, performance could be a bottleneck. For organizations where growth in data volumes is expected, the availability of the same or compatible software on large machines should be considered. This is one of the most common misconceptions about database management systems that are used in personal computers. Thoroughly comprehensive and sophisticated business systems can be developed in dBASE, Paradox and other DBMSs. However, they are created by experienced programmers using the DBMS's own programming language. That is not the same as users who create and manage personal files that are not part of the mainstream company system.

Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to a subset of it, called a subschema (pronounced "sub-scheme"). For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. The DBMS can maintain the integrity of the database by not allowing more than one user to update the same record at the same time. The DBMS can also keep duplicate records out of the database; for example, no two customers with the same customer number (key field) can be entered into the database. When a DBMS is used, the detailed knowledge of the physical organization of the data does not have to be built into every application program. The application program asks the DBMS for data by field name; for example, a coded representation of "give customer name and balance due" would be sent to the DBMS. Without a DBMS, the programmer must reserve space for the full structure of the record in the program, and any change in data structure requires changes in all the application programs.

The multiple-database model is represented by proposals for shared and private database architectures, and checkout and checkin of data to and from shared and private databases. Each user may populate his/her private database with data checked out of the shared database, perform updates against the data, and check it back into the shared database. The multiple-database model can be used to work around the conflict situations inherent in long-duration database sessions. Since each user checks data out of the shared database and works against his/her private database, "disconnected" from the shared database (at least on the surface), users can avoid the conflict situations. In particular, multiple users may be simultaneously updating the same object without having to wait for other users to complete their updates. However, when updated data is to be checked into the shared database, it may have to be checked in as a new version, necessitating version management.
Further, when data in a private database references data in the shared database, or vice versa, a private database is not really disconnected from the shared database. For example, the evaluation of a query will in general require the database system to access both a private database and the shared database, even if the query was formulated against a private database. The multiple-database model is more appropriate than the single-database model in an environment where it is easy to determine in advance the logical partitions of the database that correspond to the work to be performed.

The single-database model has received considerably more attention than the multiple-database model. The focus of research into the single-database model has been on addressing the twin problems of long waits and loss of work that the long duration of transactions brings about. The single-database model requires the introduction of a notion of database consistency, and protocols for concurrency control and recovery, that are different from those supported in traditional database systems. If a "reasonable" notion of database consistency is to be supported (i.e. the database system is to enforce it automatically), there are bound to be conflict situations where one transaction comes into an access conflict with some other transaction. If a wait is to be avoided, some means of a negotiated settlement of the conflict must be provided, thereby dragging the users into the details of concurrency control. The single-database model is more appropriate than the multiple-database model in an environment where it is difficult to determine in advance the logical partitions of the database that correspond to the work to be performed, and where the users closely cooperate.

The objective of long-duration transactions is to model long-duration, interactive database access sessions in application environments. The fundamental assumption about the short duration of transactions that underlies the traditional model of transactions is inappropriate for long-duration transactions. An implementation of the traditional model of transactions may cause intolerably long waits when transactions attempt to acquire locks before accessing data, and may also cause a large amount of work to be lost when transactions are backed out in response to user-initiated aborts or system failure situations.

The objective of a transaction model is to provide a rigorous basis for automatically enforcing a criterion for database consistency for a set of multiple concurrent read and write accesses to the database in the presence of potential system failure situations. The consistency criterion adopted for traditional transactions is the notion of serializability. Serializability is enforced in conventional database systems through the use of locking for automatic concurrency control, and logging for automatic recovery from system failure situations. A "transaction" that does not provide a basis for automatically enforcing database consistency is not really a transaction. To be sure, a long-duration transaction need not adopt serializability as its consistency criterion. However,
there must be some consistency criterion.

Despite a large number of proposals on version support in the context of computer-aided design and software engineering, the absence of a consensus on version semantics has been a key impediment to version support in database systems. Because of the differences between files and databases, it is intuitively clear that the model of versions in database systems cannot be as simple as that adopted in file systems to support software engineering. For databases, it may be necessary to manage not only versions of single objects (e.g. a software module, a document) but also versions of a collection of objects (e.g. a compound document, a user manual, etc.) and perhaps even versions of the schema of the database (e.g. a table or a class, a collection of tables or classes).

Broadly, there are three directions of research and development in versioning. First is the notion of "parameterized versioning", that is, designing and implementing a versioning system whose behavior may be tailored by adjusting system parameters. This may be the only viable approach, in view of the fact that there are various plausible choices for virtually every single aspect of versioning. The second is to revisit these plausible choices for every aspect of versioning, with a view to discarding some of them as either impractical or flawed. The third is the investigation into the semantics and implementation of versioning collections of objects and of versioning the database schema.
COLOR SYSTEM OVERVIEW

In the age of office automation and electronic imaging, office documents are being processed, transported, and displayed in a variety of ways. The scope of document processing is enormous; it encompasses page layout, document length, collation, simplex/duplex, color, image quality, finishing, and binding. If the office system is networked, then another dimension of network-related issues (protocol, file format, page description language, compression/decompression, job management, error handling, user interface, and device driver) has to be addressed. Digital color-imaging systems process electronic information from various sources; images may come from a local-area network, a remote-sensing device, different color workstations, or a local scanner. After processing, a document is usually compressed and transmitted to several places via a computer network for viewing, editing, or printing. Moreover, the trend in the industry is moving toward an open environment. This means that various devices such as scanners, computers, workstations, modems, and printers from multiple vendors are assembled into one system. Implementations should be based on public-domain technology rather than proprietary standards. This will allow vendors equal access to the market for system components and give users the widest choice in selecting components. It is a vastly large task to enable the communication of all system components regardless of differences in the operating system, file format, page description language, and information content. Ideally, the exchange should not cause information loss or alteration. A closer look at a document may reveal that it consists of different types of images, primarily text, graphs, and pictorial images. These all have different image characteristics and representations, such as ASCII (American Standard Code for Information Interchange) for text, vector for graphs, and raster for pictorial images. Each type of image and its associated attributes, like the font, font size, halftone, gray level, resolution, and color, have to be dealt with differently. In such a complex environment, there is no doubt that many compatibility problems occur when an image is acquired, transmitted, displayed, and rendered.

With the fast development of Internet technology, large volumes of data arrive in the form of electronic documents from the Web. For the purposes of data integration and data exchange, more and more existing sources, such as relational databases, support public XML export, and an increasing amount of public and private data is described in a semi-structured way. A number of issues need to be addressed when we integrate data from different sources, including heterogeneous and duplicate data, multiple divisions and partners, and changes.

Data heterogeneity results from the use of different information management systems to store data, where each system has its own data structure and access methods. Relational database management systems benefit from the universal acceptance of Structured Query Language (SQL) as the primary means of getting answers, whilst document and email repositories are generally accessed using text search engines with varying interfaces and capabilities. Because these systems were not designed with interoperability in mind, each must generally be accessed using source-specific applications or application programming interfaces (APIs).

Another difficulty in data integration is data duplication: different systems represent the same piece of data in different ways. For example, customers may be identified by name in one database, but by account number in a second repository, while a third may identify the same customer by email address. Frequently a required piece of information is derived from multiple data points. Data integration is further complicated when customers do business with multiple divisions within a large company, or with other partners. Similarly, answering questions about the state of a company's supply chain requires access to vendor and distributor information sources. Doing business electronically across the firewall gives rise to security and data ownership issues. Finally, data integration has to deal with different types of change: changes in business requirements and strategies, in IT systems, mergers and acquisitions, and new product launches. This demands that a data integration solution be sufficiently flexible and adaptable.

One possible solution to the data integration problems mentioned above is to provide XML Web services. They break down the barriers between different computing platforms, development environments and communications networks, allowing organizations to work together electronically without the expense and delay of agreeing on semantics, schemas, interfaces, and other application integration details. XML provides the flexibility for handling data with differing structures. As XML is becoming the principal medium for data exchange over the Web and for information integration in general, increasing amounts of public and private data are described in XML. XML data is usually defined in a tree- or graph-based, self-describing object instance model (Boncz and Kersten, 1999). However, semi-structured data is incompatible with the flat structure of relational database tables, and so the growth of XML data requires new and complex query optimization techniques.

Creating XML files with a text editor would be a lot easier if you didn't have to close all those XML tags. First you have to add the XML declaration and the root element's opening and closing tags. Next, you start adding element opening and closing tags one at a time. Of course, once you have the initial sequence completed you can just copy and paste to repeat the required elements. After doing this hundreds of times you'll be looking for a faster way to create XML files.
Another difficulty in data integration is data duplication-different systems represent the same piece of data in different ways. For example, customers may be identified by name in one database, but by account number in a second repository, may identify the same customer by email address. Frequently a required piece of information is derived from multiple data points. Data integration is further complicated when customers do business with multiple divisions within a large company, or with other partners. Similarly, answering questions about the state of a company's supply chain requires access to vendor and distributor information sources. Doing business electronically across the firewall gives rise to security and data ownership issues. Finally, data integration has to deal with different types of changes; change in business requirements and strategies, in IT systems, mergers and acquisitions, and new product launches. This demands that a data integration solution be sufficiently flexible and adaptable.One possible solution for the data integration problems mentioned above is to provide an XML Web services break down the barriers between different computing platforms, development environments and communications networks, allowing organizations to work together electronically without the expense and delay of agreeing on semantics, schema, interfaces, and other application integration. XML provides the flexibility for handling data with differing structures. As XML is becoming the principal medium for data exchange over the Web and for information integration in general,increasing amounts of public and private data are described in XML. XML data is usually defined in a tree or graph based, self-describing object instance model (Boncz and Kersten, 1999). However, semi-structured data is incompatible with the flat structure of relational database tables, and so the growth of XML data requires new and complex query optimization techniques.Creating XML files with a text editor would be a lot easier if you didn't have to close all those HTML tags. First you have to add the XML declaration and the root opening and closing HTML tags. Next, you start adding element opening and closing tags one at a time. Of course, once you have the initial sequence completed you can just copy and paste to repeat the required elements. After doing this hundreds of times you'll be looking for a faster way to create XML files.Some XML editors will automatically add the closing tag after you have finished typing the opening tag but, you still have to type the brackets around the opening tag. I kept thinking this process should be easier. So, I came up with a solution that allows you to create XML files without using HTML tags.This console application will create an XML file based on user input. Just enter the file name, how many element fields you want, and the name of each field. Optionally, you can include a data type separated by a comma after the field name. You can just enter the field name because the data type is not required. The structure of the XML file that is created will be compatible with the .NET Dataset and can be easily added to a database.In addition to creating the XML file, an XSL file and HTML file are also created. The HTML file uses client side JavaScript to transform the XML file using the XSL file. This provides an easy way to view the new XML file by displaying it in a table layout.The download includes both the source code and the already compiled application. 
Improving ASP Performance with Data Caching

One of the nicest features of ASP.NET is the ability to cache page content. This can be used to substantially reduce the load on a website's database, which is an obvious attraction if the site uses Microsoft Access to store data rather than SQL Server. Unfortunately there is no built-in caching system in classic ASP, but it is easy to build one by using the Application object to store data.

When to use ASP caching: caching is most useful for data that changes, but not too often. For example, an e-commerce store could display a list of popular products, or an information site could display a list of press releases.

Don't forget that it is also possible to build functionality into the admin part of the site so that the cache is flushed when new content is added to the database. That way the website administrator does not have to wait until the cache times out for new content to appear on the website. Remember that data stored in Application variables is visible to all the users of the website.
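Classic ASP's Application object is specific to IIS, but the pattern itself is general. Below is a minimal sketch of the same idea in Java, assuming one shared cache object per application; the names and the time-to-live policy are illustrative, not part of ASP:

// A minimal sketch of the caching pattern: a shared map that stores a value
// together with the time it was cached, and reloads it from the database only
// after the entry has expired. All names here are illustrative.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class DataCache {
    private record Entry(Object value, long cachedAt) {}
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public DataCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Return the cached value, reloading it when the entry is stale or missing.
    public Object get(String key, Supplier<Object> loadFromDatabase) {
        Entry e = cache.get(key);
        if (e == null || System.currentTimeMillis() - e.cachedAt() > ttlMillis) {
            e = new Entry(loadFromDatabase.get(), System.currentTimeMillis());
            cache.put(key, e);
        }
        return e.value();
    }

    // Called from the admin pages so new content appears immediately.
    public void flush(String key) { cache.remove(key); }
}

The flush method corresponds to the admin-side cache flush described above: instead of waiting for the timeout, the administrator's update path simply evicts the stale entry.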
English abstract

Data Transformation Services

DTS facilitates the import, export, and transformation of heterogeneous data. It supports transformations between source and target data using an OLE DB-based architecture. This allows you to move and transform data between the following data sources:

∙ Native OLE DB providers such as SQL Server, Microsoft Excel, Microsoft Works, Microsoft Access, and Oracle.
∙ ODBC data sources such as Sybase and Informix, using the OLE DB Provider for ODBC.
∙ ASCII fixed-field-length text files and ASCII delimited text files.

For example, consider a training company with four regional offices, each responsible for a predefined geographical region. The company is using a central SQL Server to store sales data. At the beginning of each quarter, each regional manager populates an Excel spreadsheet with sales targets for each salesperson. These spreadsheets are imported to the central database using the DTS Import Wizard. At the end of each quarter, the DTS Export Wizard is used to create a regional spreadsheet that contains target versus actual sales figures for each region.

DTS also can move data from a variety of data sources into data marts or data warehouses. Currently, data warehouse products are high-end, complex add-ons. As companies move toward more data warehousing and decision processing systems, the low cost and ease of configuration of SQL Server 7.0 will make it an attractive choice. For many, the fact that much of the legacy data to be analyzed may be housed in an Oracle system will focus their attention on finding the most cost-effective way to get at that data. With DTS, moving and massaging the data from Oracle to SQL Server is less complex and can be completely automated.

DTS introduces the concept of a package, which is a series of tasks that are performed as part of a transformation. DTS has its own in-process component object model (COM) server engine that can be used independently of SQL Server and that supports scripting for each column using Visual Basic® and JScript® development software. Each transformation can include data quality checks and validation, aggregation, and duplicate elimination. You can also combine multiple columns into a single column, or build multiple rows from a single input.

Using the DTS Wizard, you can:

∙ Specify any custom settings used by the OLE DB provider to connect to the data source or destination.
∙ Copy an entire table, or the results of an SQL query, such as those involving joins of multiple tables or distributed queries. DTS also can copy schema and data between relational databases. However, DTS does not copy indexes, stored procedures, or referential integrity constraints.
∙ Build a query using the DTS Query Builder Wizard. This allows users inexperienced with the SQL language to build queries interactively.
∙ Change the name, data type, size, precision, scale, and nullability of a column when copying from the source to the destination, where a valid data-type conversion applies.
∙ Specify transformation rules that govern how data is copied between columns of different data types, sizes, precisions, scales, and nullabilities.
∙ Execute an ActiveX script (Visual Basic or JScript) that can modify (transform) the data when copied from the source to the destination, or perform any operation supported by Visual Basic or JScript development software.
∙ Save the DTS package to the SQL Server MSDB database, Microsoft Repository, or a COM-structured storage file.
∙ Schedule the DTS package for later execution.
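None of the following is the DTS API itself; it is only a hand-rolled Java sketch, with hypothetical table and column names, of what the simplest DTS-style column transformation does: read rows from a source, apply a per-column rule, and insert them into a destination.

// Not the DTS API: a hand-rolled sketch of a copy-with-transform step.
import java.sql.*;
import java.util.function.UnaryOperator;

public class CopyWithTransform {
    static void copy(Connection src, Connection dst,
                     UnaryOperator<String> nameRule) throws SQLException {
        try (Statement s = src.createStatement();
             ResultSet rs = s.executeQuery("SELECT id, name, region FROM sales");
             PreparedStatement ins = dst.prepareStatement(
                     "INSERT INTO sales_copy (id, name, region) VALUES (?, ?, ?)")) {
            while (rs.next()) {
                ins.setInt(1, rs.getInt("id"));
                ins.setString(2, nameRule.apply(rs.getString("name"))); // transform
                ins.setString(3, rs.getString("region"));
                ins.executeUpdate();
            }
        }
    }
}

Here nameRule stands in for the ActiveX script hook described above: it could trim, re-case, or validate each value as it passes from source to destination.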
Once the package is executed, DTS checks to see if the destination table already exists, then gives you the option of dropping and recreating the destination table. If the DTS Wizard does not properly create the destination table, verify that the column mappings are correct, select a different data-type mapping, or create the table manually and then copy the data.

Each database defines its own data types and column and object naming conventions. DTS attempts to define the best possible data-type matches between a source and a destination. However, you can override DTS mappings and specify different destination data-type, size, precision, and scale properties in the Transform dialog box.

Each source and destination may have binary large object (BLOB) limitations. For example, if the destination is ODBC, then a destination table can contain only one BLOB column, and it must have a unique index before data can be imported. For more information, see the OLE DB for ODBC driver documentation.

Note: DTS functionality may be limited by the capabilities of the specific database management system (DBMS) or OLE DB drivers.

DTS uses the source object's name as a default. However, you can also add double quote marks (" ") or square brackets ([ ]) around multiword table and column names if this is supported by your DBMS.

Data Warehousing and OLAP

DTS can function independently of SQL Server and can be used as a stand-alone tool to transfer data from Oracle to any other ODBC- or OLE DB-compliant database. Accordingly, DTS can extract data from operational databases for inclusion in a data warehouse or data mart for query and analysis.

Figure 4. DTS and data warehousing

In the diagram above, the transaction data resides on an IBM DB2 transaction server. A package is created using DTS to transfer and clean the data from the DB2 transaction server and to move it into the data warehouse or data mart. In this example, the relational database server is SQL Server 7.0, and the data warehouse uses OLAP Services to provide analytical capabilities. Client programs (such as Excel) access the OLAP Services server using the OLE DB for OLAP interface, which is exposed through a client-side component called Microsoft PivotTable® Service. Client programs using PivotTable Service can manipulate data in the OLAP server and even change individual cells.

SQL Server OLAP Services is a flexible, scalable OLAP solution, providing high-performance access to information in the data warehouse. OLAP Services supports all implementations of OLAP equally well: multidimensional OLAP (MOLAP), relational OLAP (ROLAP), and a hybrid (HOLAP). OLAP Services addresses the most significant challenges in scalability through partial preaggregation, smart client/server caching, virtual cubes, and partitioning.

DTS and OLAP Services offer an attractive and cost-effective solution. Data warehousing and OLAP solutions using DTS and OLAP Services are developed with point-and-click graphical tools that are tightly integrated and easy to use. Furthermore, because the PivotTable Service client uses OLE DB, the interface is more open to access by a variety of client applications.
Issues for Oracle versions 7.3 and 8.0

Oracle does not support more than one BLOB data type per table. This prevents copying SQL Server tables that contain multiple text and image data types without modification. You may want to map one or more BLOBs to the varchar data type and allow truncation, or split a source table into multiple tables.

Oracle returns numeric data types with precision = 38 and scale = 0, even when there are digits to the right of the decimal point. If you copy this information, it will be truncated to integer values. If mapped to SQL Server, the precision is reduced to a maximum of 28 digits.

The Oracle ODBC driver does not work with DTS and is not supported by Microsoft. Use the Microsoft Oracle ODBC driver that comes with SQL Server. When exporting BLOB data to Oracle using ODBC, the destination table must have an existing unique primary key.

Heterogeneous Distributed Queries

Distributed queries access not only data currently stored in SQL Server (homogeneous data), but also data traditionally stored in a data store other than SQL Server (heterogeneous data). Distributed queries behave as if all data were stored in SQL Server. SQL Server 7.0 will support distributed queries by taking advantage of the UDA architecture (OLE DB) to access heterogeneous data sources, as illustrated in the following diagram.

Figure 5. Accessing heterogeneous data sources with UDA
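SQL Server performs this inside the engine through OLE DB providers; as a point of comparison, the sketch below does the equivalent by hand in application code, pulling rows from two JDBC sources (all table and column names hypothetical) and joining them in memory:

// A hand-rolled heterogeneous join: what a distributed query does for you.
import java.sql.*;
import java.util.HashMap;
import java.util.Map;

public class HeterogeneousJoin {
    static void report(Connection sqlServer, Connection oracle) throws SQLException {
        // Vendor names keyed by id, read from the first source.
        Map<Integer, String> vendors = new HashMap<>();
        try (Statement s = sqlServer.createStatement();
             ResultSet rs = s.executeQuery("SELECT vendor_id, name FROM vendors")) {
            while (rs.next()) vendors.put(rs.getInt(1), rs.getString(2));
        }
        // Join against shipment rows from the second source.
        try (Statement s = oracle.createStatement();
             ResultSet rs = s.executeQuery("SELECT vendor_id, qty FROM shipments")) {
            while (rs.next()) {
                String name = vendors.get(rs.getInt(1));
                if (name != null)
                    System.out.println(name + " shipped " + rs.getInt(2));
            }
        }
    }
}

The attraction of engine-level distributed queries is precisely that this plumbing, and its optimization, disappears into a single SQL statement.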
An Introduction to Database Management Systems
Raghu Ramakrishnan

A database (sometimes spelled data base) is also called an electronic database, referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data. Complex data relationships and linkages may be found in all but the simplest databases.

The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users.

Several major trends are emerging that enhance the value and usefulness of database management systems:

Managers, who require more up-to-date information to make effective decisions.
Customers, who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
Users, who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
Organizations, which discover that information has a strategic value; they utilize their database systems to gain an edge over their competitors.

The Database Model

A data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort, and update the data.
Hierarchical Model

The first database management systems used a hierarchical model; that is, they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used: the record at the root of a tree will be accessed first, then records one level below the root, and so on.

The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model

A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management called the relational model, which uses a table as its data structure. The relational database is the most widely used database structure. Data is organized into related tables. Each table is made up of rows, called records, and columns, called fields. Each record contains fields of data.
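A minimal concrete version of these ideas, assuming any JDBC-accessible database (the in-memory H2 URL below presumes the H2 driver is on the classpath; table and column names are invented):

// Rows are records, columns are fields, and the id column is the primary key
// used to locate exactly one tuple.
import java.sql.*;

public class RelationalDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE employee (" +
                       "  id      INT PRIMARY KEY," +   // unique identifier
                       "  name    VARCHAR(50)," +
                       "  manager VARCHAR(50))");
            st.execute("INSERT INTO employee VALUES (1, 'Smith', 'Jones')");
            // The primary key lets the system fetch exactly one record.
            try (ResultSet rs = st.executeQuery(
                     "SELECT name FROM employee WHERE id = 1")) {
                while (rs.next()) System.out.println(rs.getString("name"));
            }
        }
    }
}

Note how the one-manager-per-employee constraint that the hierarchical model had to wire into the tree shape is here just an ordinary column.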
Advanced Database Applications

The 1990s have seen significant changes in the computer industry. In database systems, we have seen the widespread acceptance of RDBMSs for traditional business applications, such as order processing, inventory control, banking, and airline reservations. However, existing RDBMSs have proven inadequate for applications whose needs are quite different from those of traditional business database applications. These applications include:

■ computer-aided design (CAD);
■ computer-aided manufacturing (CAM);
■ computer-aided software engineering (CASE);
■ office information systems (OIS) and multimedia systems;
■ digital publishing;
■ geographic information systems (GIS);
■ interactive and dynamic Web sites.

Computer-aided design (CAD)

A CAD database stores data relating to mechanical and electrical design covering, for example, buildings, aircraft, and integrated circuit chips. Designs of this type have some common characteristics:

■ Design data is characterized by a large number of types, each with a small number of instances. Conventional databases are typically the opposite. For example, the DreamHome database consists of only a dozen or so relations, although relations such as PropertyForRent, Client, and Viewing may contain thousands of tuples.
■ Designs may be very large, perhaps consisting of millions of parts, often with many interdependent subsystem designs.
■ The design is not static but evolves through time. When a design change occurs, its implications must be propagated through all design representations. The dynamic nature of design may mean that some actions cannot be foreseen at the beginning.
■ Updates are far-reaching because of topological or functional relationships, tolerances, and so on. One change is likely to affect a large number of design objects.
■ Often, many design alternatives are being considered for each component, and the correct version of each part must be maintained. This involves some form of version control and configuration management.
■ There may be hundreds of staff involved with the design, and they may work in parallel on multiple versions of a large design. Even so, the end-product must be consistent and coordinated. This is sometimes referred to as cooperative engineering.

Computer-aided manufacturing (CAM)

A CAM database stores similar data to a CAD system, in addition to data relating to discrete production (such as cars on an assembly line) and continuous production (such as chemical synthesis). For example, in chemical manufacturing there will be applications that monitor information about the state of the system, such as reactor vessel temperatures, flow rates, and yields. There will also be applications that control various physical processes, such as opening valves, applying more heat to reactor vessels, and increasing the flow of cooling systems. These applications are often organized in a hierarchy, with a top-level application monitoring the entire factory and lower-level applications monitoring individual manufacturing processes. These applications must respond in real time and be capable of adjusting processes to maintain optimum performance within tight tolerances. The applications use a combination of standard algorithms and custom rules to respond to different conditions. Operators may modify these rules occasionally to optimize performance based on complex historical data that the system has to maintain.
In this example, the system has to maintain large volumes of data that is hierarchical in nature and maintain complex relationships between the data. It must also be able to rapidly navigate the data to review and respond to changes.

Computer-aided software engineering (CASE)

A CASE database stores data relating to the stages of the software development lifecycle: planning, requirements collection and analysis, design, implementation, testing, maintenance, and documentation. As with CAD, designs may be extremely large, and cooperative engineering is the norm. For example, software configuration management tools allow concurrent sharing of project design, code, and documentation. They also track the dependencies between these components and assist with change management. Project management tools facilitate the coordination of various project management activities, such as the scheduling of potentially highly complex interdependent tasks, cost estimation, and progress monitoring.

Network management systems

Network management systems coordinate the delivery of communication services across a computer network. These systems perform such tasks as network path management, problem management, and network planning. As with the chemical manufacturing example we discussed earlier, these systems also handle complex data and require real-time performance and continuous operation. For example, a telephone call might involve a chain of network switching devices that route a message from sender to receiver, such as:

Node⇔Link⇔Node⇔Link⇔Node⇔Link⇔Node

where each Node represents a port on a network device and each Link represents a slice of bandwidth reserved for that connection. However, a node may participate in several different connections, and any database that is created has to manage a complex graph of relationships. To route connections, diagnose problems, and balance loadings, the network management systems have to be capable of moving through this complex graph in real time.

Office information systems (OIS) and multimedia systems

An OIS database stores data relating to the computer control of information in a business, including electronic mail, documents, invoices, and so on. To provide better support for this area, we need to handle a wider range of data types than names, addresses, dates, and money. Modern systems now handle free-form text, photographs, diagrams, and audio and video sequences. For example, a multimedia document may handle text, photographs, spreadsheets, and voice commentary. The documents may have a specific structure imposed on them, perhaps described using a mark-up language such as SGML (Standardized Generalized Markup Language), HTML (HyperText Markup Language), or XML (eXtended Markup Language), as we discuss in Chapter 29.

Documents may be shared among many users using systems such as electronic mail and bulletin boards based on Internet technology. Again, such applications need to store data that has a much richer structure than tuples consisting of numbers and text strings. There is also an increasing need to capture handwritten notes using electronic devices. Although many notes can be transcribed into ASCII text using handwriting analysis techniques, most such data cannot. In addition to words, handwritten data can include sketches, diagrams, and so on.

In the DreamHome case study, we may find the following requirements for handling multimedia:

■ Image data A client may query an image database of properties for rent.
Some queries may simply use a textual description to identify images of desirable properties. In other cases it may be useful for the client to query using graphical images of features that may be found in desirable properties (such as bay windows, internal cornicing, or roof gardens).
■ Video data A client may query a video database of properties for rent. Some queries may simply use a textual description to identify the video images of desirable properties. In other cases it may be useful for the client to query using video features of the desired properties (such as views of the sea or surrounding hills).
■ Audio data A client may query an audio database that describes the features of properties for rent. Some queries may simply use a textual description to identify the desired property. In other cases it may be useful for the client to use audio features of the desired properties (such as the noise level from nearby traffic).
■ Handwritten data A member of staff may create notes while carrying out inspections of properties for rent. At a later date, he or she may wish to query such data to find all notes made about a flat in Novar Drive with dry rot.

Digital publishing

The publishing industry is likely to undergo profound changes in business practices over the next decade. It is becoming possible to store books, journals, papers, and articles electronically and deliver them over high-speed networks to consumers. As with office information systems, digital publishing is being extended to handle multimedia documents consisting of text, audio, image, and video data and animation. In some cases, the amount of information available to be put online is enormous, on the order of petabytes (10^15 bytes), which would make these the largest databases that a DBMS has ever had to manage.

Geographic information systems (GIS)

A GIS database stores various types of spatial and temporal information, such as that used in land management and underwater exploration. Much of the data in these systems is derived from survey and satellite photographs, and tends to be very large. Searches may involve identifying features based, for example, on shape, color, or texture, using advanced pattern-recognition techniques.

For example, EOS (Earth Observing System) is a collection of satellites launched by NASA in the 1990s to gather information that will support scientists concerned with long-term trends regarding the earth's atmosphere, oceans, and land. It is anticipated that these satellites will return over one-third of a petabyte of information per year. This data will be integrated with other data sources and will be stored in EOSDIS (EOS Data and Information System). EOSDIS will supply the information needs of both scientists and non-scientists. For example, schoolchildren will be able to access EOSDIS to see a simulation of world weather patterns. The immense size of this database and the need to support thousands of users with very heavy volumes of information requests will provide many challenges for DBMSs.

Interactive and dynamic Web sites

Consider a Web site that has an online catalog for selling clothes.
The Web site maintains a set of preferences for previous visitors to the site and allows a visitor to:

■ browse through thumbnail images of the items in the catalog and select one to obtain a full-size image with supporting details;
■ search for items that match a user-defined set of criteria;
■ obtain a 3D rendering of any item of clothing based on a customized specification (for example, color, size, fabric);
■ modify the rendering to account for movement, illumination, backdrop, occasion, and so on;
■ select accessories to go with the outfit, from items presented in a sidebar;
■ select a voiceover commentary giving additional details of the item;
■ view a running total of the bill, with appropriate discounts;
■ conclude the purchase through a secure online transaction.

The requirements for this type of application are not that different from some of the above advanced applications: there is a need to handle multimedia content (text, audio, image, video data, and animation) and to interactively modify the display based on user preferences and user selections. As well as handling complex data, the site also has the added complexity of providing 3D rendering. It is argued that in such a situation the database is not just presenting information to the visitor but is actively engaged in selling, dynamically providing customized information and atmosphere to the visitor (King, 1997).

As we discuss in Chapters 28 and 29, the Web now provides a relatively new paradigm for data management, and languages such as XML hold significant promise, particularly for the e-Commerce market. The Forrester Research Group is predicting that business-to-business transactions will rise by 99% annually and are expected to reach US$1.3 trillion by 2003. In addition, e-Commerce is expected to account for US$3.2 trillion in worldwide corporate revenue by 2003 and potentially represent 5% of sales in the global economy. As the use of the Internet increases and the technology becomes more sophisticated, we will see Web sites and business-to-business transactions handle much more complex and interrelated data.

Other advanced database applications include:

■ Scientific and medical applications, which may store complex data representing systems such as molecular models for synthetic chemical compounds and genetic material.
■ Expert systems, which may store knowledge and rule bases for artificial intelligence (AI) applications.
■ Other applications with complex and interrelated objects and procedural data.

Source: Database Systems: A Practical Approach to Design, Implementation and Management (2004), p. 580.
Graduation Project (Thesis) Literature Translation

English text: Computer Networks and Database

Networks

Some reasons are causing centralized computer systems to give way to networks. The first is that many organizations already have a substantial number of computers in operation, often located far apart. Initially, each of these computers may have worked in isolation from the others, but at a certain point management may decide to connect them to be able to correlate information about the entire organization. Generally speaking, the goal is to make all programs, data, and other resources available to anyone on the network without regard to the physical location of the resource and the user.

The second is to provide high reliability by having alternative sources of supply. With a network, the temporary loss of a single computer is much less serious, because its users can often be accommodated elsewhere until the service is restored. Yet another reason for setting up a computer network is that a network can provide a powerful communication medium among widely separated people.

Applications of Networks

One of the main areas of potential network use is access to remote databases. It may someday be easy for people sitting at their terminals at home to make reservations for airplanes, trains, buses, boats, restaurants, theaters, hotels, and so on, anywhere in the world, with instant confirmation. Home banking, automated newspapers, and fully automated libraries also fall into this category. Computer-aided education is another possible field for using networks, with many different courses being offered.

Teleconferencing is a whole new form of communication. With it, widely separated people can conduct a meeting by typing messages at their terminals. Attendees may leave at will and find out what they missed when they come back. International contacts by human beings may be greatly enhanced by network-based communication facilities.

Network Structure

Broadly speaking, there are two general types of designs for the communication subnet:

(1) Point-to-point channels
(2) Broadcast channels

In the first, the network contains numerous cables or leased telephone lines, each one connecting a pair of nodes. If two nodes that do not share a cable wish to communicate, they must do so indirectly via other nodes. When a message is sent from one node to another via one or more intermediate nodes, each intermediate node will receive the message and store it until the required output line is free, so that it can transmit the message onward. A subnet using this principle is called a point-to-point or store-and-forward subnet. When a point-to-point subnet is used, an important design problem is the connection topology between the nodes.

The second kind of communication architecture uses broadcasting. In this design there is a single communication channel shared by all nodes. An inherent property of broadcast systems is that messages sent by any node are received by all other nodes.

The ISO Reference Model

The Reference Model of Open Systems Interconnection (OSI), as ISO calls it, has seven layers. The major principles that were applied to arrive at the seven layers are as follows:

(1) A layer should be created where a different level of abstraction is needed.
(2) Each layer should perform a well-defined function.
(3) The function of each layer should be chosen with an eye toward defining internationally standardized protocols.
(4) The layer boundaries should be chosen to minimize the information flow across the interfaces.
(5) The number of layers should be large enough that distinct functions need not be put together in the same layer without necessity, and small enough that the architecture does not become out of control.
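As a toy illustration of principles (2) and (4), the sketch below treats each layer as a function that adds its own header to an opaque payload handed down from the layer above; nothing here is real protocol code, and the header strings are invented:

// Each layer only talks to the one directly below it and treats the data it
// receives from above as an opaque payload.
public class Layering {
    static String transport(String data)  { return "[T:seq=1]" + data; }
    static String network(String segment) { return "[N:route=B]" + segment; }
    static String dataLink(String packet) { return "[D:frame]" + packet + "[D:end]"; }

    public static void main(String[] args) {
        String message = "HELLO";
        String onTheWire = dataLink(network(transport(message)));
        System.out.println(onTheWire);
        // prints: [D:frame][N:route=B][T:seq=1]HELLO[D:end]
    }
}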
The Physical Layer

The physical layer is concerned with transmitting raw bits over a communication channel. Typical questions here are how many volts should be used to represent a 1 and how many for a 0, how many microseconds a bit occupies, whether transmission may proceed simultaneously in both directions, how the initial connection is established and released when both sides are finished, and what kind of function each pin has. The design issues here largely deal with mechanical, electrical, and procedural interfacing to the subnet.

The Data Link Layer

The task of the data link layer is to take a raw transmission facility and transform it into a line that appears free of transmission errors to the network layer. It accomplishes this task by breaking the input data up into data frames, transmitting the frames sequentially, and processing the acknowledgment frames sent back by the receiver. Since the physical layer merely accepts and transmits a stream of bits without any regard to meaning or structure, it is up to the data link layer to create and recognize frame boundaries. This can be accomplished by attaching special bit patterns to the beginning and the end of the frame. But this produces two problems. One is that a noise burst on the line can destroy a frame completely; in this case, the software in the source machine must retransmit the frame. The other is that some mechanism must be employed to let the transmitter know how much buffer space the receiver has at the moment.

The Network Layer

The network layer controls the operation of the subnet. It determines the chief characteristics of the node-host interface and how packets, the units of information exchanged in this layer, are routed within the subnet. What this layer of software does, basically, is accept messages from the source host, convert them to packets, and see that the packets get to the destination. The key design issue is how the route is determined. It could be based on static tables that are "wired into" the network and rarely changed, or it could be highly dynamic, determined anew for each packet to reflect the current network load.

The Transport Layer

The basic function of the transport layer is to accept data from the session layer, split it up into smaller units if necessary, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. This layer is a true end-to-end layer. In other words, a program on the source machine carries on a conversation with a similar program on the destination machine, using message headers and control messages.

The Session Layer

With the session layer, the user must negotiate to establish a connection with a process on another machine. The connection is usually called a session. A session might be used to allow a user to log into a remote time-sharing system or to transfer a file between two machines. The operation of setting up a session between two processes is often called binding. Another function of the session layer is to manage the session once it has been set up.
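A minimal sketch of session setup between two processes, using plain TCP sockets to stand in for the session machinery described above (host, port, and protocol text are all placeholders):

// "Binding": establish the connection that the session will use, then use it,
// for example for a remote login.
import java.io.*;
import java.net.Socket;

public class SessionClient {
    public static void main(String[] args) throws IOException {
        try (Socket session = new Socket("remote.example", 7000);
             PrintWriter out = new PrintWriter(session.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(session.getInputStream()))) {
            out.println("LOGIN user1");      // use the session
            System.out.println(in.readLine());
        } // closing the socket ends the session
    }
}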
The Presentation Layer

The presentation layer could be designed to accept ASCII strings as input and produce compressed bit patterns as output. This function of the presentation layer is called text compression. In addition, this layer can perform other transformations. Encryption to provide security is one possibility. Conversion between character codes, such as ASCII to EBCDIC, might often be useful. More generally, different computers usually have incompatible file formats, so a file conversion option might be useful at times.

The Application Layer

Many issues occur here. For example, there are all the issues of network transparency, that is, hiding the physical distribution of resources from the user. Another issue is problem partitioning: how to divide the problem among the various machines in order to take maximum advantage of the network.

2. Database Systems

The conceptions used for describing files and databases have varied substantially, even within the same organization. A database may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use them, and a common, controlled approach is used in adding new data and in modifying and retrieving existing data within the database. A system may contain a collection of databases if they are entirely separate in structure.

A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, a DBMS, and a database.

One of the most important characteristics of most databases is that they will constantly need to change and grow. Easy restructuring of the database must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a database can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a database. It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, database organization is concerned with representing the relationships between the data items about which we store information, referred to as entities. An entity may be a tangible object or intangible. It has various properties which we may wish to record, and it describes the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes; one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple; it is the entity identifier, consisting of one or more attributes.
The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.

If the function of a database were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. The data can be described at either a logical or a physical level. The logical database description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record. When we talk about a "personnel record", this is really a record type. There are no data values associated with it. The term schema is used to mean an overall chart of all of the data types and record types stored in a database. The term subschema refers to an application programmer's view of the data he uses. Many different subschemas can be derived from one schema.

The schema and the subschema are both used by the database management system, the primary function of which is to serve the application programs by executing their data operations. A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small databases, many databases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss the data model in the following.

ASP.NET and the .NET Framework

ASP.NET is part of Microsoft's overall .NET framework, which contains a vast set of programming classes designed to satisfy any conceivable programming need. In the following two sections, you learn how ASP.NET fits within the .NET framework, and you learn about the languages you can use in your ASP.NET pages.

The .NET Framework Class Library

Imagine that you are Microsoft. Imagine that you have to support multiple programming languages, such as Visual Basic, JScript, and C++. A great deal of the functionality of these programming languages overlaps. For example, for each language, you would have to include methods for accessing the file system, working with databases, and manipulating strings.

Furthermore, these languages contain similar programming constructs. Every language, for example, can represent loops and conditionals. Even though the syntax of a conditional written in Visual Basic differs from the syntax of a conditional written in C++, the programming function is the same.

Finally, most programming languages have similar variable data types. In most languages, you have some means of representing strings and integers, for example. The maximum and minimum size of an integer might depend on the language, but the basic data type is the same.
Maintaining all this functionality for multiple languages requires a lot of work. Why keep reinventing the wheel? Wouldn't it be easier to create all this functionality once and use it for every language?

The .NET Framework Class Library does exactly that. It consists of a vast set of classes designed to satisfy any conceivable programming need. For example, the .NET framework contains classes for handling database access, working with the file system, manipulating text, and generating graphics. In addition, it contains more specialized classes for performing tasks such as working with regular expressions and handling network protocols.

The .NET framework, furthermore, contains classes that represent all the basic variable data types such as strings, integers, bytes, characters, and arrays.

Most importantly, for purposes of this book, the .NET Framework Class Library contains classes for building ASP.NET pages. You need to understand, however, that you can access any of the .NET framework classes when you are building your ASP.NET pages.

Understanding Namespaces

As you might guess, the .NET framework is huge. It contains thousands of classes (over 3,400). Fortunately, the classes are not simply jumbled together. The classes of the .NET framework are organized into a hierarchy of namespaces.

ASP Classic Note: In previous versions of Active Server Pages, you had access to only five standard classes (the Response, Request, Session, Application, and Server objects). ASP.NET, in contrast, provides you with access to over 3,400 classes!

A namespace is a logical grouping of classes. For example, all the classes that relate to working with the file system are gathered together into the System.IO namespace.

The namespaces are organized into a hierarchy (a logical tree). At the root of the tree is the System namespace. This namespace contains all the classes for the base data types, such as strings and arrays. It also contains classes for working with random numbers and dates and times.

You can uniquely identify any class in the .NET framework by using the full namespace of the class. For example, to uniquely refer to the class that represents a file system file (the File class), you would use the following:

System.IO.File

System.IO refers to the namespace, and File refers to the particular class.

NOTE: You can view all the namespaces of the standard classes in the .NET Framework Class Library by viewing the Reference Documentation for the .NET Framework.

Standard Namespaces

The classes contained in a select number of namespaces are available in your ASP.NET pages by default. (You must explicitly import other namespaces.) These default namespaces contain classes that you use most often in your ASP.NET applications:

• System: Contains all the base data types and other useful classes such as those related to generating random numbers and working with dates and times.
• System.Collections: Contains classes for working with standard collection types such as hash tables and array lists.
• System.Collections.Specialized: Contains classes that represent specialized collections such as linked lists and string collections.
• System.Configuration: Contains classes for working with configuration files (Web.config files).
• System.Text: Contains classes for encoding, decoding, and manipulating the contents of strings.
• System.Text.RegularExpressions: Contains classes for performing regular expression match and replace operations.
• System.Web: Contains the basic classes for working with the World Wide Web, including classes for representing browser requests and server responses.
• System.Web.Caching: Contains classes used for caching the content of pages and classes for performing custom caching operations.
• System.Web.Security: Contains classes for implementing authentication and authorization, such as Forms and Passport authentication.
• System.Web.SessionState: Contains classes for implementing session state.
• System.Web.UI: Contains the basic classes used in building the user interface of ASP.NET pages.
• System.Web.UI.HTMLControls: Contains the classes for the HTML controls.
• System.Web.UI.WebControls: Contains the classes for the Web controls.
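For readers who know Java, packages play the same role as these namespaces, and the analogy may help: a fully qualified name picks out one class unambiguously, with java.io.File corresponding roughly to System.IO.File. This is an analogy only, not part of ASP.NET:

// Packages group related classes; a fully qualified name is unambiguous.
import java.io.File;                 // import the class from its package

public class NamespaceAnalogy {
    public static void main(String[] args) {
        File f = new File("report.txt");            // short name after import
        java.io.File g = new java.io.File("x.txt"); // or fully qualified
        System.out.println(f.getName() + " " + g.getName());
    }
}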
.NET Framework-Compatible Languages

For purposes of this book, you will write the application logic for your ASP.NET pages using Visual Basic as your programming language. It is the default language for ASP.NET pages (and the most popular programming language in the world). Although you stick to Visual Basic in this book, you also need to understand that you can create ASP.NET pages by using any language that supports the .NET Common Language Runtime. Out of the box, this includes C# (pronounced See Sharp), JScript.NET (the .NET version of JavaScript), and the Managed Extensions to C++.

NOTE: The CD included with this book contains C# versions of all the code samples.

Dozens of other languages created by companies other than Microsoft have been developed to work with the .NET framework. Some examples of these other languages include Python, SmallTalk, Eiffel, and COBOL. This means that you could, if you really wanted to, write ASP.NET pages using COBOL.

Regardless of the language that you use to develop your ASP.NET pages, you need to understand that ASP.NET pages are compiled before they are executed. This means that ASP.NET pages can execute very quickly.

The first time you request an ASP.NET page, the page is compiled into a .NET class, and the resulting class file is saved beneath a special directory on your server named Temporary ASP.NET Files. For each and every ASP.NET page, a corresponding class file appears in the Temporary ASP.NET Files directory. Whenever you request the same page in the future, the corresponding class file is executed.

When an ASP.NET page is compiled, it is not compiled directly into machine code. Instead, it is compiled into an intermediate-level language called Microsoft Intermediate Language (MSIL). All .NET-compatible languages are compiled into this intermediate language.

An ASP.NET page isn't compiled into native machine code until it is actually requested by a browser. At that point, the class file contained in the Temporary ASP.NET Files directory is compiled with the .NET framework Just in Time (JIT) compiler and executed.

The magical aspect of this whole process is that it happens automatically in the background. All you have to do is create a text file with the source code for your ASP.NET page, and the .NET framework handles all the hard work of converting it into compiled code for you.

ASP Classic Note: What about VBScript? Before ASP.NET, VBScript was the most popular language for developing Active Server Pages. ASP.NET does not support VBScript, and this is good news. Visual Basic is a superset of VBScript, which means that Visual Basic has all the functionality of VBScript and more. So, you have a richer set of functions and statements with Visual Basic. Furthermore, unlike VBScript, Visual Basic is a compiled language. This means that if you use Visual Basic to rewrite the same code that you wrote with VBScript, you can get better performance. If you have worked only with VBScript and not Visual Basic in the past, don't worry. Since VBScript is so closely related to Visual Basic, you'll find it easy to make the transition between the two languages.
NOTE: Microsoft includes an interesting tool named the IL Disassembler (ILDASM) with the .NET framework. You can use this tool to view the disassembled code for any of the classes in the Temporary ASP.NET Files directory. It lists all the methods and properties of the class and enables you to view the intermediate-level code. This tool also works with all the controls discussed in this chapter. For example, you can use the IL Disassembler to view the intermediate-level code for the TextBox control (located in a file named System.Web.dll).

About Modems

Telephone lines were designed to carry the human voice, not electronic data from a computer. Modems were invented to convert digital computer signals into a form that allows them to travel over the phone lines. Those are the scratchy sounds you hear from a modem's speaker. A modem on the other end of the line can understand the signal and convert the sounds back into digital information that the computer can understand. By the way, the word modem stands for MOdulator/DEModulator.

Buying and using a modem used to be relatively easy. Not too long ago, almost all modems transferred data at a rate of 2400 bps (bits per second). Today, modems not only run faster, they are also loaded with features like error control and data compression. So, in addition to converting and interpreting signals, modems also act like traffic cops, monitoring and regulating the flow of information. That way, one computer doesn't send information until the receiving computer is ready for it. Each of these features (modulation, error control, and data compression) requires a separate kind of protocol, and that's what some of the terms you see, like V.32, V.32bis, V.42bis, and MNP5, refer to.

If your computer didn't come with an internal modem, consider buying an external one, because it is much easier to install and operate. For example, when your modem gets stuck (not an unusual occurrence), you need to turn it off and on to get it working properly. With an internal modem, that means restarting your computer, which is a waste of time. With an external modem it's as easy as flipping a switch.

Here's a tip for you: in most areas, if you have Call Waiting, you can disable it by inserting *70 in front of the number you dial to connect to the Internet (or any online service). This will prevent an incoming call from accidentally kicking you off the line.

This table illustrates the relative difference in data transmission speeds for different types of files. A modem's speed is measured in bits per second (bps). A 14.4 modem sends data at 14,400 bits per second. A 28.8 modem is twice as fast, sending and receiving data at a rate of 28,800 bits per second.

Until nearly the end of 1995, the conventional wisdom was that 28.8 Kbps was about the fastest speed you could squeeze out of a regular copper telephone line. Today, you can buy 33.6 Kbps modems, and modems that are capable of 56 Kbps. The key question for you is knowing what speed modems your Internet service provider (ISP) has. If your ISP has only 28.8 Kbps modems on its end of the line, you could have the fastest modem in the world and only be able to connect at 28.8 Kbps. Before you invest in a 33.6 Kbps or a 56 Kbps modem, make sure your ISP supports them.
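Since a modem's speed is quoted in bits per second while file sizes are quoted in bytes, the comparison behind such a table comes down to one formula: time = size * 8 / speed. A small sketch, ignoring protocol overhead and line noise:

// Back-of-the-envelope download times at different connection speeds.
public class DownloadTime {
    static double seconds(long fileBytes, long bitsPerSecond) {
        return fileBytes * 8.0 / bitsPerSecond;
    }

    public static void main(String[] args) {
        long oneMegabyte = 1_000_000;
        System.out.printf("14.4 Kbps: %.0f s%n", seconds(oneMegabyte, 14_400));
        System.out.printf("28.8 Kbps: %.0f s%n", seconds(oneMegabyte, 28_800));
        System.out.printf("ISDN 128K: %.0f s%n", seconds(oneMegabyte, 128_000));
        // roughly 556 s, 278 s, and 63 s respectively
    }
}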
Speed It Up

There are faster ways to transmit data, using an ISDN or leased line. In many parts of the U.S., phone companies are offering home ISDN at less than $30 a month. ISDN requires a so-called ISDN adapter instead of a modem, and a phone line with a special connection that allows it to send and receive digital signals. You have to arrange with your phone company to have this equipment installed. For more about ISDN, visit Dan Kegel's ISDN Page.

An ISDN line has a data transfer rate of between 57,600 bits per second and 128,000 bits per second, which is at least double the rate of a 28.8 Kbps modem.

Leased lines come in two configurations: T1 and T3. A T1 line offers a data transfer rate of 1.54 million bits per second. Unlike ISDN, a T1 line is a dedicated connection, meaning that it is permanently connected to the Internet. This is useful for web servers or other computers that need to be connected to the Internet all the time. It is possible to lease only a portion of a T1 line using one of two systems: fractional T1 or Frame Relay. You can lease them in blocks ranging from 128 Kbps to 1.5 Mbps. The differences are not worth going into in detail, but fractional T1 will be more expensive at the slower available speeds, and Frame Relay will be slightly more expensive as you approach the full T1 speed of 1.5 Mbps. A T3 line is significantly faster, at 45 million bits per second. The backbone of the Internet consists of T3 lines.

Leased lines are very expensive and are generally only used by companies whose business is built around the Internet or who need to transfer massive amounts of data. ISDN, on the other hand, is available in some cities for a very reasonable price. Not all phone companies offer residential ISDN service. Check with your local phone company for availability in your area.

Cable Modems

A relatively new development is a device that provides high-speed Internet access via a cable TV network. With speeds of up to 36 Mbps, cable modems can download data in seconds that might take fifty times longer with a dial-up connection. Because it works with your TV cable, it doesn't tie up a telephone line. Best of all, it's always on, so there is no need to connect: no more busy signals! This service is now available in some cities in the United States and Europe.

The download times in the table above are relative and are meant to give you a general idea of how long it would take to download different sized files at different connection speeds, under the best of circumstances. Many things can interfere with the speed of your file transfer. These can range from excessive line noise on your telephone line and the speed of the web server from which you are downloading files, to the number of other people who are simultaneously trying to access the same file or other files in the same directory.

DSL

DSL (Digital Subscriber Line) is another high-speed technology that is becoming increasingly popular. DSL lines are always connected to the Internet, so you don't need to dial up. Typically, data can be transferred at rates up to 1.544 Mbps downstream and about 128 Kbps upstream over ordinary telephone lines. Since a DSL line carries both voice and data, you don't have to install another phone line. You can use your existing line to establish DSL service, provided service is available in your area and you are within the specified distance from the telephone company's central switching office.

DSL service requires a special modem. Prices for equipment, DSL installation, and monthly service can vary considerably, so check with your local phone company and Internet service provider. The good news is that prices are coming down as competition heats up.
The Net Works

Birth of the Net

The Internet has had a relatively brief, but explosive history so far. It grew out of an experiment begun in the 1960s by the U.S. Department of Defense. The DoD wanted to create a computer network that would continue to function in the event of a disaster, such as a nuclear war. If part of the network were damaged or destroyed, the rest of the system still had to work. That network was ARPANET, which linked U.S. scientific and academic researchers. It was the forerunner of today's Internet.

In 1985, the National Science Foundation (NSF) created NSFNET, a series of networks for research and education communication. Based on ARPANET protocols, the NSFNET created a national backbone service, provided free to any U.S. research and educational institution. At the same time, regional networks were created to link individual institutions with the national backbone service.

NSFNET grew rapidly as people discovered its potential, and as new software applications were created to make access easier. Corporations such as Sprint and MCI began to build their own networks, which they linked to NSFNET. As commercial firms and other regional network providers have taken over the operation of the major Internet arteries, NSF has withdrawn from the backbone business.
Foreign-language material

As information technology advances, various management systems have emerged to make daily life more organized; wherever possible, the use of network resources can significantly reduce the inconvenience of manual management and the waste of time.

With the accelerating modernization of the 21st century, the continuous improvement of scientific and cultural levels, and the rapid growth in the number of students, the pressure on student information management inevitably increases, and inefficient manual retrieval is completely incompatible with society's needs. The student information management system is one kind of information management system. With the continuous development of information technology, network technology has been applied extensively in every trade around us. Thanks to the development of network technology, many schools now use computers to manage school affairs. Tedious tasks that the school previously handled by hand can be completed quickly and efficiently. A student results management system in particular plays a large role in a school: it makes it more convenient and faster for students and teachers to consult and manage information accurately in every respect.

Abstract

Managing a bulky database by manpower is a very heavy and thankless job. Disadvantages such as a great volume of work, low efficiency, and long turnaround times exist in data inputting, querying, and modification, so a computer-based management system brings a substantial change.

Because there are so many students in a school, the volume of student information data is huge, which makes managing the information a complicated and tedious job. This system is aimed at the school: after a practical requirements analysis, the powerful VB6.0 was adopted to develop the student information management system. The whole system design process followed the principles of simple operation, an attractive and lively interface, and practicality. The student information management system includes functions for system management, basic information management, study management, prize and punishment management, statement printing, and so on. Through the proof of use, the student information management system designed in this text can satisfy the school's demands for managing student information. The thesis introduces the background of the development, the functions demanded, and the design process. The thesis mainly explains the main points of the system design, the design approach, the difficult techniques, and their solutions.
Building the student management system greatly reduces the manpower consumed and makes the management of student data more scientific and reasonable. The most distinctive feature of this system is that the back-end database manages students' information in a unified way. The system is divided into system management, student specialty management, student file management, school fees management, course management, results management, and report printing. The system interface was built with VB; the modules above all use VB data-bound controls to connect to the back-end database, which is roughly divided into the following tables: specialty information, charge categories, student jobs, student information, students' political status, and user logon. The system uses a Client/Server design, consisting of one data server and a number of LAN workstations. Users with different permissions who log on to the system and submit personal data quickly see only the content the back-end database has authorized for them.

Marks management is an important part of school work, and the original manual management had many insufficiencies: the student population is large, each student's information is complex, the workload is therefore extremely heavy, and statistics and queries have been inconvenient. How to resolve these insufficiencies and make marks management more convenient, faster, and more efficient thus becomes a key question. With the rapid development of science and technology, school automation grows ever more urgent, so it is essential to develop marks-registration software to assist the school's teaching management, improve marks management, and enhance the efficiency of management.

“We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement that holds throughout our speech community and is codified in the patterns of our language … we cannot talk at all except by subscribing to the organization and classification of data which the agreement decrees.” Benjamin Lee Whorf (1897-1941)

The genesis of the computer revolution was in a machine. The genesis of our programming languages thus tends to look like that machine.

But computers are not so much machines as they are mind amplification tools (“bicycles for the mind,” as Steve Jobs is fond of saying) and a different kind of expressive medium. As a result, the tools are beginning to look less like machines and more like parts of our minds, and also like other forms of expression such as writing, painting, sculpture, animation, and filmmaking. Object-oriented programming (OOP) is part of this movement toward using the computer as an expressive medium.

This chapter will introduce you to the basic concepts of OOP, including an overview of development methods. This chapter, and this book, assumes that you have some programming experience, although not necessarily in C. If you think you need more preparation in programming before tackling this book, you should work through the Thinking in C multimedia seminar, downloadable from .

This chapter is background and supplementary material.
Many people do not feel comfortable wading into object-oriented programming without understanding the big picture first. Thus, there are many concepts that are introduced here to give you a solid overview of OOP. However, other people may not get the big-picture concepts until they’ve seen some of the mechanics first; these people may become bogged down and lost without some code to get their hands on. If you’re part of this latter group and are eager to get to the specifics of the language, feel free to jump past this chapter—skipping it at this point will not prevent you from writing programs or learning the language. However, you will want to come back here eventually to fill in your knowledge so you can understand why objects are important and how to design with them.

All programming languages provide abstractions. It can be argued that the complexity of the problems you’re able to solve is directly related to the kind and quality of abstraction. By “kind” I mean, “What is it that you are abstracting?” Assembly language is a small abstraction of the underlying machine. Many so-called “imperative” languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language. These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve. The programmer must establish the association between the machine model (in the “solution space,” which is the place where you’re implementing that solution, such as a computer) and the model of the problem that is actually being solved (in the “problem space,” which is the place where the problem exists, such as a business).

The object-oriented approach goes a step further by providing tools for the programmer to represent elements in the problem space. This representation is general enough that the programmer is not constrained to any particular type of problem. We refer to the elements in the problem space and their representations in the solution space as “objects.” (You will also need other objects that don’t have problem-space analogs.) The idea is that the program is allowed to adapt itself to the lingo of the problem by adding new types of objects, so when you read the code describing the solution, you’re reading words that also express the problem. This is a more flexible and powerful language abstraction than what we’ve had before. Thus, OOP allows you to describe the problem in terms of the problem, rather than in terms of the computer where the solution will run. There’s still a connection back to the computer: each object looks quite a bit like a little computer—it has a state, and it has operations that you can ask it to perform. However, this doesn’t seem like such a bad analogy to objects in the real world—they all have characteristics and behaviors.

Java is making possible the rapid development of versatile programs for communicating and collaborating on the Internet. We're not just talking word processors and spreadsheets here, but also applications to handle sales, customer service, accounting, databases, and human resources--the meat and potatoes of corporate computing.
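To ground the “little computer” description above (an object bundling state with the operations you can ask of it), here is a minimal Java sketch; the Light class and its members are invented for illustration and are not taken from the book's own examples:

// A problem-space element ("a light") represented directly as an object:
// it bundles state (on/off, brightness) with the operations callers may request.
public class Light {
    private boolean on = false;   // state
    private int brightness = 0;   // state

    public void switchOn()  { on = true; brightness = 100; }   // operation
    public void switchOff() { on = false; brightness = 0; }    // operation

    public void dim(int level) {
        // Only a lit lamp can be dimmed; clamp the level to 0..100.
        if (on) {
            brightness = Math.max(0, Math.min(100, level));
        }
    }

    public boolean isOn() { return on; }
    public int getBrightness() { return brightness; }

    public static void main(String[] args) {
        Light reading = new Light();   // each instance is its own "little computer"
        reading.switchOn();
        reading.dim(40);
        System.out.println("on=" + reading.isOn()
                + ", brightness=" + reading.getBrightness());
    }
}

Each Light instance carries its own state, and callers interact with it only through the operations it exposes, which is exactly the state-plus-operations view of an object described above.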
Java is also making possible a controversial new class of cheap machines called network computers, or NCs, which Sun, IBM, Oracle, Apple, and others hope will proliferate in corporations and our homes.

The way Java works is simple. Unlike ordinary software applications, which take up megabytes on the hard disk of your PC, Java applications, or "applets," are little programs that reside on centralized servers; the network delivers them to your machine only when you need them. Because the applets are so much smaller than conventional programs, they don't take forever to download.

Say you want to check out the sales results from the southwest region. You'll use your Internet browser to find the corporate website that dishes up financial data and, with a mouse click or two, ask for the numbers. The server will zap you not only the data, but also the sales-analysis applet you need to display it. The numbers will pop up on your screen in a Java spreadsheet, so you can noodle around with them immediately rather than hassle with importing them into your own spreadsheet program.
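As a concrete illustration of that delivery model, here is a minimal sketch using the classic java.applet API; the SalesApplet class, its parameter name, and the displayed text are assumptions made for this example, not details from the article:

// Minimal applet sketch: the compiled .class file lives on the web server and
// is downloaded and run by the client's browser JVM only when the page needs it.
import java.applet.Applet;
import java.awt.Graphics;

public class SalesApplet extends Applet {
    private String region;   // hypothetical parameter passed in from the page

    @Override
    public void init() {
        // Parameters arrive from the embedding HTML page's applet tag.
        region = getParameter("region");
        if (region == null) {
            region = "southwest";
        }
    }

    @Override
    public void paint(Graphics g) {
        // Render directly into the browser's page area.
        g.drawString("Sales results for region: " + region, 20, 20);
    }
}

The browser fetches the class from the server when it encounters the page's applet tag, so nothing has to be installed on the client beforehand, which is the point the passage above is making.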
Undergraduate Graduation Project (Thesis): Translation of Foreign Material (Class of 2011)

Translation title: Java Development 2.0: Sharding with Hibernate Shards
I. Translation of the Foreign Material: Java Development 2.0: Sharding with Hibernate Shards

Scaling the relational database horizontally

Andrew Glover, author and developer, Beacon50

Summary: Sharding isn't for every web site, but it is one approach that can meet the demands of big data.
For some shops, sharding means being able to keep a trusted RDBMS without sacrificing data scalability or system performance.
In this installment of the Java development 2.0 series, you'll learn when sharding works and when it doesn't, and then get started sharding a simple application capable of handling terabytes of data.
Date: 31 August 2010. Level: Intermediate.

When a relational database attempts to store terabytes of data in a single table, overall performance usually degrades.
Indexing all of that data is obviously time-consuming, for writes as well as reads.
NoSQL datastores are especially well suited to storing large amounts of data, but NoSQL is a non-relational database approach.
For developers who prefer the ACID-ity and entity structure of a relational database, and for projects that require that structure, sharding is an exciting alternative.
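Before the article's own walkthrough begins, a minimal configuration sketch may help fix ideas. It follows the published Hibernate Shards API (ShardedConfiguration, ShardStrategyFactory, and the stock round-robin/all-shards strategies), but the two per-shard configuration file names and the strategy choices here are assumptions for illustration, not details taken from this article's sample application:

// A minimal sketch of building a sharded SessionFactory with Hibernate Shards.
// Assumes two hypothetical per-shard config files, shard0.hibernate.cfg.xml and
// shard1.hibernate.cfg.xml, each pointing at its own physical database.
import java.util.ArrayList;
import java.util.List;

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.hibernate.shards.ShardId;
import org.hibernate.shards.ShardedConfiguration;
import org.hibernate.shards.cfg.ConfigurationToShardConfigurationAdapter;
import org.hibernate.shards.cfg.ShardConfiguration;
import org.hibernate.shards.loadbalance.RoundRobinShardLoadBalancer;
import org.hibernate.shards.strategy.ShardStrategy;
import org.hibernate.shards.strategy.ShardStrategyFactory;
import org.hibernate.shards.strategy.ShardStrategyImpl;
import org.hibernate.shards.strategy.access.SequentialShardAccessStrategy;
import org.hibernate.shards.strategy.resolution.AllShardsShardResolutionStrategy;
import org.hibernate.shards.strategy.selection.RoundRobinShardSelectionStrategy;

public class ShardedFactoryBuilder {

    public SessionFactory createSessionFactory() {
        // The prototype configuration carries the mappings shared by all shards.
        Configuration prototype = new Configuration().configure("shard0.hibernate.cfg.xml");

        // One ShardConfiguration per physical database.
        List<ShardConfiguration> shardConfigs = new ArrayList<ShardConfiguration>();
        shardConfigs.add(buildShardConfig("shard0.hibernate.cfg.xml"));
        shardConfigs.add(buildShardConfig("shard1.hibernate.cfg.xml"));

        ShardedConfiguration shardedConfig =
                new ShardedConfiguration(prototype, shardConfigs, buildStrategyFactory());
        return shardedConfig.buildShardedSessionFactory();
    }

    private ShardConfiguration buildShardConfig(String resource) {
        Configuration config = new Configuration().configure(resource);
        return new ConfigurationToShardConfigurationAdapter(config);
    }

    private ShardStrategyFactory buildStrategyFactory() {
        return new ShardStrategyFactory() {
            public ShardStrategy newShardStrategy(List<ShardId> shardIds) {
                // Round-robin new objects across shards; resolve and run
                // queries against all shards, one shard at a time.
                RoundRobinShardLoadBalancer balancer =
                        new RoundRobinShardLoadBalancer(shardIds);
                return new ShardStrategyImpl(
                        new RoundRobinShardSelectionStrategy(balancer),
                        new AllShardsShardResolutionStrategy(shardIds),
                        new SequentialShardAccessStrategy());
            }
        };
    }
}

With the returned SessionFactory, application code opens sessions and saves or queries entities as usual; the shard-selection strategy decides which database receives each new object, and queries fan out across all the shards.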
Database

A database may be defined as a collection of interrelated data stored together, with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use them, and a common, controlled approach is used in adding new data and in modifying and retrieving existing data within the database. A system is said to contain a collection of databases if they are entirely separate in structure.

Restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a database can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation. The term data independence is often quoted as one of the main attributes of a database: it implies that the data and the programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, database organization is concerned with questions such as how and where the data are stored.

A database used for many applications can have multiple interconnections among the items it describes; these items are referred to as entities. An entity may be a tangible object, or it may be intangible, provided it has various properties which we may wish to record; entities describe the real world. A data item represents an attribute, and each attribute must be associated with the relevant entity. For each relevant entity we assign values to its attributes; one attribute has a special significance in that it identifies the entity.

A logical data description is called a model. We must distinguish between a record type and a record occurrence: when we talk about "all personnel records," for example, we really mean a record type, not a record type combined with its data values. A schema is an overall chart of the data-item types and record types stored in a database, while the term subschema refers to an application programmer's view of the data; many different subschemas can be derived from one schema.
The schema and the subschema are both used by the database management system (DBMS), the primary function of which is to serve the application programs by executing their data operations. A DBMS will usually be handling multiple data calls concurrently, so it must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema, and most likely some of the details regarding the implementation of the conceptual schema by the physical schema. We describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine, fairly automatically, an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small databases, many databases involve millions of bytes, and an inefficient implementation can be disastrous.

The hierarchical and network structures have been used for DBMSs since the 1960s; the relational structure was introduced in the early 1970s. In the relational model, two-dimensional tables represent the entities and their relationships: every table represents an entity, and relationships between entities are represented by common columns containing values from a domain, or range, of possible values. The end user is presented with a simple data model, and his or her requests do not reflect any complexities due to system-oriented aspects: a relational data model is what the user sees, but it is not necessarily what will be implemented physically.

The relational data model removes the details of storage structure and access strategy from the user interface, and so provides a relatively high degree of data independence. To make use of this property of the relational data model, however, the design of the relations must be complete and accurate. Although some DBMSs based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.
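To make the idea of relationships carried by common columns concrete, here is a small, hedged JDBC sketch; the DEPT and EMPLOYEE tables, their columns, and the in-memory H2 connection URL are all invented for illustration (any RDBMS with a JDBC driver would serve equally well):

// Two hypothetical tables share a DEPT_ID column; that shared column alone
// carries the relationship between the EMPLOYEE and DEPT entities.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RelationalModelDemo {
    public static void main(String[] args) throws Exception {
        // The URL is a placeholder; it assumes the H2 driver is on the classpath.
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = con.createStatement()) {

            st.execute("CREATE TABLE DEPT (DEPT_ID INT PRIMARY KEY, NAME VARCHAR(30))");
            st.execute("CREATE TABLE EMPLOYEE (EMP_ID INT PRIMARY KEY, "
                     + "NAME VARCHAR(30), DEPT_ID INT REFERENCES DEPT(DEPT_ID))");
            st.execute("INSERT INTO DEPT VALUES (10, 'SALES')");
            st.execute("INSERT INTO EMPLOYEE VALUES (1, 'SMITH', 10)");

            // The join re-derives the entity relationship from the common column.
            try (ResultSet rs = st.executeQuery(
                    "SELECT e.NAME, d.NAME FROM EMPLOYEE e "
                  + "JOIN DEPT d ON e.DEPT_ID = d.DEPT_ID")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " works in " + rs.getString(2));
                }
            }
        }
    }
}

Note that nothing about the physical storage of either table appears in the query; the user works purely with the tabular model, which is the data-independence point made above.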