Foreign Literature Translation --- ADO.NET Technology
Sample ACM paper templates. The ACM is the most influential professional academic organization in its field worldwide.
Are you familiar with the ACM template? Below, ___ has compiled two ACM papers to give you a concrete impression of the template. [Abstract] Given the notable role of the ACM International Collegiate Programming Contest (ACM/ICPC) in talent selection and cultivation, how to embed ACM/ICPC contest activities into regular teaching, innovate teaching models, integrate them with professional instruction, strengthen training management, and improve training effectiveness has become a question of wide concern.
To address this need, this paper designs and develops a collegiate programming training management system based on the ACM/ICPC mechanism.
The system adopts a B/S (browser/server) architecture, with SQL Server xx as the back-end database and Visual Studio and ASP.NET as the front-end development tools.
Building on an analysis of the system's functions, the paper focuses on the key technologies involved in its design and implementation.
The system runs stably and reliably in practice and provides an effective management approach for conducting ACM/ICPC contest training and teaching.
[Keywords] ACM/ICPC; training management system; Web development; database technology
doi: 10.3969/j.issn.1673-0194.xx.03.015  [CLC number] TP311  [Document code] A  [Article ID] 1673-0194(xx)03-0028-03
1 Introduction
The ACM International Collegiate Programming Contest (ACM ICPC), sponsored by the Association for Computing Machinery (ACM), began in 1970 and now has a history of more than 40 years. It is recognized worldwide as the largest, highest-level, and most influential international collegiate programming contest, and its winners are sought after and given hiring priority by major IT companies and research institutes [1].
In recent years, as the ACM/ICPC collegiate programming contest has flourished in China, the computing community has paid increasing attention, in terms of talent cultivation, to how to introduce and draw on ACM/ICPC contest training in a scientific and reasonable way, combine ACM/ICPC contest activities organically with regular professional course teaching, and break through traditional teaching content and methods so as to effectively cultivate students' learning ability, innovation awareness, and overall competence.
Within this effort, how to effectively organize ACM/ICPC contest training, strengthen training management, and improve training effectiveness is also a topic of keen interest.
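As a minimal sketch of the kind of back-end data access such a browser/server training management system might perform (assuming the SQL Server database mentioned in the abstract; the connection string, the Submissions table, and its columns are hypothetical and not taken from the paper):

```csharp
using System;
using System.Data.SqlClient;

class TrainingQueries
{
    // Count accepted submissions per student for one contest.
    // Table and column names are illustrative only.
    static void Main()
    {
        const string connStr = "Server=localhost;Database=AcmTraining;Integrated Security=true;";
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT StudentId, COUNT(*) FROM Submissions " +
            "WHERE ContestId = @cid AND Verdict = 'AC' GROUP BY StudentId", conn))
        {
            cmd.Parameters.AddWithValue("@cid", 42);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1} accepted",
                        reader.GetInt32(0), reader.GetInt32(1));
            }
        }
    }
}
```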
Combining ZigBee technology with agricultural production practice, this paper proposes the design of an agricultural wireless temperature and humidity sensor network based on ZigBee.
We use the CC2530, a ZigBee-protocol chip, for the sensor nodes and the coordinator node that handle data collection, transmission, and display, with the goal of automating agricultural production and enabling precision agriculture.
Keywords: agriculture, production, temperature and humidity, wireless network, sensor.
1. Introduction
At present, many aspects of production and daily life require acquiring and processing temperature and humidity information from the surrounding environment. The traditional approach was to collect readings from temperature and humidity sensors and forward the data to a monitoring center over an RS-485 bus or a fieldbus, which required laying large amounts of cable. Traditional agriculture mainly relies on isolated mechanical equipment with no communication capability and on people to monitor crop growth. With ZigBee wireless sensor network technology, however, agriculture can gradually shift to an information-driven production model, with more automated, networked, and intelligent farming methods and remote wireless control of equipment. Sensors can collect information such as soil moisture, nitrogen concentration, pH, precipitation, temperature, air humidity, and air pressure. The collected information, together with the locations where it was gathered, is transmitted over the ZigBee network to a central control device for decision-making and reference, so problems can be identified early and accurately to help maintain and improve crop yields.
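As a hedged illustration of the host-side display step mentioned above (not part of the original paper), the sketch below assumes the CC2530 coordinator forwards readings to a PC over a serial port as simple text lines such as `node=3,temp=24.5,hum=61.2`; the port name, baud rate, and frame format are all assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.IO.Ports;

class CoordinatorReader
{
    static void Main()
    {
        // COM3 and 115200 baud are assumptions; match the actual coordinator configuration.
        using (var port = new SerialPort("COM3", 115200))
        {
            port.NewLine = "\n";
            port.Open();
            while (true)
            {
                // Assumed frame format: "node=3,temp=24.5,hum=61.2"
                string line = port.ReadLine().Trim();
                var fields = new Dictionary<string, string>();
                foreach (string part in line.Split(','))
                {
                    string[] kv = part.Split('=');
                    if (kv.Length == 2) fields[kv[0]] = kv[1];
                }
                if (fields.ContainsKey("node") && fields.ContainsKey("temp") && fields.ContainsKey("hum"))
                    Console.WriteLine("node {0}: {1} C, {2} %RH",
                        fields["node"], fields["temp"], fields["hum"]);
            }
        }
    }
}
```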
For many data-oriented wireless transmission applications, low-cost, low-complexity wireless networks are widely used.
2. Technical characteristics of ZigBee
ZigBee is a short-range, low-complexity, low-power, low-data-rate, low-cost, two-way wireless communication technology. It is mainly used in automatic control and remote control and can be embedded in all kinds of devices to automate them [1]. Among existing wireless communication technologies, ZigBee offers the lowest power consumption and cost. Its data rate is low, in the range of 10 kb/s to 250 kb/s, and it targets low-rate transmission. In low-power standby mode, two ordinary AA batteries can last 6 to 24 months. Because the data rate is low and the protocol is simple, cost is greatly reduced. Network capacity is large: a single network can accommodate about 65,000 devices. Latency is short, typically 15 ms to 30 ms.
Deep Learning paper translation (Nature Deep Review)
Original paper: by Yann LeCun, Yoshua Bengio & Geoffrey Hinton, Nature, volume 521, pages 436-444 (28 May 2015). Translator: 零楚L ().
This paper is a review of deep learning. I originally only meant to take notes, but none of the translations I could find read smoothly, so since I had to work through the original anyway, I decided to translate it myself, aiming for accuracy and fluency. Please credit this source when reposting; of course, if you do not, there is little I can do about it.
The paper makes heavy use of terms that the authors seem to treat as standard but that are hard to render into suitable Chinese without ambiguity. The fixed renderings adopted here are: representation, rendered as "特征描述" (feature description); objective function/objective, rendered as "误差函数/误差" (error function/error). The latter literally means "objective/evaluation function", but in practice the objective is usually a cost function plus a regularization term (reference link: ), so rendering it as "error function" aids understanding without harming correctness. It may be presumptuous to write the following after translation choices of this caliber, but it should be fine: of all the modest disclaimers inviting criticism I have seen in books, the one by teacher Zhang Yu left the deepest impression on me: "I have no intention of hiding behind 'limited ability' as an excuse; I sincerely welcome criticism and correction."
So, let us begin.
Nature Deep Review
Abstract: Deep learning allows computational models composed of multiple processing layers to learn how to represent data at high levels of abstraction. These methods have markedly advanced the state of the art in several fields, including speech recognition, visual object recognition, and object detection, as well as many other domains such as drug discovery and genomics. Deep learning uses the backpropagation algorithm (commonly called BP) to indicate how a machine should change the parameters used to compute each layer's "feature description" (representation) from the representation produced by the preceding layer; in this way it can discover the intricate structure of a given data set.
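As a brief illustrative aside in standard textbook notation (not quoted from the review): for a layer l with weights W^(l), pre-activations z^(l) = W^(l) a^(l-1) + b^(l), activations a^(l) = f(z^(l)), error function E, and learning rate eta, backpropagation computes an error signal layer by layer and adjusts each layer's parameters by gradient descent:

```latex
\delta^{(L)} = \nabla_{a^{(L)}} E \odot f'\bigl(z^{(L)}\bigr), \qquad
\delta^{(l)} = \bigl(W^{(l+1)}\bigr)^{\top} \delta^{(l+1)} \odot f'\bigl(z^{(l)}\bigr),
```
```latex
\frac{\partial E}{\partial W^{(l)}} = \delta^{(l)} \bigl(a^{(l-1)}\bigr)^{\top}, \qquad
W^{(l)} \leftarrow W^{(l)} - \eta \, \frac{\partial E}{\partial W^{(l)}} .
```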
Deep convolutional networks have brought breakthroughs in processing images, video, speech, and audio, while recurrent networks have shown promise for sequential data such as text and speech.
文献信息:文献标题:Research Priorities for Robust and Beneficial Artificial Intelligence(稳健和有益的人工智能的研究重点)国外作者:Stuart Russell, Daniel Dewey, Max Tegmark文献出处:《Association for the Advancement of Artificial Intelligence》,2015,36(4):105-114字数统计:英文2887单词,16400字符;中文5430汉字外文文献:Research Priorities for Robust and Beneficial Artificial Intelligence Abstract Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.Keywords:artificial intelligence, superintelligence, robust, beneficial, safety, societyArtificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, the criterion for intelligence is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic representations and statistical learning methods has led to a large degree of integration and cross-fertilization between AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkablesuccesses in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is valuable to investigate how to reap its benefits while avoiding potential pitfalls.Short-term Research PrioritiesOptimizing AI’s Economic ImpactThe successes of industrial applications of AI, from manufacturing to information services, demonstrate a growing impact on the economy, although there is disagreement about the exact nature of this impact and on how to distinguish between the effects of AI and those of other information technologies. Many economists and computer scientists agree that there is valuable research to be done on how to maximize the economic benefits of AI while mitigating adverse effects, which could include increased inequality and unemployment (Mokyr 2014; Brynjolfsson and McAfee 2014; Frey and Osborne 2013; Glaeser 2014; Shanahan 2015; Nilsson 1984; Manyika et al. 2013). Such considerations motivate a range of research directions, spanning areas from economics to psychology. 
Below are a few examples that should by no means be interpreted as an exhaustive list.Labor market forecasting:When and in what order should we expect various jobs to become automated (Frey and Osborne 2013)? How will this affect the wages of less skilled workers, the creative professions, and different kinds of informationworkers? Some have have argued that AI is likely to greatly increase the overall wealth of humanity as a whole (Brynjolfsson and McAfee 2014). However, increased automation may push income distribution further towards a power law (Brynjolfsson, McAfee, and Spence 2014), and the resulting disparity may fall disproportionately along lines of race, class, and gender; research anticipating the economic and societal impact of such disparity could be useful.Other market disruptions: Significant parts of the economy, including finance, insurance, actuarial, and many consumer markets, could be susceptible to disruption through the use of AI techniques to learn, model, and predict human and market behaviors. These markets might be identified by a combination of high complexity and high rewards for navigating that complexity (Manyika et al. 2013).Policy for managing adverse effects:What policies could help increasingly automated societies flourish? For example, Brynjolfsson and McAfee (Brynjolfsson and McAfee 2014) explore various policies for incentivizing development of labor-intensive sectors and for using AI-generated wealth to support underemployed populations. What are the pros and cons of interventions such as educational reform, apprenticeship programs, labor-demanding infrastructure projects, and changes to minimum wage law, tax structure, and the social safety net (Glaeser 2014)? History provides many examples of subpopulations not needing to work for economic security, ranging from aristocrats in antiquity to many present-day citizens of Qatar. What societal structures and other factors determine whether such populations flourish? Unemployment is not the same as leisure, and there are deep links between unemployment and unhappiness, self-doubt, and isolation (Hetschko, Knabe, and Scho¨ b 2014; Clark and Oswald 1994); understanding what policies and norms can break these links could significantly improve the median quality of life. Empirical and theoretical research on topics such as the basic income proposal could clarify our options (Van Parijs 1992; Widerquist et al. 2013).Economic measures: It is possible that economic measures such as real GDP per capita do not accurately capture the benefits and detriments of heavily AI-and-automation-based economies, making these metrics unsuitable for policypurposes (Mokyr 2014). Research on improved metrics could be useful for decision-making.Law and Ethics ResearchThe development of systems that embody significant amounts of intelligence and autonomy leads to important legal and ethical questions whose answers impact both producers and consumers of AI technology. These questions span law, public policy, professional ethics, and philosophical ethics, and will require expertise from computer scientists, legal experts, political scientists, and ethicists. For example: Liability and law for autonomous vehicles: If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and self-driving cars best be realized (Vladeck 2014)? 
Should legal questions about AI be handled by existing (software-and internet-focused) ‘‘cyberlaw’’, or should they be treated separately (Calo 2014b)? In both military and commercial applications, governments will need to decide how best to bring the relevant expertise to bear; for example, a panel or committee of professionals and academics could be created, and Calo has proposed the creation of a Federal Robotics Commission (Calo 2014a).Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues? Should such trade-offs be the subject of national standards?Autonomous weapons: Can lethal autonomous weapons be made to comply with humanitarian law (Churchill and Ulfstein 2000)? If, as some organizations have suggested, autonomous weapons should be banned (Docherty 2012), is it possible to develop a precise definition of autonomy for this purpose, and can such a ban practically be enforced? If it is permissible or legal to use lethal autonomous weapons, how should these weapons be integrated into the existing command-and-control structure so that responsibility and liability remain associated with specific human actors? What technical realities and forecasts should inform these questions, and howshould ‘‘meaningful human control’’ over weapons be defined (Roff 2013, 2014; Anderson, Reisner, and Waxman 2014)? Are autonomous weapons likely to reduce political aversion to conflict, or perhaps result in ‘‘accidental’’ battles or wars (Asaro 2008)? Would such weapons become the tool of choice for oppressors or terrorists? Finally, how can transparency and public discourse best be encouraged on these issues?Privacy: How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy? How will privacy risks interact with cybersecurity and cyberwarfare (Singer and Friedman 2014)? Our ability to take full advantage of the synergy between AI and big data will depend in part on our ability to manage and preserve privacy (Manyika et al. 2011; Agrawal and Srikant 2000).Professional ethics:What role should computer scientists play in the law and ethics of AI development and use? Past and current projects to explore these questions include the AAAI 2008–09 Presidential Panel on Long-Term AI Futures (Horvitz and Selman 2009), the EPSRC Principles of Robotics (Boden et al. 2011), and recently announced programs such as Stanford’s One-Hundred Year Study of AI and the AAAI Committee on AI Impact and Ethical Issues.Long-term research prioritiesA frequently discussed long-term goal of some AI researchers is to develop systems that can learn from experience with human-like breadth and surpass human performance in most cognitive tasks, thereby having a major impact on society. 
If there is a non-negligible probability that these efforts will succeed in the foreseeable future, then additional current research beyond that mentioned in the previous sections will be motivated as exemplified below, to help ensure that the resulting AI will be robust and beneficial.VerificationReprising the themes of short-term research, research enabling verifiable low-level software and hardware can eliminate large classes of bugs and problems ingeneral AI systems; if such systems become increasingly powerful and safety-critical, verifiable safety properties will become increasingly valuable. If the theory of extending verifiable properties from components to entire systems is well understood, then even very large systems can enjoy certain kinds of safety guarantees, potentially aided by techniques designed explicitly to handle learning agents and high-level properties. Theoretical research, especially if it is done explicitly with very general and capable AI systems in mind, could be particularly useful.A related verification research topic that is distinctive to long-term concerns is the verifiability of systems that modify, extend, or improve themselves, possibly many times in succession (Good 1965; Vinge 1993). Attempting to straightforwardly apply formal verification tools to this more general setting presents new difficulties, including the challenge that a formal system that is sufficiently powerful cannot use formal methods in the obvious way to gain assurance about the accuracy of functionally similar formal systems, on pain of inconsistency via Go¨ del’s incompleteness (Fallenstein and Soares 2014; Weaver 2013). It is not yet clear whether or how this problem can be overcome, or whether similar problems will arise with other verification methods of similar strength.Finally, it is often difficult to actually apply formal verification techniques to physical systems, especially systems that have not been designed with verification in mind. This motivates research pursuing a general theory that links functional specification to physical states of affairs. This type of theory would allow use of formal tools to anticipate and control behaviors of systems that approximate rational agents, alternate designs such as satisficing agents, and systems that cannot be easily described in the standard agent formalism (powerful prediction systems, theorem-provers, limited-purpose science or engineering systems, etc.). It may also be that such a theory could allow rigorous demonstrations that systems are constrained from taking certain kinds of actions or performing certain kinds of reasoning.ValidityAs in the short-term research priorities, validity is concerned with undesirable behaviors that can arise despite a system’s formal correctness. In the long term, AIsystems might become more powerful and autonomous, in which case failures of validity could carry correspondingly higher costs.Strong guarantees for machine learning methods, an area we highlighted for short-term validity research, will also be important for long-term safety. To maximize the long-term value of this work, machine learning research might focus on the types of unexpected generalization that would be most problematic for very general and capable AI systems. In particular, it might aim to understand theoretically and practically how learned representations of high-level human concepts could be expected to generalize (or fail to) in radically new contexts (Tegmark 2015). 
Additionally, if some concepts could be learned reliably, it might be possible to use them to define tasks and constraints that minimize the chances of unintended consequences even when autonomous AI systems become very general and capable. Little work has been done on this topic, which suggests that both theoretical and experimental research may be useful.Mathematical tools such as formal logic, probability, and decision theory have yielded significant insight into the foundations of reasoning and decision-making. However, there are still many open problems in the foundations of reasoning and decision. Solutions to these problems may make the behavior of very capable systems much more reliable and predictable. Example research topics in this area include reasoning and decision under bounded computational resources as Horvitz and Russell (Horvitz 1987; Russell and Subramanian 1995), how to take into account correlations between AI systems’ behaviors and those of their environments or of other agents (Tennenholtz 2004; LaVictoire et al. 2014; Hintze 2014; Halpern and Pass 2013; Soares and Fallenstein 2014c), how agents that are embedded in their environments should reason (Soares 2014a; Orseau and Ring 2012), and how to reason about uncertainty over logical consequences of beliefs or other deterministic computations (Soares and Fallenstein 2014b). These topics may benefit from being considered together, since they appear deeply linked (Halpern and Pass 2011; Halpern, Pass, and Seeman 2014).In the long term, it is plausible that we will want to make agents that actautonomously and powerfully across many domains. Explicitly specifying our preferences in broad domains in the style of near-future machine ethics may not be practical, making ‘‘aligning’’ the values of powerful AI systems with our own values and preferences difficult (Soares 2014b; Soares and Fallenstein 2014a).SecurityIt is unclear whether long-term progress in AI will make the overall problem of security easier or harder; on one hand, systems will become increasingly complex in construction and behavior and AI-based cyberattacks may be extremely effective, while on the other hand, the use of AI and machine learning techniques along with significant progress in low-level system reliability may render hardened systems much less vulnerable than today’s. From a cryptographic perspective, it appears that this conflict favors defenders over attackers; this may be a reason to pursue effective defense research wholeheartedly.Although the topics described in the near-term security research section above may become increasingly important in the long term, very general and capable systems will pose distinctive security problems. In particular, if the problems of validity and control are not solved, it may be useful to create ‘‘containers” for AI systems that could have undesirable behaviors and consequences in less controlled environments (Yampolskiy 2012). Both theoretical and practical sides of this question warrant investigation. If the general case of AI containment turns out to be prohibitively difficult, then it may be that designing an AI system and a container in parallel is more successful, allowing the weaknesses and strengths of the design to inform the containment strategy (Bostrom 2014). The design of anomaly detection systems and automated exploit-checkers could be of significant help. 
Overall, it seems reasonable to expect this additional perspective – defending against attacks from ‘‘within” a system as well as from external actors – will raise interesting and profitable questions in the field of computer security.ControlIt has been argued that very general and capable AI systems operating autonomously to accomplish some task will often be subject to effects that increasethe difficulty of maintaining meaningful human control (Omohundro 2007; Bostrom 2012, 2014; Shanahan 2015). Research on systems that are not subject to these effects, minimize their impact, or allow for reliable human control could be valuable in preventing undesired consequences, as could work on reliable and secure test-beds for AI systems at a variety of capability levels.If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal (Omohundro 2007; Bostrom 2012) (and conversely, seeking unconstrained situations is sometimes a useful heuristic (Wissner-Gross and Freer 2013)). This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these changes. Systems that do not exhibit these behaviors have been termed corrigible systems (Soares et al. 2015), and both theoretical and practical work in this area appears tractable and useful. For example, it may be possible to design utility functions or decision processes so that a system will not try to avoid being shut down or repurposed (Soares et al. 2015), and theoretical frameworks could be developed to better understand the space of potential systems that avoid undesirable behaviors (Hibbard 2012, 2014, 2015).ConclusionIn summary, success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. The research agenda outlined in this paper, and the concerns that motivate it, have been called ‘‘anti-AI”, but we vigorously contest this characterization. It seems self-evident that the growing capabilities of AI are leading to an increased potential for impact on human society. It is the duty of AI researchers to ensure that the future impact is beneficial. We believe that this is possible, and hope that this research agenda provides a helpful step in the right direction.中文译文:稳健和有益的人工智能的研究重点摘要寻求人工智能的成功有可能为人类带来前所未有的好处,因此值得研究如何最大限度地利用这些好处,同时避免潜在危险。
Translated text: PART I. Optical Fiber Technology with Various Access
1 Mainstream optical networks
1.1 Optical fiber technology
Optical fiber production technology is mature and fiber is now manufactured in volume. Single-mode fiber with a zero-dispersion wavelength of λ0 = 1.3 μm is widely used today, and single-mode fiber with a zero-dispersion wavelength of λ0 = 1.55 μm has been developed and put into practical use; its attenuation at 1.55 μm is very low, about 0.22 dB/km, which makes it better suited to long-distance, high-capacity transmission and the preferred medium for long-haul backbone links. At present, to meet the requirements of different line types and local networks, new fiber types have been specified, such as non-zero dispersion-shifted fiber, low-dispersion-slope fiber, large-effective-area fiber, and low-water-peak fiber. Researchers working on long-wavelength optics believe that, in theory, repeaterless transmission over distances of several thousand kilometers is possible, but this remains at the theoretical stage.
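To make the 0.22 dB/km figure concrete, here is a simple illustrative link-budget calculation (the 80 km span length, 2 dB of connector and splice loss, and 0 dBm launch power are assumptions, not values from the text):

```latex
P_{rx} = P_{tx} - \underbrace{0.22\ \tfrac{\text{dB}}{\text{km}} \times 80\ \text{km}}_{17.6\ \text{dB fiber loss}} - 2\ \text{dB}
       = 0\ \text{dBm} - 19.6\ \text{dB} = -19.6\ \text{dBm},
```

a level that typical long-haul receivers can detect with margin to spare.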
1.2 Optical fiber amplifiers
The erbium-doped fiber amplifier (EDFA), operating in the 1550 nm window, can act as a repeater for digital, analog, and coherent optical communication at different transmission rates, and it can amplify optical signals at specific wavelengths. When a fiber network is upgraded from analog to digital signals or from low to high bit rates, or is expanded using optical multiplexing, the EDFA circuits and equipment do not need to be changed. An EDFA can serve as a preamplifier in front of an optical receiver, as a booster (post-) amplifier after an optical transmitter, and as a line amplifier compensating for loss along the link.
1.3 Broadband access
A variety of broadband access solutions are available for enterprise and residential customers in different environments. An access system mainly performs three functions: high-speed transmission, multiplexing/routing, and network extension.
Among today's mainstream access technologies, ADSL can economically deliver several megabits per second over twisted copper pairs, supporting both traditional voice service and data-oriented Internet access; at the central-office end, the ADSL access multiplexer aggregates the data traffic and routes it onto packet networks, while voice traffic is carried to the PSTN, ISDN, or other packet networks.
Cable modems provide high-speed data communication over HFC networks, dividing the coaxial-cable bandwidth into upstream and downstream channels; they can offer services such as VOD (video-on-demand) entertainment and Internet access while also supporting PSTN service.
Fixed wireless access systems use many advanced technologies, such as smart antennas and sophisticated receivers, and represent an innovative approach to access; for now, however, they remain the least settled access technology, with their role left to further exploration in practice.
Translation of a foreign-language article on the Internet of Things. China's Internet of Things (IoT) market is developing rapidly and has attracted the attention of many researchers and enterprises. To better understand IoT development trends and technology applications, we translated a foreign-language article on the IoT; the translation follows.
The Internet of Things (IoT) is a system that connects physical objects to the network through wireless sensor networks and Internet technology. It enables communication and information sharing between objects and, in turn, intelligent management and control. The emergence of the IoT will bring many new business opportunities and technical challenges.
One of the core technologies of the IoT is the wireless sensor network. A wireless sensor network consists of a large number of sensor nodes that sense and collect information about objects and transmit it into the IoT over wireless links. These sensor nodes are small, low-power, and low-cost, and can be deployed widely in a variety of environments.
Through the IoT we can realize application scenarios such as the smart home, intelligent transportation, and smart healthcare. For example, a smart home can support remote control of appliances and automatic regulation of the indoor environment; intelligent transportation can provide traffic-flow monitoring and vehicle positioning and navigation; smart healthcare can provide remote health monitoring and remote supervision of medical equipment.
IoT development still faces technical challenges as well as privacy and security issues. First, the IoT requires large-scale sensor networks and supporting infrastructure, which poses challenges for network expansion and management. Second, the IoT involves large volumes of data transmission and processing, placing demands on network bandwidth and computing capacity. In addition, the IoT touches on user privacy and data security, so effective security measures are needed to protect users' information.
In practice the IoT has already produced results. For example, the smart grid enables intelligent management of power transmission and distribution, improving the efficiency of electricity use; smart agriculture enables remote monitoring of crops and automated irrigation; smart health enables real-time monitoring of, and early warning about, a person's health.
Overall, as an emerging technology and application area, the IoT has enormous potential and promising prospects. As the technology advances and application scenarios expand, it is expected to deliver more commercial applications and social benefits. The above is the translation of the foreign-language IoT article; we hope it helps your understanding of the IoT. As a fast-developing field, the IoT still offers many related research directions and applications to be explored and developed further.
附 录 一、 英文原文 A Brief Overview of ad hoc Networks: Challenges and Directions One of the most vibrant and active “new” fields today is that of ad hoc networks. Significant research in this area has been ongoing for nearly 30 years, also under the names packet radio or multi-hop networks. ad hoc network is a (possibly mobile) collection of communications devices (nodes) that wish to communicate, but have no fixed infrastructure available, and have no pre-determined organization of available links. Individual nodes are responsible for dynamically discovering which other nodes they can directly communicate with. Ad hoc networking is a multi-layer problem. The physical layer must adapt to rapid changes in link characteristics. The multiple access control (MAC) layer needs to minimize collisions, allow fair access, and semi-reliably transport data over the shared wireless links in the presence of rapid changes and hidden or exposed terminals. The network layer needs to determine and distribute information used to calculate paths in a way that maintains efficiency when links change often and bandwidth is at a premium. It’s also needs to integrate smoothly with traditional, non ad hoc-aware internetworks and perform functions such as auto-configuration in this changing environment. The transport layer must be able to handle delay and packet loss statistics that are very different than wired networks. Finally, applications need to be designed to handle frequent disconnection and reconnection with peer applications as well as widely varying delay and packet loss characteristics. Ad hoc networks are suited for use in situations where infrastructure is either not available, not trusted, or should not be relied on in times of emergency. A few examples include: military solders in the field; sensors scattered throughout a city for biological detection; an infrastructureless network of notebook computers in a conference or campus setting; the forestry or lumber industry; rare animal tracking; space exploration; undersea operations; and temporary offices such as campaign headquarters. History The history of ad hoc networks can be traced back to 1972 and the DoD-sponsored Packet Radio Network (PRNET), which evolved into the Survivable Adaptive Radio Networks(SURAN) program in the early 1980s [l]. The goal of these programs was to provide packetswitched networking to mobile battlefield elements in an infrastructureless, hostile environment (soldiers, tanks, aircraft, etc., forming the nodes in the network). In the early 1990s a spate of new developments signaled a new phase in ad hoc networking. Notebook computers became popular, as did open-source software, and viable communications equipment based on RF and infrared. The idea of an infrstructureless collection of mobile hosts was proposed in two conference papers [2,3], and the IEEE 802.11 subcommittee adopted the term “ad hoc networks.” The concept of commercial (non-military) ad hoc networking had arrived. Other novel non-military possibilities were suggested (as mentioned in the introduction), and interest grew. At around the same time, the DoD continued from where it left off, funding programs such as the Global Mobile Information Systems(GloMo), and the Near-term Digital Radio(NTDR). The goal of GloMo was to provide office-environment Ethernet-type multimedia connectivity anytime, anywhere, in handheld devices. Channel access approaches were now in the CSMA/CA and TDMA molds, and several novel routing and topology control schemes were developed. 
The NTDR used clustering and linkstate routing, and self-organized into a two-tier ad hoc network. Now used by the US Army,NTDR is the only “real” (non-prototypical) ad hoc network in use today. Spurred by the growing interest in ad hoc networking, a number of standards activities and commercial standards evolved in the mid to late’90s.Within the IETF, the Mobile Ad hoc Networking(MANET) working group was horn, and sought to standardize routing protocols for ad hoc networks. The development of routing within the MANET working group and the larger community forked into reactive (routes ondemand) and proactive (routes ready-to-use) routing protocols [4]. The 802.11 subcommittee standardized a medium access protocol that was based on collision avoidance and tolerated hidden terminals, making it usable, if not optimal,for building mobile ad hoc network prototypes out of notebooks and 802.11 PCMCIA cards.HIPERLAN and Bluetooth were some other standards that addressed and benefited ad hoc networking. Open Problems Despite the long history of ad hoc networking, there are still quite a number of problems that are open. Since ad hoc networks do not assume the availability of a fixed infrastructure, it follows that individual nodes may have to rely on portable, limited power sources. The idea of energy-efficiency therefore becomes an important problem in ad hoc networks. Surprisingly,there has been little published work in the area of energy-efficiency of ad hoc networks until fairly recently. Most existing solutions for saving energy in ad hoc networks revolve around the reduction of power used by the radio transceiver. At the MAC level and above, this is often done by selectively sending the receiver into a sleep mode, or by using a transmitter with variable output power (and proportionate input power draw) and selecting routes that require many short hops, instead of a few longer hops [8]. The ability of fixed, wireless networks to satisfy quality of service (QoS) requirements is another open problem. Ad hoc networks further complicate the known QoS challenges in wireline networks with RF channel characteristics that often
外文参考THE DESIGN AND IMPLEMENTATION OF ANE-COMMERCESITE FOR ONLINE BOOK SALESSwapnaKodaliAbstractThe business-to-consumer aspect of electronic commerce (e-commerce) is the most visible business use of the World Wide Web. The primary goal of an e-commerce site is to sell goods and services online.This project deals with developing an e-commerce website for Online Book Sale. It provides the user with a catalog of different books available for purchase in the store.In order to facilitate online purchase a shopping cart is provided to the user. The system is implemented using a 3-tier approach, with a backend database, a middle tier of Microsoft Internet Information Services (IIS) and , and a web browser as the front end client.In order to develop an e -commerce website, a number of Technologies must be studied and understood. These include multi -tiered architecture, server and client side scripting techniques, implementation technologies such as , programming language (such as C#, ), relational databases (such as MySQL, Access).This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart application and also to know about the technologies used to develop such an application.This document will discuss each of the underlying technologies to create and implement an e-commerce website.ACKNOWLEDGMENTSIn completing this graduate project I have been fortunate to have help, support and encouragement from many people. I would like to acknowledge them for their cooperation.First, I would like to thank Dr.HosseinHakimzadeh, my project advisor, for guiding me through each and every step of the process with knowledge and support. Thank you for your advice, guidance and assistance.I would also like to thank Dr.ShafiiMousavi and Dr.DanaVrajitoru, my project committee members, who showed immense patience and understanding throughoutthe project and provided suggestions.Finally, I would like to dedicate this project to my parents, my husband Ram and my friends Kumar and Soumya, for their love, encouragement and help throughout the project.1. IntroductionE-commerce is fast g aining ground as an accepted and used business paradigm. More and more business houses are implementing web sites providing functionality forperforming commercial transactions over the web. It is reasonable to say that the process of shopping on the web is becoming commonplace.The objective of this project is to develop a general purpose e -commerce store where any product (such as books, CDs, computers, mobile phones, electronic items, and home appliances) can be bought from the comfort of home through the Internet. However, for implementation purposes, this paper will deal with an online book store.An online store is a virtual store on the Internet where customers can browse the catalog and select products of interest. The selected items may be collected in a shopping cart. At checkout time, the items in the shopping cart will be presented as an order. At that time, more information will be needed to complete the transaction. Usually, the customer will be asked to fill or select a billing address, a shipping address, a shipping option, and payment information such as credit card number. An e -mail notification is sent to the customer as soon as the order is placed.2. 
Literature ReviewElectronic Commerce (e-commerce) applications support the interaction between different parties participating in a commerce transaction via the network, as well as the management of the data involved in the process .The increasing importance of e -commerce is apparent in the study conducted by researches at the GVU (Graphics, Visualization, and Usability) Center at the Georgia Institute of Technology. In their summary of the findings from the eighth survey, the researchers report that “e-commerce is taking off both in terms of the number of users shopping as well as the total amount people are spending via Internet based transactions”.Over three quarters of the 10,000 respondents report having purchased items online. The most cited reason for using the web for personal shopping was convenience (65%), followed by availability of vendor information (60%), no pressure form sales person (55%) and saving time (53%).Although the issue of security remains the primary reason why more people do not purchase items online, the GV A survey also indicates that faith in the security of e-commerce is increasing. As more people gain confidence in current encryption technologies, more and more users can be expected to frequently purchase items online .A good e-commerce site should present the following factors to the customers for better usability :·Knowing when an item was saved or not saved in the shopping cart.·Returning to different parts of the site after adding an item to the shopping cart.·Easy scanning and selecting items in a list.·Effective categorical organization of products.·Simple navigation from home page to information and order links for specific products.·Obvious shopping links or buttons.·Minimal and effective security notifications or messages.·Consistent layout of product information.Another important factor in the design of an e-commerce site is feedback . The interactive cycle between a user and a web site is not complete until the web site responds to a command entered by the user. According to Norman, "feedback--sendingback to the user information about what action has actually been done, what result has been accomplished--is a well known concept in the science of control and information theory. Imagine trying to talk to someone when you cannot even hear your own voice, or trying to draw a picture with a pencil that leaves no mark: there would be no feedback".Web site feedback often consists of a change in the visual or verbal information presented to the user. Simple examples include highlighting a selection made by the user or filling a field on a form based on a user's selection from a pull down list. Another example is using the sound of a cash register to confirm that a product has been added to an electronic shopping cart. Completed orders should be acknowledged quickly. This may be done with an acknowledgment or fulfillment page. The amount of time it takes to generate and download this page, however, is a source of irritation for many e-commerce users. Users are quick to attribute meaning to events. A blank page, or what a user perceives to be "a long time" to receive an acknowledgment, may be interpreted as "there must be something wrong with the order." If generating an acknowledgment may take longer than what may be reasonably expected by the user, then the design should include intermediate feedback to the user indicating the progress being made toward acknowledgment or fulfillment.Finally, feedback should not distract the user. 
Actions and reactions made by the web site should be meaningful. Feedback should not draw the user's attention away from the important tasks of gathering information, selecting products, and placing orders.3. Project DesignIn order to design a web site, the relational database must be designed first. Conceptual design can be divided into two parts: The data model and the process model. The data model focuses on what data should be stored in the database while the process model deals with how the data is processed. To put this in the context of the relational database, the data model is used to design the relational tables. The process model is used to design the queries that will access and perform operations on those tables.4. Implementation TechnologiesThe objective of this project is to develop an online book store. When the user types in the URL of the Book Store in the address field of the browser, a Web Server is contacted to get the requested information. In the .NET Framework, IIS (Internet Information Service) acts as the Web Server. The sole task of a Web Server is to accept incoming HTTP requests and to return the requested resource in an HTTP response. The first thing IIS does when a request comes in is to decide how to handle the request. Its decision is based upon the requested file's extension. For example, if the requested file has the .asp extension, IIS will route the request to be handled by asp.dll. If it has the extens ion of .aspx, .ascx, etc, it will route the request to be handled by Engine.Figure 21 Relation between IIS and The Engine then gets the requested file, and if necessary contacts the database through for the required file and then the information is sent back to the Client’s browser. Figure 21 shows how a client browser interacts with the Web server and how the Web server handles the request from client.4.1 is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. has many advantages – both for programmers and for the end users because it is compatible with the .NET Framework. This compatibility allows the users to use the following features through :a) Powerful database-driven functionality: allows programmers to develop web applications that interface with a database. The advantage of is that it is object-oriented and has many programming tools that allow for faster development and more functionality.b) Faster web applications: Two aspects of make it fast -- compiled code and caching. In the code is compiled into "machine language" before a visitor ever comes to the website. Caching is the storage of information in memory for faster access in the future. allows programmers to set up pages or areas of pages that are commonly reused to be cached for a set period of time to improve the performance of web applications. In addition, allows the caching of data from a database so the website is not slowed down by frequent visits to a database when the data does not change very often.c) Memory leak and crash protection: automatically recovers from memory leaks and errors to make sure that the website is always available to the visitors. also supports code written in more than 25 .NET languages (including , C#, and ). This is achieved by the Common Language Runtime (CLR) compiler that supports multiple languages.4.3 MySQL DatabaseIn this project, MySQL is used as the backend database. MySQL is an open -source database management system. 
The features of MySQL are given below: ·MySQL is a relational database management system. A relational database stores information in different tables, rather than in one giant table. These tables can be referenced to each other, to access and maintain data easily.·MySQL is open source database system. The database software can be used and modify by anyone according to their needs.·It is fast, reliable and easy to use. To improve the performance, MySQL is multi -threaded database engine. A multithreaded application performs many tasks at the same time as if multiple instances of that application were runningsimultaneously.In being multithreaded MySQL has many advantages. A separate thread handles each incoming connection with an extra thread that is always running to manage the connections. Multiple clients can perform read operations simultaneously, but while writing, only hold up another client that needs access to the data being updated. Even though the threads share the same process space, they execute individually and because of this separation, multiprocessor machines can spread the thread across many CPUs as long as the host operating system supports multiple CPUs. Multithreading is the key feature to support MySQL’s performance design goals. It is the core feature around which MySQL is built.MySQL database is connected to using an ODBC driver. Open Database Connectivity (ODBC) is a widely accepted application-programming interface (API) for database access. The ODBC driver is a library that implements the functions supported by ODBC API. It processes ODBC function calls, submits SQL requests to MySQL server, and returns results back to the application. If necessary, the driver modifies an application's request so that the request conforms to syntax supported by MySQL.4.4 Integrating the Website and DatabaseCustomers ordering from an e-commerce website need to be able to get information about a vendor’s products and services, ask questions, select items they wish to purchase, and submit payment information. Vendors need to be able to track customer inquiries and preferences and process their orders. So a well organized database is essential for the development and maintenance of an e-commerce site .In a static Web page, content is determined at the time when the page is created. As users access a static page, the page always displays the same information. Example of a static Web page is the page displaying company information. In a dynamic Web page, content varies based on user input and data received from external sources. We use the term “data-based Web pages” to refer to dynamic Web pages deriv ing some or all of their content from data files or databases.A data-based Web page is requested when a user clicks a hyperlink or the submit button on a Web page form. If the request comes from clicking a hyperlink, the link specifies either a Web server program or a Web page that calls a Web server program. In some cases, the program performs a static query, such as “Display all items from the Inventory”. Although this query requires no user input, the results vary depending on when the query is made. If the request is generated when the user clicks a form’s submit button, instead of a hyperlink, the Web server program typically uses the form inputs to create a query. For example, the user might select five books to be purchased and then submit the input to the Web server program. 
The Web server program then services the order, generating a dynamic Web page response to confirm the transaction.In either case, the Web server is responsible for formatting the query results by adding HTML tags. The Web server program then sends the program’s output back to the client’s browser as a Web page.5. Web Page Programming OptionsAn e-commerce organization can create data-based Web pages by using server-side and client-side processing technologies or a hybrid of the two. With server-side processing, the Web server receives the dynamic Web page request, performs all processing necessary to create the page, and then sends it to the client for display in the client’s browser. Client-side processing is done on the client workstation by having the client browser execute a program that interacts directly with the database.5.1 Server -side processing.Generally dynamic or data-driven Web pages use HTML forms to collect user inputs, submitting them to a Web server. A program running on the server processes the form inputs, dynamically composing a Web page reply. This program, which is called, servicing program, can be either a compiled executable program or a script interpreted into machine language each time it is run.Compiled server programs. When a user submits HTML- form data for processing by a compiled server program, the Web Server invokes the servicing program. The servicing program is not part of the Web server but it is an independent executable program running on the Web server; it processes the user input, determines the action which must be taken, interacts with any external sources (Eg: database) and finally produces an HTML document and terminates. The Web server then sends the HTML document back to the user’s browser where it is displayed. Figure 23 shows the flow of HTTP request from the client to the Web server, which is sent to the servicing program. The program creates an HTML document to be sent to the client browser.Popular languages for creating compiled server programs are Java, Visual Basic, and C++, but almost any language that can create executable programs can be used, provided that it supports commands used by one of the protocols that establish guidelines for communication between Web servers and servicing programs. The first such protocol, introduced in 1993, for use with HTML forms was the Common Gateway Interface (CGI); many servicing programs on Web sites still use CGI programs. However, a disadvantage of using CGI-based servicing programs is that each form submitted to a Web server starts its own copy of the servicing program on the Web server.A busy Web server is likely to run out of memory when it services many forms simultaneously; thus, as interactive Web sites have gained popularity, Web server vendors have developed new technologies to process form inputs without starting a new copy of the servicing program for each browser input. Examples of these technologies for communicating with Web servers include Java Servlets [8] and Microsoft’s [7]; they allow a single copy of the servicing program to service multiple users without starting multiple instances of the program. has introduced many new capabilities to server-side Web programming, including a new category of elements called server controls that generate as many as 200 HTML tags and one or more JavaScript [9] functions from a single server control tag. Server controls support the processing of user events, such as clicking a mouse orentering text at either the client browser or the Web server. 
Server controls also encourage the separation of programming code into different files and/or areas from the HTML tags and text of a Web page, thus allowing HTML designers and programmers to work together more effectively.6. Classic ASP pages used ActiveX Data Objects (ADO) to access and modify databases. ADO is a programming interface used to access data. This method was efficient and fairly easy for developers to learn and implement. However, ADO suffered from a dated model for data access with many limitations, such as the inability to transmit data so it is easily and universally accessible. Coupled with the move from standard SQL databases to more distributed types of data (such as XML), Microsoft introduced .Although is known as the next evolution of ADO, it is very different from its predecessor. Whereas ADO was connection-based, relies on short, XML message-based interactions with data sources. This makes much more efficient for Internet-based applications.A fundamental change from ADO to was the adoption of XML for data exchanges. XML is a text-based markup language, similar to HTML that presents an efficient way to represent data. This allows to reach and exchange. It also gives much better performance because XML data is easily converted to and from any type of data.Another major change is the way interacts with databases. ADO requires “locking” of database resources and lengthy connections for its applications, but does not; it uses disconnected data sets, which eliminates lengthy connectionsnd database locks. This makes much more scalable because users are not in contention for database resources.In there are two core objects that allow us to work with data initially: the DataReader and the DataSet. In any .NET data access page, before we connect to a database, we first have to import all the necessary namespaces that will allow us to work with the objects required. Namespace in .NET is a set of classes that can be used while creating an application. The .NET Framework has about 3,500 classes which can be accessed through a namespace. The application will be using a technology known as Open DataBase Connectivity (ODBC) to access the database; therefore we must first import necessary namespaces. Below is a sample namespace declaration used by .NET.7.DataSetThe dataset is a disconnected, in-memory representation of data. It can be considered as a local copy of the relevant portions of the database. The DataSet resides in memory and the data in it can be manipulated and updated independent of the database. If necessary, changes made to the dataset can be applied to the central database. The data in DataSet can be loaded from any valid data source such as a text file, an XML database, Microsoft SQL server database, an Oracle database or MySQL database.8.The Connection ObjectThe Connection object creates the connection to the database. Microsoft VisualStudio .NET provides two types of Connection classes: the SqlConnection object, which is designed specifically to connect to Microsoft SQL Server 7.0 or later, and the OleDbConnection object, which can provide connections to a wide range of database types like Microsoft Access and Oracle. The Connection object contains all of the information required to open a connection to the database.9.The Command ObjectThe Command object is represented by two corresponding classes: SqlCommand and OleDbCommand. Command objects are used to execute commands to a database across a data connection. 
The Command objects can be used to execute stored procedures on the database, SQL commands, or return complete tables directly. Command objectsprovide three methods that are used to execute commands on the database: ExecuteNonQuery: Executes commands that have no return values such as INSERT, UPDATE or DELETE.ExecuteScalar: Returns a single value from a database queryExecuteReader: Returns a result set by way of a DataReader objectThe DataReader ObjectThe DataReader object provides a read-only, connected stream recordset from a database. Unlike other components of the Data Provider, DataReader objects cannot be directly instantiated. Rather, the DataReader is returned as the result of the Command object's ExecuteReader method. The SqlCommand.ExecuteReader method returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns an OleDbDataReader object. The DataReader can provide rows of data directly to application logic when one does not need to keep the data cached in memory. Because only one row is in memory at a time, the DataReader provides the lowest overhead interms of system performance but requires the exclusive use of an open Connection object for the lifetime of theDataReader.10.TheDataAdapter ObjectThe DataAdapter is the class at the core of ADO .NET's disconnected data access. It is essentially the middleman facilitating all communication between the database and a DataSet. The DataAdapter is used either to fill a DataTable or DataSet with its Fill method. After the memory -resident data has been manipulated, the DataAdapter can commit the changes to the database by calling the Update method. The DataAdapter provides four properties that represent database commands:SelectCommandInsertCommandDeleteCommandUpdateCommandWhen the Update method is called, changes in the DataSet are copied back to the database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is executed. follows the below process, Figure 24, to connect to the database and retrieve data to the application .外文参考电子商务网站在线订销售的设计和实施SwapnaKodali摘要B/C方面的电子商务(电子商务)是最明显的商业利用万维网上。
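To pull together the ADO.NET objects described in the section above (Connection, Command, DataReader, and the disconnected DataAdapter/DataSet pair), here is a minimal, hedged sketch in C#. It assumes the MySQL-via-ODBC setup the text describes; the driver name in the connection string, the books table, and its columns are hypothetical and would need to match the actual deployment:

```csharp
using System;
using System.Data;
using System.Data.Odbc;

class BookStoreDataAccess
{
    static void Main()
    {
        // Connection string is an assumption; the exact DSN/driver name depends on
        // the MySQL ODBC driver installed on the server.
        const string connStr =
            "Driver={MySQL ODBC 5.3 Unicode Driver};Server=localhost;" +
            "Database=bookstore;Uid=webuser;Pwd=secret;";

        using (var conn = new OdbcConnection(connStr))
        {
            conn.Open();

            // Connected, read-only access: Command + DataReader (one row in memory at a time).
            using (var cmd = new OdbcCommand("SELECT id, title, price FROM books", conn))
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1} ({2})",
                        reader.GetInt32(0), reader.GetString(1), reader.GetValue(2));
            }

            // Disconnected access: the DataAdapter fills an in-memory DataSet,
            // changes are made locally, then pushed back with Update().
            var adapter = new OdbcDataAdapter("SELECT id, title, price FROM books", conn);
            var builder = new OdbcCommandBuilder(adapter); // generates INSERT/UPDATE/DELETE
            var ds = new DataSet();
            adapter.Fill(ds, "books");

            if (ds.Tables["books"].Rows.Count > 0)
            {
                DataRow first = ds.Tables["books"].Rows[0];
                first["price"] = Convert.ToDecimal(first["price"]) + 1.00m; // local edit
                adapter.Update(ds, "books");                                // commit to MySQL
            }
        }
    }
}
```

The OdbcCommandBuilder is one convenient way to let the adapter derive its update commands from the SELECT statement (which must include the primary key); in a real site the four command properties are often written explicitly instead.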
网页设计英文翻译郑州轻工业学院专科毕业设计,论文,英文翻译题目个人博客网站的设计与实现学生姓名吕俊涛专业班级计算机网络技术网页设计09级1班学号 620913510120院 (系) 软件职业技术学院指导教师(职称) 李辉(助教) 完成时间 2011年5月 20日翻译题目: 2.0 专业班级:计算机网络技术(网页设计)09级1班姓名:吕俊涛学号:620913510120英文原文 2.0 is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications. The first version of offered several important advantages over previous Web development models. 2.0 improves upon that foundation by adding support for several new and exciting features in the areas of developer productivity, administration and management, extensibility, and performance:1. Developer Productivity 2.0 encapsulates common Web tasks into application services and controls that can be easily reused across web sites. With thesebasic building blocks, many scenarios can now be implemented with far less custom code than was required in previous versions. With 2.0 it is possible to significantly reduce the amount of code and concepts necessary to build common scenarios on the web.(1)New Server Controls. 2.0 introduces many new server controls that enable powerful declarative support for data access, login security, wizard navigation, menus, tree views, portals, and more. Many of these controls take advantage of core application services in for scenarios like data access, membership and roles, and personalization. Some of the new families of controls in 2.0 are described below.(2)Data Controls. Data access in 2.0 can be accomplished completely declaratively (no code) using the new data-bound and data source controls. There are new data source controls to representdifferent data backbends such as SQL database, business objects, and XML, and there are new data-bound controls for rendering common UI for data, such as grid view, details view, and form view...(3)Navigation Controls. The navigation controls provide common UIfor navigating between pages in your site, such as tree view, menu, and sitemap path. These controls use the site navigation service in 2.0 to retrieve the custom structure you have defined for your site.(4)Login Controls. The new login controls provide the buildingblocks to add authentication and authorization-based UI to your site, such as login forms, create user forms, password retrieval, and customUI for logged in users or roles. These controls use the built-in membership and role services in 2.0 to interact with the userand role information defined for your site.1翻译题目: 2.0 专业班级:计算机网络技术(网页设计)09级1班姓名:吕俊涛学号:620913510120(5)Web Part Controls. Web parts are an exciting new family ofcontrols that enable you to add rich, personalized content and layout to your site, as well as the ability to edit that content and layoutdirectly from your application pages. These controls rely on the personalization services in 2.0 to provide a unique experiencefor each user in your application.(6)Master Pages. This feature provides the ability to define common structure and interface elements for your site, such as a page header, footer, or navigation bar, in a common location called a "master page", to be shared by many pages in your site. In one simple place you can control the look, feel, and much of functionality for an entire Web site. This improves the maintainability of your site and avoids unnecessary duplication of code for shared site structure or behavior.(7)Themes and Skins. The themes and skins features in 2.0 allow for easy customization of your site's look-and-feel. 
You candefine style information in a common location called a "theme", andapply that style information globally to pages or controls in your site.Like Master Pages, this improves the maintainability of your site and avoids unnecessary duplication of code for shared styles.(8)Personalization. Using the new personalization services in 2.0 you can easily create customized experiences within Web applications. The Profile object enables developers to easily build strongly-typed, sticky data stores for user accounts and build highly customized, relationship based experiences. At the same time, a developer can leverage Web Parts and the personalization service to enable Web site visitors to completely control the layout and behavior of the site, with the knowledge that the site is completely customized for them. Personalization scenarios are now easier to build than ever before and require significantly less code and effort to implement.(9)Localization. Enabling globalization and localization in Websites today is difficult, requiring large amounts of custom code and resources. 2.0 and Visual Studio 2005 provide tools and infrastructure to easily build Localizable sites including the ability to auto-detect incoming locales and display the appropriate locale based UI. Visual Studio 2005 includes built-in tools to dynamically generate resource files and localization references. Together, building localized applications becomes a simple and integrated part of the development experience.2. Administration and Management 2.0 is designed with administration and manageability in mind. We recognize that while simplifying the development experience isimportant, deployment and maintenance in a production environment is also a key component of an application's lifetime. 2.0 introduces several new2翻译题目: 2.0 专业班级:计算机网络技术(网页设计)09级1班姓名:吕俊涛学号:620913510120features that further enhance the deployment, management, and operations of servers.(1)Configuration API. 2.0 contains new configurationmanagement APIs, enabling users to programmatically build programsor scripts that create, read, and update Web.config and machine.config configuration files.(2) MMC Admin Tool. 2.0 provides a newcomprehensive admin tool that plugs into the existing IIS Administration MMC, enabling an administrator to graphically read or change common settings within our XML configuration files.(3)Pre-compilation Tool. 2.0 delivers a new application deployment utility that enables both developers and administrators to precompiled a dynamic application prior to deployment. This recompilation automatically identifies any compilation issues anywhere within the site, as well as enables applications to be deployed without any source being stored on the server (one can optionally removethe content of .asp files as part of the compile phase), further protecting your intellectual property.(4)Health Monitoring and Tracing. 2.0 also provides newhealth-monitoring support to enable administrators to be automatically notified when an application on a server starts to experience problems. New tracing features will enable administrators to capture run-time and request data from a production server to better diagnose issues. 2.0 is delivering features that will enable developers andadministrators to simplify the day-to-day management and maintenance of their Web applications.3. Flexible Extensibility 2.0 is a well-factored and open system, where any component can be easily replaced with a custom implementation. 
3. Flexible Extensibility

ASP.NET 2.0 is a well-factored and open system, where any component can be easily replaced with a custom implementation. Whether it is server controls, page handlers, compilation, or core application services, you'll find that all are easily customizable and replaceable to tailor to your needs. Developers can plug in custom code anywhere in the page lifecycle to further customize ASP.NET 2.0 to their needs.

(1) Provider-driven Application Services. ASP.NET 2.0 now includes built-in support for membership (user name/password credential storage) and role management services out of the box. The new personalization service enables quick storage and retrieval of user settings and preferences, facilitating rich customization with minimal code. The new site navigation system enables developers to quickly build link structures consistently across a site. As all of these services are provider-driven, they can be easily swapped out and replaced with your own custom implementation. With this extensibility option, you have complete control over the data store and schema that drives these rich application services.

(2) Server Control Extensibility. ASP.NET 2.0 includes improved support for control extensibility, such as more base classes that encapsulate common behaviors, improved designer support, more APIs for interacting with client-side script, metadata-driven support for new features like themes and accessibility verification, better state management, and more.

(3) Data Source Controls. Data access in ASP.NET 2.0 is now performed declaratively using data source controls on a page. In this model, support for new data backend storage providers can be easily added by implementing custom data source controls. Additionally, the SqlDataSource control that ships in the box has built-in support for any managed provider that implements the new provider factory model in ADO.NET (a sketch of that factory model follows this section).

(4) Compilation Build Providers. Dynamic compilation in ASP.NET 2.0 is now handled by extensible compilation build providers, which associate a particular file extension with a handler that knows how to compile that extension dynamically at runtime. For example, .resx files can be dynamically compiled to resources, .wsdl files to web service proxies, and .xsd files to typed DataSet objects. In addition to the built-in support, it is easy to add support for additional extensions by implementing a custom build provider and registering it in Web.config.

(5) Expression Builders. ASP.NET 2.0 introduces a declarative new syntax for referencing code to substitute values into the page, called Expression Builders. ASP.NET 2.0 includes expression builders for referencing string resources for localization, connection strings, application settings, and profile values. You can also write your own expression builders to create your own custom syntax to substitute values in a page rendering.
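Item (3) above refers to the new ADO.NET provider factory model, which can also be used directly from C#. The hedged sketch below shows that model; the provider invariant name, connection string, and table name are assumptions for illustration, not values from the original article.

using System;
using System.Data.Common;

public static class ProviderFactorySketch
{
    // Use the ADO.NET 2.0 provider factory model to open a connection and
    // run a query without hard-coding a specific provider's classes.
    public static void Run()
    {
        // Any registered provider invariant name would work here, e.g. "System.Data.OleDb".
        DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");

        using (DbConnection connection = factory.CreateConnection())
        {
            // Illustrative connection string; a real application would read it from configuration.
            connection.ConnectionString =
                "Data Source=(local);Initial Catalog=Northwind;Integrated Security=True";
            connection.Open();

            DbCommand command = connection.CreateCommand();
            command.CommandText = "SELECT ProductName FROM Products";

            using (DbDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}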
4. Performance and Scalability

ASP.NET is built to perform, using a compiled execution model for handling page requests and running on the world's fastest web server, Internet Information Services. ASP.NET 2.0 also introduces key performance benefits over previous versions.

(1) 64-Bit Support. ASP.NET 2.0 is now 64-bit enabled, meaning it can take advantage of the full memory address space of new 64-bit processors and servers. Developers can simply copy existing 32-bit ASP.NET applications onto a 64-bit ASP.NET 2.0 server and have them automatically be JIT compiled and executed as native 64-bit applications (no source code changes or manual re-compile are required).

(2) Caching Improvements. ASP.NET 2.0 also now includes automatic database server cache invalidation. This powerful and easy-to-use feature allows developers to aggressively output cache database-driven page and partial page content within a site and have ASP.NET automatically invalidate these cache entries and refresh the content whenever the back-end database changes. Developers can now safely cache time-critical content for long periods without worrying about serving visitors stale data (a minimal sketch follows this section).
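The database cache invalidation described in item (2) is exposed through the SqlCacheDependency class. The hedged C# sketch below assumes the target database and table have been enabled for notifications (for example with the aspnet_regsql.exe tool) and that a database entry named "Northwind" exists under the sqlCacheDependency section of Web.config; the database and table names are illustrative.

using System.Data;
using System.Web;
using System.Web.Caching;

public static class SqlCacheSketch
{
    // Cache a DataTable and let ASP.NET drop it automatically
    // when the underlying database table changes.
    public static void CacheProducts(DataTable products)
    {
        // "Northwind" must match a database entry configured for SQL cache dependencies,
        // and the Products table must be enabled for change notifications.
        SqlCacheDependency dependency = new SqlCacheDependency("Northwind", "Products");

        HttpContext.Current.Cache.Insert("ProductsTable", products, dependency);
    }
}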
The remainder of the Quick Start presents practical examples of these and other features in ASP.NET.

.NET technologies give developers a new development framework and have become an exciting, revolutionary new technology. .NET is a thorough solution offered to the information technology industry: whether for Web developers, component developers, information developers, or any developer on the Windows platform, .NET is a new development model that enables developers to complete their work better and faster. SQL Server 2000 is a database product with full support for the Web and a new generation of Web application development tool; the perfect combination of the two has become the mainstream direction for developing database-driven Web applications.

Chinese translation: ASP.NET 2.0 is a programming framework built on the common language runtime that can be used on a server to build powerful Web applications.

English Original 1: ADO.NET Technology

ADO.NET is designed by Microsoft to solve the problems of the Web and distributed applications. As a database access framework, ADO.NET can handle the disconnected, N-tier data architecture that Web applications require, so its superior performance when accessing SQL Server databases has been widely applied. This paper discusses this topic.

I. The major objects of ADO.NET

Many objects in ADO.NET are similar to objects in ADO, but the objects in ADO.NET are more powerful. At the same time, besides the Connection, Parameter and Command objects, ADO.NET also adds many new objects and programming interfaces, such as DataSet, DataView, DataAdapter, DataReader and DataSetCommand, which make it simpler to operate the database.

I.a DataSet object: The DataSet is the core of ADO.NET; it is specially used to handle the data read from the data store and to keep it in local memory in a disconnected fashion. We can operate on data obtained from different data sources in the same way: the behavior of the DataSet is consistent whether the underlying database is SQL Server or Oracle. A DataSet can contain any number of DataTables, and every DataTable corresponds to a Table or View in the database. Generally speaking, a DataTable corresponds to a database table as a set of DataRow and DataColumn objects. The DataTable is responsible for maintaining each data row's original state and current state; together with the disconnected access pattern this resolves concurrency problems of database access and reduces the pressure on the database server.

I.b DataReader object: When browsing a large amount of data in a scrolling manner, a lot of memory is occupied and performance drops. For instance, when a connection (Connection) reads 1000 rows of records from the database using the traditional ADO Recordset, memory must be allocated to this connection for those 1000 rows until the lifetime of the connection ends. If many users perform the same operation against the same computer at the same time, memory will be used up. To solve these problems, the .NET Framework offers the DataReader object, which returns from the database a read-only, forward-only stream (Stream) of data, with only one record existing in memory at any time.

I.c DataView object: A DataView object represents one way of looking at a DataTable. The default view presents the data as a table, in the ascending row order and original column positions in which the data was retrieved from the database table; this usually depends on the ascending or descending Sort Order given in the query string when the query is designed. ADO.NET offers a very flexible way for users to define different views with their own sort order, filter conditions, searches, and other properties, and then to edit, browse, or display the data in the DataTable. For instance, we can create a DataView object and specify a sort order different from the default view (DefaultView), or use a condition filter (Filter) to view only part of the data in the data table. In this way, two or more controls can be bound to the same data table while displaying different data (a small self-contained sketch follows this section).

I.d DataAdapter object: ADO.NET establishes and initializes data tables through the DataAdapter, so as to keep the data in memory in combination with the DataSet object. The DataAdapter object hides the details of working with the Connection and Command objects. The DataAdapter object allows the data obtained from a DataSet object to be written back to the data source (DataSource), and it can also retrieve data from the data source. In the same way, it can also operate on the underlying stored data by adding, deleting, or modifying records.
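To make the DataView behaviour described in I.c concrete, here is a small self-contained C# sketch; the table, column, and filter values are invented for illustration and are not part of the translated article.

using System;
using System.Data;

public static class DataViewSketch
{
    // Build an in-memory DataTable, then show two DataViews over the same
    // table with different sort orders and filters, as section I.c describes.
    public static void Run()
    {
        DataTable students = new DataTable("Students");
        students.Columns.Add("Name", typeof(string));
        students.Columns.Add("Score", typeof(int));
        students.Rows.Add("Alice", 92);
        students.Rows.Add("Bob", 75);
        students.Rows.Add("Carol", 88);

        // One view: all rows, sorted by score descending.
        DataView byScore = new DataView(students);
        byScore.Sort = "Score DESC";

        // A second view over the same table: only high scores, sorted by name.
        DataView highScores = new DataView(students);
        highScores.RowFilter = "Score >= 85";
        highScores.Sort = "Name ASC";

        foreach (DataRowView row in highScores)
        {
            Console.WriteLine(row["Name"] + ": " + row["Score"]);
        }
    }
}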
II. Accessing the Database

II.a Analysis of an example of accessing a SQL Server database with ADO.NET technology

[Abstract] As distributed database applications are applied more and more thoroughly, permanent database connections consume a considerable amount of resources. When accessing a SQL Server database, this problem can be solved well with ADO.NET technology. This paper mainly discusses the major objects of ADO.NET and further analyses the process of accessing SQL Server (a minimal sketch of this kind of access appears at the end of this document).

[Keywords] ADO.NET, access, XML, independent

Drawing the base map, adding new map layers according to the defined functions, setting layer visibility, and so on:

// constructor
function FlashMap(width:Number, height:Number, Geo_x:Number, Geo_y:Number) {
    layersCount = 0;
    layers = new Array();
    stageWidth = width;
    stageHeight = height;
    x = Geo_x;
    y = Geo_y;
    // container movie clips for the map and its layers
    Map_mc = _root.createEmptyMovieClip("Map_mc", 0);
    Map_mc.createEmptyMovieClip("Map_mc", -200);
}
// create the map object
Map = new FlashMap(stageWidth, stageHeight, Mapx, Mapy);
// load the map
function initMap() {
    Map.drawMap();
}

II.b Querying the map data: Data queries are mainly carried out by mouse clicks and box selection on the Flash movie, which is loaded with the point, line and polygon layers. By querying the current map layer, we can determine whether the area selected with the mouse contains the target data; if it does, the query result is simply returned in a new form and shown to the user. The mouse-selection script code:

function select() {
    var selectedRegion:Array = new Array();
    for (j = 0; j < layersCount; j++) {
        var coverLayer:MapLayer = layers[j];
        for (i = 0; i < coverLayer.Regions_ary.length; i++) {
            var Region:MapRegion = MapRegion(coverLayer.Regions_ary[i]);
            // hit-test the mouse position against each region of the layer
            if (Region.isInRegion(_root._xmouse, _root._ymouse)) {
                selectedRegion.push(coverLayer);
                selectedRegion.push(Region);
            }
        }
    }
}

III. Concluding remarks

Practice proves that, for campus geographic information whose database requirements are not high and whose data mainly needs to be displayed, this is a practical, feasible, easy-to-develop and low-cost method. Moreover, a web electronic map produced with this method is rich in colour and varied in content; if virtual reality walk-through technology for planar web electronic maps is combined with this production method, a spatial walk-through of the campus geographic environment can be realized. The display effect of this kind of web electronic map is almost impossible for traditional GIS software to reach, and in fields such as campus presentation and residential district presentation this method has great development potential.

Chinese Translation 1: ADO.NET Technology. ADO.NET is designed by Microsoft to solve the problems of the Web and distributed applications.
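As a closing illustration of the disconnected access pattern discussed in I.d and II.a, the following hedged C# sketch fills a DataSet from SQL Server with a SqlDataAdapter, modifies it offline, and writes the changes back; the connection string, table, and column names are assumptions, not taken from the original text, and the Students table is assumed to have Id as its primary key.

using System.Data;
using System.Data.SqlClient;

public static class DataAdapterSketch
{
    // Disconnected round trip: fill a DataSet, edit it in memory,
    // then let the adapter push the changes back to SQL Server.
    public static void Run()
    {
        // Illustrative connection string and query.
        string connectionString =
            "Data Source=(local);Initial Catalog=School;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT Id, Name, Score FROM Students", connection);

            // Generates the INSERT/UPDATE/DELETE commands from the SELECT above.
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

            DataSet dataSet = new DataSet();
            adapter.Fill(dataSet, "Students");   // connection is opened and closed automatically

            // Work with the data offline: no connection is held while editing.
            DataTable table = dataSet.Tables["Students"];
            if (table.Rows.Count > 0)
            {
                table.Rows[0]["Score"] = 99;
            }

            // Reconnect just long enough to write the changes back.
            adapter.Update(dataSet, "Students");
        }
    }
}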