1 Efficient High-Speed Data Paths for IP Forwarding using Host Based Routers
SIGNAL INTEGRITY
Raymond Y. Chen, Sigrid, Inc., Santa Clara, California

Introduction
In the realm of high-speed digital design, signal integrity has become a critical issue and poses increasing challenges to design engineers. Many signal integrity problems are electromagnetic phenomena in nature and hence related to the EMI/EMC discussions in the previous sections of this book. In this chapter, we will discuss what the typical signal integrity problems are, where they come from, why it is important to understand them, and how we can analyze and solve these issues. Several software tools available at present for signal integrity analysis, and current trends in this area, will also be introduced.

The term Signal Integrity (SI) addresses two concerns in the electrical design aspects: the timing and the quality of the signal. Does the signal reach its destination when it is supposed to? And when it gets there, is it in good condition? The goal of signal integrity analysis is to ensure reliable high-speed data transmission. In a digital system, a signal is transmitted from one component to another in the form of logic 1 or 0, which is actually at certain reference voltage levels. At the input gate of a receiver, voltage above the reference value Vih is considered logic high, while voltage below the reference value Vil is considered logic low. Figure 14-1 shows the ideal voltage waveform in the perfect logic world, whereas Figure 14-2 shows what a signal looks like in a real system. More complex data, composed of strings of 1s and 0s, are actually continuous voltage waveforms. The receiving component needs to sample the waveform in order to obtain the binary encoded information. The data sampling process is usually triggered by the rising edge or the falling edge of a clock signal, as shown in Figure 14-3. It is clear from the diagram that the data must arrive at the receiving gate on time and settle down to a non-ambiguous logic state when the receiving component starts to latch in. Any delay of the data or distortion of the data waveform will result in a failure of the data transmission. Imagine that the signal waveform in Figure 14-2 exhibits excessive ringing into the logic gray zone while the sampling occurs; then the logic level cannot be reliably detected.

SI Problems

Typical SI Problems
"Timing" is everything in a high-speed system. Signal timing depends on the delay caused by the physical length that the signal must propagate. It also depends on the shape of the waveform when the threshold is reached. Signal waveform distortions can be caused by different mechanisms, but there are three noise problems of most concern:
• Reflection Noise – due to impedance mismatch, stubs, vias and other interconnect discontinuities.
• Crosstalk Noise – due to electromagnetic coupling between signal traces and vias.
• Power/Ground Noise – due to parasitics of the power/ground delivery system during drivers' simultaneous switching output (SSO). It is sometimes also called Ground Bounce, Delta-I Noise or Simultaneous Switching Noise (SSN).
Besides these three kinds of SI problems, there are other Electromagnetic Compatibility or Electromagnetic Interference (EMC/EMI) problems that may contribute to signal waveform distortions.
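To make the latching condition above concrete, the short Python sketch below classifies a sampled input voltage against the receiver thresholds just described and flags samples caught in the gray zone. The threshold values and the helper name are illustrative assumptions, not values taken from this chapter.

```python
def classify_level(v_sample, vih=2.0, vil=0.8):
    """Classify a sampled input voltage against receiver thresholds.

    vih and vil are assumed example thresholds (volts); real values
    come from the receiving component's datasheet.
    """
    if v_sample >= vih:
        return "logic 1"
    if v_sample <= vil:
        return "logic 0"
    # Between Vil and Vih: the gray zone where ringing makes the
    # latched value unpredictable.
    return "ambiguous (gray zone)"

# Example: the middle sample has rung down into the gray zone at the clock edge
for v in (3.1, 1.4, 0.3):
    print(v, "->", classify_level(v))
```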
When SI problems happen and the system noise margin requirements are not satisfied – the input to a switching receiver makes an excursion below the Vih minimum or above the Vil maximum; the input to a quiet receiver rises above the Vil maximum or falls below the Vih minimum; or power/ground voltage fluctuations disturb the data in the latch – then logic errors, data drops, false switching, or even system failure may occur. These types of noise faults are extremely difficult to diagnose and solve after the system is built or prototyped. Understanding and solving these problems before they occur will eliminate having to deal with them further into the project cycle, and will in turn cut down the development cycle and reduce the cost [1]. In the later part of this chapter, we will investigate further the physical behavior of these noise phenomena, their causes, their electrical models for analysis and simulation, and the ways to avoid them.

1. Where SI Problems Happen
Since signals travel through all kinds of interconnections inside a system, any electrical impact happening at the source end, along the path, or at the receiving end will have great effects on the signal timing and quality. In a typical digital system environment, signals originating from the off-chip drivers on the die (the chip) go through C4 or wire-bond connections to the chip package. The chip package could be a single chip carrier or a multi-chip module (MCM). Through the solder bumps of the chip package, signals go to the printed circuit board (PCB) level. At this level, typical packaging structures include the daughter card, motherboard or backplane. Then signals continue on to another system component, such as an ASIC (Application Specific Integrated Circuit) chip, a memory module or a termination block. The chip packages, printed circuit boards, as well as the cables and connectors, form the so-called different levels of electronic packaging systems, as illustrated in Figure 14-4. In each level of the packaging structure, there are typical interconnects, such as metal traces, vias, and power/ground planes, which form electrical paths to conduct the signals. It is the packaging interconnection that ultimately influences the signal integrity of a system.

2. SI In Electronic Packaging
Technology trends toward higher speed and higher density devices have pushed package performance to its limits. The clock rate of present personal computers is approaching the gigahertz range. As signal rise time becomes less than 200 ps, the significant frequency content of digital signals extends up to at least 10 GHz. This requires interconnects and packages to be capable of supporting very fast varying and broadband signals without degrading signal integrity to unacceptable levels. Chip design and fabrication technology have undergone a tremendous evolution: gate lengths, having scaled from 50 µm in the 1960s to 0.18 µm today, are projected to reach 0.1 µm in the next few years; on-chip clock frequency is doubling every 18 months; and the intrinsic delay of the gate is decreasing exponentially with time to a few tens of picoseconds. Package design, however, has lagged considerably. With current technology, the package interconnection delay dominates the system timing budget and becomes the bottleneck of high-speed system design.
It is generally accepted today that package performance is one of the major limiting factors of overall system performance. Advances in high-performance sub-micron microprocessors, the arrival of gigabit networks, and the need for broadband Internet access necessitate the development of high-performance packaging structures for reliable high-speed data transmission inside every electronic system. Signal integrity is one of the most important factors to be considered when designing these packages (chip carriers and PCBs) and integrating them together.

3. SI Analysis

3.1. SI Analysis in the Design Flow
Signal integrity is not a new phenomenon, and it did not always matter in the early days of the digital era. But with the explosion of information technology and the arrival of the Internet age, people need to be connected all the time through various high-speed digital communication/computing systems. In this enormous market, signal integrity analysis will play a more and more critical role in guaranteeing the reliable system operation of these electronic products. Without pre-layout SI guidelines, prototypes may never leave the bench; without post-layout SI verification, products may fail in the field. Figure 14-5 shows the role of SI analysis in the high-speed design process. From this chart, we notice that SI analysis is applied throughout the design flow and tightly integrated into each design stage. It is also very common to categorize SI analysis into two main stages: pre-route analysis and post-route analysis.

In the pre-route stage, SI analysis can be used to select technology for I/Os, clock distributions, chip package types, component types, board stackups, pin assignments, net topologies, and termination strategies. With various design parameters considered, batch SI simulations on different corner cases will progressively formulate a set of optimized guidelines for the physical designs of the later stage. SI analysis at this stage is also called constraint-driven SI design, because the guidelines developed will be used as constraints for component placement and routing. The objective of constraint-driven SI design at the pre-route stage is to ensure that the signal integrity of the physical layout, which follows the placement/routing constraints for the noise and timing budget, will not exceed the maximum allowable noise levels. Comprehensive and in-depth pre-route SI analysis will cut down redesign efforts and place/route iterations, and eventually reduce the design cycle.

With an initial physical layout, post-route SI analysis verifies the correctness of the SI design guidelines and constraints. It checks SI violations in the current design, such as reflection noise, ringing, crosstalk and ground bounce. It may also uncover SI problems that were overlooked in the pre-route stage: because post-route analysis works with physical layout data rather than estimated data or models, it should produce more accurate simulation results. When SI analysis is thoroughly implemented throughout the whole design process, a reliable high-performance system can be achieved with fast turn-around.
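As a toy illustration of the constraint-driven idea described above, the Python sketch below checks routed net lengths against pre-route length constraints and reports violations for post-route review. The constraint values, net classes, and net names are invented purely for the example and are not taken from this chapter.

```python
# Assumed pre-route constraints: maximum routed length per net class, in mm
MAX_LENGTH_MM = {"clk": 50.0, "ddr_data": 75.0, "misc": 150.0}

def check_length_constraints(routed_nets):
    """Return (net, length, limit) for every net that violates its class limit."""
    violations = []
    for net, (net_class, length_mm) in routed_nets.items():
        limit = MAX_LENGTH_MM.get(net_class, MAX_LENGTH_MM["misc"])
        if length_mm > limit:
            violations.append((net, length_mm, limit))
    return violations

# Invented example layout data: net -> (class, routed length in mm)
layout = {"CLK0": ("clk", 62.0), "DQ5": ("ddr_data", 70.0), "GPIO3": ("misc", 40.0)}
print(check_length_constraints(layout))   # [('CLK0', 62.0, 50.0)]
```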
In the past, physical designs generated by layout engineers were merely mechanical drawings, and very little or no attention was paid to signal integrity issues. As the trend toward higher-speed electronic system design continues, system engineers responsible for developing a hardware system are getting involved in SI and will most likely employ design guidelines and routing constraints from a signal integrity perspective. Often, they simply do not know the answers to some of the SI problems, because most of their knowledge comes from the engineers who worked on previous generations of products. To face this challenge, a design team (see Figure 14-6) nowadays needs SI engineers who specialize in this emerging technology field. When a new technology is under consideration, such as a new device family or a new fabrication process for chip packages or boards, SI engineers will carry out the electrical characterization of the technology from an SI perspective and develop layout guidelines by running SI modeling and simulation software [2]. These SI tools must be accurate enough to model individual interconnections such as vias, traces, and plane stackups. They must also be very efficient, so that what-if analysis with alternative driver/load models and termination schemes can easily be performed. In the end, SI engineers will determine a set of design rules and pass them to the design engineers and layout engineers. The design engineers, who are responsible for the overall system design, then need to ensure the design rules are successfully employed. They may run some SI simulations on a few critical nets once the board is initially placed and routed, and they may run post-layout verifications as well. The SI analysis they carry out involves many nets, so the simulation must be fast, though it may not require the kind of accuracy that SI engineers are looking for. Once the layout engineers get the placement and routing rules specified in SI terms, they need to generate an optimized physical design based on these constraints, and they will report any SI violations in the routed system using SI tools. If any violations are spotted, layout engineers will work closely with design engineers and SI engineers to solve these possible SI problems.

3.2. Principles of SI Analysis
A digital system can be examined at three levels of abstraction: logic, circuit theory, and electromagnetic (EM) fields. The logic level, the highest of the three, is where SI problems can be most easily identified. EM fields, located at the lowest level of abstraction, comprise the foundation that the other levels are built upon [3]. Most SI problems are EM problems in nature, such as reflection, crosstalk and ground bounce. Therefore, understanding the physical behavior of SI problems from an EM perspective is very helpful. For instance, in the multi-layer packaging structure shown in Figure 14-7, a switching current in via a will generate EM waves propagating away from that via in the radial direction between the metal planes. The fields developed between the metal planes will cause voltage variations between the planes (voltage is the integration of the E-field). When the waves reach other vias, they will induce currents in those vias, and the induced currents will in turn generate EM waves propagating between the planes. When the waves reach the edges of the package, part of them will radiate into the air and part of them will be reflected back. When the waves bounce back and forth inside the packaging structure and superimpose on each other, resonance will occur.
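To give a feel for where such plane-pair resonances land in frequency, the short sketch below evaluates the standard resonance relation for a lossless rectangular parallel-plate cavity, f(m,n) = c / (2·sqrt(εr)) · sqrt((m/a)² + (n/b)²). The board dimensions and dielectric constant used here are assumed example values, not data from this chapter.

```python
import math

C0 = 3.0e8  # speed of light in vacuum, m/s

def plane_pair_resonance_hz(m, n, a_m, b_m, er=4.3):
    """Resonant frequency (Hz) of mode (m, n) for a lossless rectangular
    parallel-plate cavity of size a_m x b_m filled with a dielectric er."""
    return C0 / (2.0 * math.sqrt(er)) * math.hypot(m / a_m, n / b_m)

# Assumed example: a 15 cm x 15 cm power/ground pair in FR-4 (er ~ 4.3)
for mode in ((1, 0), (0, 1), (1, 1)):
    f = plane_pair_resonance_hz(*mode, a_m=0.15, b_m=0.15)
    print(mode, round(f / 1e6), "MHz")
```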
Wave propagation, reflection, coupling and resonance are the typical EM phenomena happening inside a packaging structure during signal transients. Even though EM full-wave analysis is much more accurate than circuit analysis in the modeling of packaging structures, the current common approaches to interconnect modeling are based on circuit theory, and SI analysis is carried out with circuit simulators. This is because field analysis usually requires much more complicated algorithms and much larger computing resources than circuit analysis, and circuit analysis provides good SI solutions at low frequency as an electrostatic approximation.

Typical circuit simulators, such as the different flavors of SPICE, employ nodal analysis and solve voltages and currents in lumped circuit elements such as resistors, capacitors and inductors. In SI analysis, an interconnect will sometimes be modeled as a lumped circuit element. For instance, a piece of trace on the printed circuit board can simply be modeled as a resistor for its finite conductivity. With this lumped circuit model, the voltages at both ends of the trace are assumed to change instantaneously, and the travel time for the signal to propagate between the two ends is neglected. However, if the signal propagation time along the trace has to be considered, a distributed circuit model, such as a cascaded R-L-C network, will be adopted to model the trace. To determine whether the distributed circuit model is necessary, the rule of thumb is: if the signal rise time is comparable to the round-trip propagation time, you need to consider using the distributed circuit model.

For example, a 3 cm long stripline trace in an FR-4 based printed circuit board exhibits about 200 ps of propagation delay. For a 33 MHz system, assuming the signal rise time to be 5 ns, the trace delay may be safely ignored; however, in a 500 MHz system with a 300 ps rise time, the 200 ps propagation delay on the trace becomes important, and a distributed circuit model has to be used to model the trace. Through this example, it is easy to see that in high-speed design, with ever-decreasing signal rise times, distributed circuit models must be used in SI analysis.

Here is another example. Consider a pair of solid power and ground planes in a printed circuit board with dimensions of 15 cm by 15 cm. From the circuit theory point of view, it is very natural to think of the planes as acting as a large, perfect, lumped capacitor. The capacitor model C = εA/d, an electrostatic solution, assumes that the voltage is the same anywhere on the plane and that all the stored charge is available instantaneously anywhere along the plane. This is true at DC and low frequency. However, when logic gates switch with a rise time of 300 ps, drawing a large amount of transient current from the power/ground planes, they perceive the power/ground structure as a two-dimensional distributed network with significant delays. Only the portion of the plane charge located within a small radius of the switching logic will be able to supply the demand, and the voltage between the power/ground planes will vary at different locations. In this case, an ideal lumped capacitor model is obviously not going to account for the propagation effects. Two-dimensional distributed R-L-C circuit networks must be used to model the power/ground pair.
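The rule of thumb introduced above is easy to turn into a quick check. The following Python sketch estimates the one-way propagation delay of a stripline trace from its length and the board's dielectric constant, then compares the round-trip delay with the signal rise time. The FR-4 dielectric constant and the comparison threshold are assumptions chosen for illustration; they reproduce the chapter's 3 cm / ~200 ps example but are not prescribed by it.

```python
# Quick check: does a trace need a distributed (transmission-line) model?
C0_CM_PER_S = 3.0e10  # speed of light in vacuum, cm/s

def stripline_delay_ps(length_cm, er=4.3):
    """One-way propagation delay of a stripline, in picoseconds.

    A stripline wave travels at roughly c / sqrt(er); er = 4.3 is an
    assumed typical FR-4 value.
    """
    return length_cm * (er ** 0.5) / C0_CM_PER_S * 1e12

def needs_distributed_model(length_cm, rise_time_ps, er=4.3):
    """Apply the rule of thumb: compare the rise time to the round-trip delay."""
    round_trip_ps = 2 * stripline_delay_ps(length_cm, er)
    return rise_time_ps <= round_trip_ps

# The chapter's example: a 3 cm stripline (~200 ps one-way delay)
print(round(stripline_delay_ps(3.0)))          # ~207 ps
print(needs_distributed_model(3.0, 5000.0))    # False: 5 ns rise time, lumped model is fine
print(needs_distributed_model(3.0, 300.0))     # True: 300 ps rise time, distributed model needed
```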
In summary, as the current high-speed design trend continues, fast rise times reveal the distributed nature of package interconnects. Distributed circuit models need to be adopted to simulate the propagation delay in SI analysis. However, at higher frequencies even the distributed circuit modeling techniques are not good enough, and full-wave electromagnetic field analysis based on solving Maxwell's equations must come into play. As presented in later discussions, a trace will then not be modeled as a lumped resistor or an R-L-C ladder; it will be analyzed based upon transmission line theory, and a power/ground plane pair will be treated as a parallel-plate waveguide using radial transmission line theory.

Transmission line theory is one of the most useful concepts in today's SI analysis, and it is a basic topic in many introductory EM textbooks. For more information and selected reading materials, please refer to the Resource Center in Chapter 16.

From the above discussion, it can be seen that signal rise time is a very important quantity in SI issues, so a more expanded discussion of rise time will be given in the next section.
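Since reflection noise and transmission line theory both figure in the discussion above, a small worked example may help. The sketch below computes the voltage reflection coefficient at a resistive load terminating a line of characteristic impedance Z0, using the standard relation Γ = (ZL − Z0)/(ZL + Z0); the impedance values are illustrative assumptions, not cases analyzed in this chapter.

```python
def reflection_coefficient(z_load, z0=50.0):
    """Voltage reflection coefficient at a resistive load on a line of impedance z0."""
    return (z_load - z0) / (z_load + z0)

# Assumed examples: a matched load, an essentially open end, and a mismatched 75-ohm load
for zl in (50.0, 1e9, 75.0):
    print(zl, "->", round(reflection_coefficient(zl), 3))
```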
Signal Processing in Communications
Signal processing in communications plays a crucial role in ensuring the efficient and reliable transmission of information. It involves the manipulation and analysis of signals to extract useful information and minimize the effects of noise and interference. This is essential for various communication systems such as wireless networks, satellite communication, and digital audio/video transmission. Signal processing techniques are constantly evolving to meet the increasing demands for higher data rates, improved quality of service, and robustness in communication systems.

One of the key challenges in signal processing for communications is dealing with the effects of noise and interference. In any communication system, the transmitted signal is inevitably corrupted by various sources of noise such as thermal noise, interference from other signals, and distortions introduced by the transmission medium. Signal processing techniques such as filtering, equalization, and error correction coding are used to mitigate these effects and enhance the reliability of the communication system. For example, in wireless communication systems, adaptive equalization techniques are employed to combat the fading and multipath effects that can degrade the received signal quality.

Another important aspect of signal processing in communications is the need for efficient data compression and encoding. With the ever-increasing demand for high-speed data transmission, efficient compression and encoding techniques are essential to minimize the required bandwidth and storage capacity. This is particularly important for multimedia communication systems that involve the transmission of large amounts of audio, video, and image data. Signal processing algorithms such as the discrete cosine transform (DCT) and motion-compensated prediction are commonly used for video compression, while techniques like pulse code modulation (PCM) and adaptive differential pulse code modulation (ADPCM) are employed for audio compression.

Furthermore, signal processing plays a critical role in the design and implementation of modern communication systems that utilize advanced modulation and multiple access techniques. These techniques are essential for achieving high spectral efficiency and accommodating a large number of users in a communication system. For instance, in wireless communication systems, techniques such as orthogonal frequency-division multiplexing (OFDM) and code-division multiple access (CDMA) rely on sophisticated signal processing algorithms to enable efficient data transmission in the presence of multipath propagation and interference.

In addition to the technical aspects, the impact of signal processing in communications extends to various real-world applications and industries. For instance, in the field of telemedicine, signal processing techniques are used for the transmission and analysis of medical data such as electrocardiograms (ECG), electroencephalograms (EEG), and medical imaging. These techniques enable the remote monitoring of patients and the timely delivery of medical services, thereby improving healthcare accessibility and patient outcomes. Moreover, in the automotive industry, signal processing plays a crucial role in the development of advanced driver assistance systems (ADAS) and autonomous vehicles, where sensor data from radar, lidar, and cameras is processed to enable collision avoidance, lane keeping, and other safety-critical functions.
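As a concrete illustration of the PCM idea mentioned above, the following Python sketch samples a sine wave and applies uniform 8-bit quantization. The sample rate, bit depth, and test signal are arbitrary assumptions chosen only to show the sample-and-quantize steps, not parameters from any specific system described here.

```python
import math

def pcm_encode(samples, bits=8, v_max=1.0):
    """Uniformly quantize samples in [-v_max, v_max] to signed integer codes."""
    levels = 2 ** (bits - 1) - 1  # e.g. 127 for 8-bit signed codes
    codes = []
    for s in samples:
        s = max(-v_max, min(v_max, s))           # clip to the full-scale range
        codes.append(round(s / v_max * levels))  # map to an integer code
    return codes

# Assumed example: a 1 kHz tone sampled at 8 kHz for one period
fs, f0 = 8000, 1000
tone = [math.sin(2 * math.pi * f0 * n / fs) for n in range(8)]
print(pcm_encode(tone))
```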
From a societal perspective, signal processing in communications has a profound impact on the way people communicate, access information, and interact with technology. The advancements in signal processing have led to the proliferation of high-speed internet, seamless multimedia streaming, and ubiquitous connectivity through wireless networks. This has transformed the way individuals and businesses communicate, collaborate, and access digital content, leading to increased productivity, economic growth, and social connectivity. Moreover, signal processing has also played a significant role in bridging the digital divide by enabling affordable and accessible communication services in underserved and remote areas.

In conclusion, signal processing in communications is a vital and dynamic field that underpins the design, performance, and societal impact of modern communication systems. The ongoing advancements in signal processing techniques continue to drive innovation in various industries and enable new applications that enhance the way we communicate and interact with technology. As the demand for higher data rates, improved quality of service, and seamless connectivity continues to grow, signal processing will remain at the forefront of enabling the next generation of communication systems and services.
PCB Routing Terminology
1. Traces: PCB tracks or conductive paths that carry electrical signals between components.
2. Vias: Plated holes that connect different layers of the PCB, allowing traces to pass through.
3. Pads: Areas on the PCB used for component mounting and soldering.
4. Ground plane: A large area of copper or metal that provides a common ground for components.
5. Power plane: A similar concept to the ground plane, but carries power supply voltage instead.
6. Footprints: Patterns or outlines on the PCB that indicate where specific components should be placed.
7. Routing: The process of creating or designing the PCB traces and connections.
8. Clearance: The minimum distance required between traces or components to prevent unwanted electrical interference or short circuits.
9. Differential pair: A pair of traces that carry complementary signals, used for high-speed data transmission.
10. Signal integrity: The measurement of how well signals are preserved as they travel through the PCB.
11. Impedance control: Maintaining a specific impedance value for transmission lines to ensure proper signal integrity.
12. Grounding techniques: Strategies used to minimize noise and interference by properly grounding the PCB.
13. Crosstalk: The unwanted coupling of signals between traces that are in close proximity to each other.
14. EMI/EMC: Electromagnetic interference (EMI) and electromagnetic compatibility (EMC) are measures of how well a PCB design reduces or mitigates unwanted electromagnetic emissions and susceptibility.
15. SMT: Surface Mount Technology, a method of component mounting where components are directly soldered onto the PCB surface.
16. THT: Through-Hole Technology, a method of component mounting where component leads pass through holes in the PCB and are soldered on the opposite side.
17. BGA: Ball Grid Array, a type of IC package that uses an array of small solder balls for electrical connections.
18. Solder mask: A protective layer applied to the PCB, covering all areas except the pads, traces, and vias that require soldering.
19. Silkscreen: The layer of ink or etching used to label and identify components, test points, and other features on the PCB.
20. Gerber files: Standardized file formats used for PCB manufacturing, containing information about the layers, traces, and component placement.
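Several of the terms above (impedance control, signal integrity, differential pair) revolve around hitting a target trace impedance. As a rough illustration, the Python sketch below uses the widely quoted IPC-2141 microstrip approximation to estimate characteristic impedance from trace geometry; the dimensions and dielectric constant are assumed example values, and real stackup design should rely on a field solver or fabricator data rather than this formula.

```python
import math

def microstrip_z0(w_mm, h_mm, t_mm, er=4.3):
    """Approximate microstrip impedance (ohms) via the IPC-2141 formula.

    w_mm: trace width, h_mm: dielectric height, t_mm: copper thickness.
    Only a rough estimate, valid for roughly 0.1 < w/h < 2 and er < 15.
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Assumed geometry: 0.3 mm wide trace, 0.2 mm above the plane, 35 um copper
print(round(microstrip_z0(0.3, 0.2, 0.035), 1))   # roughly 50 ohms
```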
5G Application Areas and Scenarios
5G, or fifth-generation mobile communication technology, has revolutionized the telecommunications industry, providing unprecedented speed, ultra-low latency, and the ability to connect millions of devices simultaneously. Its impact is felt across various sectors, transforming the way we live, work, and interact.

1. Smart Cities
Smart cities are one of the prime beneficiaries of 5G technology. With its high-speed data transmission and low latency, 5G enables efficient monitoring and management of city infrastructure, including traffic, energy, and public safety systems. It also enables real-time data analysis, allowing cities to make informed decisions and improve services.

2. Autonomous Vehicles
Autonomous vehicles rely on a constant stream of data to navigate and make decisions. 5G's high speed and low latency ensure reliable and secure communication between vehicles, infrastructure, and other road users, enabling safer and more efficient travel.

3. Industrial Automation
5G's ability to connect a vast number of devices simultaneously enables industrial automation and smart manufacturing. It allows for real-time monitoring and control of machines and processes, improving efficiency, reducing waste, and enhancing worker safety.

4. Healthcare
Healthcare is another area where 5G has made significant progress. Remote surgeries, real-time patient monitoring, and telemedicine are now possible with 5G's high-speed and low-latency connections. This not only improves patient care but also reduces healthcare costs and increases access to services.

5. Media and Entertainment
5G's high-speed data transmission enables seamless streaming of high-definition videos and games. It also opens the door to new forms of media, such as cloud gaming and virtual reality (VR) experiences, providing a more immersive and engaging user experience.

6. IoT (Internet of Things)
Communication Technology: An Overview

Introduction
In today's digital age, communication technology plays a crucial role in connecting people, businesses, and devices. From traditional landline telephones to modern smartphones and internet-based communication platforms, the field of communication technology has witnessed significant advancements. This article aims to explore various aspects of communication technology, including its history, types, applications, and future trends.

History of Communication Technology
1. Early forms of communication technology
– Smoke signals
– Carrier pigeons
– Semaphore telegraphs
2. Invention of the telegraph
– Samuel Morse and Morse code
– Telegraph lines and the expansion of communication networks
3. Telephone revolution
– Alexander Graham Bell and the invention of the telephone
– Introduction of telephone exchanges and switchboards
4. Birth of wireless communication
– Guglielmo Marconi and radio transmission
– Wireless telegraphy and its impact on long-distance communication

Types of Communication Technology
1. Wired communication technology
– Traditional landline telephones
– Ethernet cables for internet connectivity
– Fiber optic cables for high-speed data transmission
2. Wireless communication technology
– Radio communication
– Cellular networks (2G, 3G, 4G, and 5G)
– Satellite communication
3. Internet-based communication technology
– Email and instant messaging
– Voice over Internet Protocol (VoIP)
– Video conferencing and online collaboration tools

Applications of Communication Technology
1. Personal communication
– Mobile phones and smartphones
– Social media platforms
– Messaging apps
2. Business communication
– Teleconferencing and virtual meetings
– Cloud-based collaboration tools
– Customer relationship management (CRM) systems
3. Healthcare communication
– Telemedicine and remote patient monitoring
– Electronic health records (EHR)
– Communication systems in hospitals and healthcare facilities
4. Communication in education
– Online learning platforms
– Virtual classrooms and webinars
– Educational apps and software

Future Trends in Communication Technology
1. Internet of Things (IoT)
– Interconnected devices and sensors
– Smart homes and cities
2. 5G and beyond
– Faster data speeds and lower latency
– Enhanced connectivity for autonomous vehicles and IoT devices
3. Artificial Intelligence (AI) in communication
– Chatbots and virtual assistants
– Natural language processing for improved communication
4. Blockchain technology in communication
– Secure and transparent data exchange
– Decentralized communication networks

Conclusion
Communication technology has evolved significantly over the years, revolutionizing the way we connect and interact. From the early forms of communication to the advent of wireless and internet-based technologies, communication has become faster, more efficient, and accessible to a wider audience. As we look towards the future, emerging trends such as IoT, 5G, AI, and blockchain promise to further transform the field of communication technology, opening up new possibilities and opportunities for individuals, businesses, and society as a whole.
Professional English for Communication Engineering

Part I: Terms
1. Time Division Multiple Access (TDMA)
2. General Packet Radio Service (GPRS)
3. International Telegraph and Telephone Consultative Committee (CCITT)
4. Synchronous Digital Hierarchy (SDH)
5. Frequency Hopping Spread Spectrum (FHSS)
6. Synchronous Transport Module (STM)
7. Integrated Services Digital Network (ISDN)
8. Metropolitan Area Network (MAN)
9. Transmission Control Protocol / Internet Protocol (TCP/IP)
10. Quality of Service (QoS)
11. Trunk line
12. Transmission rate
13. Network management
14. Frame structure
15. Mobile phone; handset
16. Cellular switch
17. Antenna
18. Microprocessor
19. International roaming
20. Short message
21. Signal-to-Noise Ratio (SNR)
22. Digital communication
23. System capacity
24. Cellular network
25. Handover
26. Internet
27. Modem
28. Spectrum
29. Mouse
30. Electronic mail (e-mail)
31. Subnet
32. Software-defined radio
33. Network resources

Part II: Abbreviations and terms
1. Mobile communication
2. Computer user
3. Frame format
4. WLAN: Wireless Local Area Network
5. Communication protocol
6. Transmission quality
7. Remote terminal
8. International standard
9. GSM: Global System for Mobile Communications
10. CDMA: Code Division Multiple Access
11. ITU: International Telecommunication Union
12. PCM: Pulse Code Modulation
13. WDM: Wavelength Division Multiplexing
14. FCC: Federal Communications Commission
15. PSTN: Public Switched Telephone Network
16. NNI: Network Node Interface
17. WWW: World Wide Web
18. VOD: Video on Demand
19. VLR: Visitor Location Register
20. MSC: Mobile Switching Centre
21. HLR: Home Location Register
22. VLSI: Very Large Scale Integration circuits
23. Bluetooth technology
24. Matched filter
25. ADSL: Asymmetric Digital Subscriber Line
26. GPS: Global Positioning System
27. ATM: Asynchronous Transfer Mode

Part III: Sentences
1. PCM is dependent on three separate operations: sampling, quantizing and coding.
2. Protocols like TCP and IP have already been designed, of course, but they need to be updated and maintained.
3. One form of digital wideband operation which has good future potential is CDMA.
4. The Internet is the largest repository of information and can provide very large network resources.
5. One of many reasons for developing a cellular mobile telephone system and deploying it in many cities is the operational limitations of the conventional mobile telephone system: limited service capability, poor service performance, and inefficient frequency spectrum utilization.
6. Timing signals from the interface assembly at the transmitter are applied to the data modem to synchronize the computer and the data set.
At the receivers synchronizationpulses are derived from the data stream to synchronize the computer.7、我们正处于通信网络革命的开始,越来越大的容量需求,各种各样的应用,以及服务质量正在对光网络提出巨大的需求;We are at the beginning of a revolution in communications networks, where increasing capacity, variety of applications, and quality of service are placing enormous demands on the optical network.8、目前在欧洲正在开发第三代移动通信系统,其目的是要综合第二代系统的所有不同业务并覆盖更广泛的业务话音、数据、视频、多媒体范围,而且还要与固定电话网络的技术发展保持一致和兼容;The third generation mobile communication system currently being developed in Europe, intended to integrate all the different services of the second generation system andcover a much wider range of broadband services Voice、data、video、multimediaconsistent and compatible with technology developments taking place within the fixed telecommunication networks.9、在现代对多媒体的描述中,我们所具有的技术已开始向人类具有的能力迈进;通过使用计算机技术、软件和其他技术,如CD-ROM;不仅能将视频图像和音频综合在一起,而且可以与其他计算机用户进行交互工作;In the modern presentation multimedia we now have the technology that can begin to move towards an ability already held by the human beings in that by using computer, technology, software and other technology, such as CD-ROM not only can we bring together audio and visual images but interactive with other computer users.10、为弥补这些问题,要在基本结构中加入大量的高效中继线;高效中继线用于直接连接有大量的节点间通信业务量的各交换中心;通信量常常是通过网络中可用的最低层次传送的;To compensate for these problems, a large number of high usage trunks augment the basic architecture. High-usage trunks are used for direct connection between switching centers with high volumes of internodes traffic. Traffic is always routed through the lowest available level of the work.11、光纤网络的革命刚刚开始,并正快速向未来的带宽无线、可靠和低费用的在线世界发展;The revolution of optical network is just beginning, and is advancing very swiftly towarda future online world in which bandwidth is essentially unlimited、reliable and low-cost.12、实际设备通常使用8kHz的采样速率,而如果每个样值用8位码的话,则话路是由一个重复速率64kHz的脉冲流来表示;Practical equipments, however, normally use a sampling rate of 8 kHz, and if 8 digits per sample value are used the voice channel becomes represented by a stream of pulses with a repetition rate of 64 kHz.13、是由于SDH设备在这些方面的标准化,才提供了网络运营者所期望的灵活性,从而能低价高效地应付带宽方面的增长并为后十年中出现的新的用户业务作好准备;It is the standardization of these aspects of SDH equipment that will deliver the flexibility required by network operators to cost effectively manage the growth in bandwidth and provisioning of new customer services expected in the net decade.四、英译汉1、The cellular concept is defined by two features, frequency reuse and cell splitting. Frequency reuse comes into play by using radio channels on the same frequency in coverage areas that are far enough apart not to cause co-channel interference. This allows handling of simultaneous calls that exceed the theoretical spectral capacity. Cell splitting is necessary when the traffic demand on a cell has reached the maximum=m and the cell is then divided into a micro-cellular system. The shape of cell in a cellular system is always depicted as a hexagon and the cluster size can be seven, nine or twelve. 蜂窝的概念由两个特点决定,频率复用和小区分割;在相邻覆盖区域足够远的而不至于引起共用信道干扰的,通过使用同一频率的无线信道,频率再利用才起作用;这样可以出来同时出现的呼叫并超出理论频谱容量;当小区的业务需求增到最大时,就要被划分,小区就要被划分成更小的蜂窝系统区域,蜂窝系统的小区形状常被描述成六边形,一群小区数量可以是9个或12个;2、Before you can use the Internet, you must choose a way to move data between the Internet and your PC. This link may be a high-speed data communication circuits, a local area network LAN, a telephone line or a radio channel. Most likely, you will use a Modem attached to your telephone line to talk to the Internet. 
Naturally, the quality of your Internet connection and service, like many other things in life, is dictated by the amount of money you are willing to spend.Although all these services can well satisfy the needs of the users for information exchange, a definite requirement is needed for the users. Not only should the users know where the resources locate, but also he should know some operating commands concerned.在使用Internet之前,必须使用一种方法在呢的PC机和Internet之间传递数据,这种连接的链路可以是高速数据通道电路,局域网,电话线路或无线信道;最有可能的是,你使用Modem连到电话线上与Internet对话;当然像生活中许多其他事物一样,与Internet连接服务和质量是由你花钱的数量来决定的;虽然这些所有的服务可以很好满足用户交换信息需要,但用户仍旧需要具有特定的先决条件,用户不但要知道信息资源的位置而且要知道一些有关的操作命令;3、Packet switching achieves the benefits discussed so far and offers added features. It provides the full advantage of the dynamic allocation of the bandwidth, even when messages are long. Indeed, with packer switching, many packets of the same message may be in transmission simultaneously over consecutive links of a path from source to destination, thus achieving a ‘pipelining’ effect and reducing considerably the overall transmission delay of the message as compared to message switching. It lends to require smaller storage allocation at the intermediate switches. It also has better error characteristics and leads to more efficient error recovery procedures, as it deals with smaller entities. Needless to say, packer switching presents design problems of its own, such as the need to reorder packets of a given message that may arrive at the destination node out of sequence.分组交换除具有以上讨论的有点之外,还具有一些特点;它提供带宽动态分配的全部优势,甚至当报文很长时依然如此,由于有分组交换,一个报文的多个分组确定可以通过原点到终点通路中多个链路同时传送,因而道道“管道”传送的效应,与报文交换相比,他大大的减少了报文整体传送时延;在中间交换设备中这种方式只需要较小的存储分配区域;分组交换的误码特性较好,由于它只涉及很短的长度,因而导致更高效的纠错方式;当然,分组交换也有自身设计上的麻烦,例如,当报文无序到达目的节点时,需要重新对该报文进行分组排序;4、As more and more systems join the Internet, and as more and more forms of information can be converted to digital form, the amount of stuff available to Internet userscontinues to grow. At some points very soon after the nationwide Internet started to grow, people began to treat the Net as a community, with its own tradition and customs.随着越来越多的系统连接到互联网,越来越多的信息被转化为数字形式,互联网用户可利用的资料持续增长,在不久的将来当国内互联网的规模增加到一定程度时,人们开始把互联网当做社区对待,并且拥有自己的传统和习惯;5、Three components are involved in a basic optical fiber system: the light source, the photo detector, and the optical transmission line. The optical light source generates the optical energy which serves as the information carrier, similar to a radio wave source supplying electromagnetic energy at radio wave wavelengths as the information carrier.The optical photo detector detects the optical energy and converts it into an electrical form. The optical fiber transmission line is the equivalent of copper wires and functionas the conductor of optical energy.基本的光纤系统涉及到3个部分:光源、光检测器和光缆;光源产生的能力作为信息的载体,类似于具有电磁能的无线电波用波长作为信息载体,光检测器检测光能并将其转化为电能的形式,光缆等效于铜线具有传导光能的作用;6、The fixed telephone service is global and the interconnection varies from coaxialcable to optical fiber and satellite. The national standards are different, but with common interfaces and interface conversion, interconnection can take place. For mobilethe problem is far more complex, with the need to roam creating a need for complex networks and system. Thus in mobile the question of standards is far more crucial to success than fixed systems.The GSM system is based on a cellular communications principle which was first proposedas a concept in the 1940s by Bell System engineers in the U.S. 
The idea came out of the need to increase network capacity and got round the face that broadcast mobile networks, operating in densely populated areas, could be jammed by a very small number of simultaneous calls. The power of the cellular system was that it allowed frequency reuse.全球范围内的固定电话通过同轴电缆、光缆和卫星相连;尽管各国的通信标准不同,但是由于有共同的接口和接口转换设备,互连问题得以解决;对于移动通信存在漫游问题,网络和系统就比较复杂,所以移动通信的标准问题就更关键;1940年有美国贝尔系统工程师提出的蜂窝通信原理是GSM系统的基础,其思路是在人口密集的区域由于一些同时发起的呼叫导致网络拥塞,采用蜂窝系统可以增加系统的容量,蜂窝系统的优点在于频率的重复利用;7、The advantage of SDH are mainly reflected in the following:1Lower network element costs: With a common standard, compatible equipment will be available from many vendors.In a highly competitive market prices will be vary attractive. 2 Better network management: With better network management, operators will be able to mote efficientlyuse the network and provide better service. The concept of TMNTelecommunication Management Networksis under study by CCITT. Some TMN standards defining management system interfaces already Faster provisioning: If new circuits can be software defined to use existing spare bandwidth then provisioning will be mush faster. The only new connection needed will be from the customer’s premises to the nearest network access node.SDH的优点是:1、较低的网络成本:由于具有共同的标准,许多供应商提供的设备可以兼容,激烈的时长竞争可以降低成本,2、更好的网络管理:运营商有能力提供更有效的业务,电信网管标准正在由CCITT指定,定义管理接口的相关标准已经出台;3、快速准备:如果新的电路可被软件来定义,利用现有的空闲频段从用户到最近的网络节点,接入的准备工作将更快;8、If we consider binary transmission, the complete information about a particular message will always be obtained by simply detection the presence or absence of the pulse. By comparison, most other forms of transmission systems convey the message information using the shape, or level of the transmitted signal; parameters that are most easily affected by the noise and attenuation introduced by the transmission path. Consequentlythere is an inherent advantage for overcoming noisy environments by choosing digital transmission.Noise can be introduced into transmission path in many different ways; perhaps via a nearby lightning strike, the parking of a car ignition system, or the thermal low-level noise within the communication equipment itself. It is the relationship of the truesignal to the noise signal, known as the signal-to-noise ratio, which is of most interest to the communication engineer.如果进行二进制传输,通过检测脉冲的有无就可获得报文的全部信息,相比之下,其他传输系统携带信息是通过波形和信号电平,这些参数容量收到噪声和传输信道衰减的影响,因此数字通信具有克服噪声环境的内在优势;噪声可能通过不同的方式进入传输信道,附近的闪电、汽车点火系统或者通信设备内部的热噪声,通信工程师最感兴趣的是信噪比;9、Most of the subscribers on the network ate telephones. The telephone contains a transmitter and receiver for converting back and forth between analog voice and analog electrical signals. With the introduction of the digital data system, some subscribers that transmit digital signals have been incorporated into the network.网络中大多数用户都是电话机;电话机包括一个送话器和一个受话器,用料在模拟话音和模拟电信号之间进行变换;随着数字、数据系统的引入,有些传输数字信号的用户已经并入该网;。
Serial-Parallel Conversion in Communications
In the field of communication theory, one of the fundamental concepts is the transformation between serial and parallel data transmission, also known as serial-to-parallel and parallel-to-serial conversion. This transformation process is essential in many communication systems to transmit data efficiently and effectively. By converting data between serial and parallel formats, it becomes possible to take advantage of the benefits of each type of transmission. Serial data transmission involves sending bits of data one after the other along a single communication channel. This method is often used when there is a need for long-distance communication or when bandwidth is limited.
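To make the two directions of the conversion concrete, here is a minimal Python sketch of serial-to-parallel and parallel-to-serial conversion on lists of bits; the word width and bit ordering are assumed choices made only for illustration.

```python
def serial_to_parallel(bits, width=8):
    """Group a serial bit stream into parallel words of `width` bits each."""
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def parallel_to_serial(words):
    """Flatten parallel words back into a single serial bit stream."""
    return [bit for word in words for bit in word]

stream = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
words = serial_to_parallel(stream, width=8)
print(words)                                   # two 8-bit words
print(parallel_to_serial(words) == stream)     # True: the round trip preserves the data
```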
FlexCore™ Optical Distribution Frame – Ordering Guide

FlexCore at a Glance
Every connection counts. With the use of cutting-edge technologies in a number of environments, network dependency has transitioned from the era of "mission" critical to "life" critical, from access points back to the data center. Simultaneously, high-speed data transport has accelerated from 1G to 400G and is expected to quadruple in the near future.

What is at your core?
Optical distribution frames (ODFs) are an all-important network element at the heart of a fiber network. Representing less than 5% of a typical IT project investment, high density, performance, and quality are pivotal attributes for an ODF, ensuring business continuity 24 hours a day, seven days a week, 365 days a year. The FlexCore Optical Distribution Frame offers the ultimate in flexibility, manageability, scalability, and security.

A Few Benefits of the FlexCore ODF
• Highly intuitive cable routing paths remove guesswork and prevent "rip and replace" costs
• Innovative cable management and a lockable vertical cable manager door eliminate circuit risk and downtime
• Data center floor space can be reduced by 50%†

Common Frame Arrangements
• Single cross connect
• Double cross connect, back-to-back, fully enclosed (recommended patch cord length is 4 m)
• Double cross connect frame, compact (recommended LC Uniboot patch cord length 4 m)
• Interconnect or splice-through frame with end panels

A tool-free adjustable spool system optimizes pathway space as a network evolves. Tray rails can be added to suit four different cassette form factors (widths); not suitable for HD Flex™ cassette styles with both front- and rear-facing connectivity. An optional 4 RU enclosure lock and vertical cable manager door lock are available. Pay as you grow: purchase incremental amounts for your existing setup in a scalable, agile way, as needed.

STEP 1: Select a Frame
Options include the single frame cross connect (LH, or assembled as opposite hand, RH), the double frame (compact) cross connect, and splice-through/interconnect frames. The 150 mm VCM illustrated is supplied with 4 cable management plates; the VCM can be assembled for top or bottom cable entry/exit.
† For a 2100 mm wide double cross connect frame (where both LH and RH versions are needed), double the quantity shown. †† Width and depth stated exclude end panel(s) and door(s). *Change "GY" to "BL" in the part number for black. **Maximum fiber capacity (stated) applies to single, double, and quad (quad = double frame placed back-to-back) frame arrangements in combination with 24F LC cassettes. If using a line-up of more than two frames side by side, the maximum number of 4 RU enclosures is 10 (2880F per frame) to allow the bottom patch cord pass-through (trough) to be used.

STEP 2: Select a Surround
Surround options include the end panel, 150 VCM door (LH or RH), 300 VCM door (LH or RH), and 600 perforated door (LH or RH). LH = left hinged, RH = right hinged. Part numbers shown in the ordering tables include FDU, FD6045PRGY, and FDFELS. *For multi-rack lineups an end panel is only required at the end of each row. **Change "GY" to "BL" in the part number for the part in black.

STEP 3: Select a FlexCore Enclosure
The catalogue illustrations show a 4 RU enclosure assembled with FlexCore cassettes; associated part numbers include FDS9N-24-10P, FDSN-24, FDC9N-24-10ASLH, FDC9N-24-10ASRH, FHP9N-LA1X04, FHP9N-LA1X08, and FHP9N-LA1X16.
The stub on the far end of the cable can be confidently deployed thanks to the patented cassette-cable strain relief feature, then spliced or connectorized into the required system.

Features:
• Panduit 24F small-diameter indoor cable
• One end connectorized and fully tested
• Patented cable strain relief feature
• Dry cable construction – 24 × 250-micron fibers

Pre-Configured Lengths:
• 5 m to 30 m in 5 m increments
• 30 m to 100 m in 10 m increments
• 10 to 100 feet in 10-foot increments
• 100 to 300 feet in 50-foot increments
Contact your Panduit sales representative for alternatives not shown. (Related part numbers: FDUSP, FDFMX5KITBL, FOBKF144YM1 (144F), FOBKF288YM1 (288F), FDFSPLKITBL.)

Patch Cords – Uniboot LC Duplex, 2 Fiber, 2.0 mm Round Jacket
Are you familiar with our QuickNet Fiber Trunk Cable Assemblies? Built with modular MPO connectivity, these cable assemblies allow for rapid deployment of high-density permanent links in a single assembly for data center applications requiring quick infrastructure deployment, such as main, horizontal, and zone distribution areas. Trunk cable assemblies optimize cable routing requirements to ensure efficient use of pathway space and significantly reduce installation time and cost. Click below to visit the online configurator.

Panduit Corp. World Headquarters, Tinley Park, IL 60487
**************
US and Canada: 800.777.3300 | Europe, Middle East, and Africa: 44.20.8601.7200 | Latin America: 52.33.3777.6000 | Asia Pacific: 65.6305.7575
All trademarks, service marks, trade names, product names, and logos appearing in this document are the property of their respective owners.
©2023 Panduit Corp. ALL RIGHTS RESERVED. FBCB58-SA-ENG 6/2023
The SD Series is a range of surge protection devices combining unparalleled packing densities, application versatility, proven reliable hybrid circuitry, simple installation and optional 'loop disconnect' facilities – features which make the series the ultimate surge protection solution for process equipment, systems I/O and communications networks.

The exceptionally high packing densities are the consequence of an ultra-slim 'footprint' for individual modules, which can thus 'double up' as feedthrough terminals. Each module provides full hybrid surge protection for 2- and 3-wire loops. Modules with a comprehensive range of voltage ratings cover all process-related signals such as RTDs, THCs, 4 to 20 mA loops, telemetry outstations, shut-down systems and fire and gas detectors. The optional 'loop disconnect' is a feature which allows commissioning and maintenance to be carried out without removal of the surge protection device. This facility is provided by the SD07, SD16, SD32 and SD55 units. In addition, a third connection on the field and safe sides of the protector is provided in order to terminate screens safely.

For three-wire applications, the specially designed SD RTD (Resistance Temperature Detector) unit and the SD32T3 (for separately powered 4-20 mA loops) provide full 3-wire protection in a single compact unit. The recommended choice for the protection of 3-wire pressure transducers on low power circuits is the SD07R3. For higher bandwidth applications, the SDR series has been developed to meet the demands of today's highest speed communication systems. 120 V and 240 V AC versions are available for I/O and power supplies up to three amps of load current. Telephone networks can be protected by the SD PSTN.

One simple manual operation clamps modules securely onto DIN rail, which automatically provides the essential high-integrity earth connection. 'Top-hat' (T-section) DIN rail is generally suitable for mounting SD modules, although for adverse environments a specially plated version is available from MTL Surge Technologies. A comprehensive range of mounting and earthing accessories can also be supplied; see page 7 for further details.

SD Series – ultra-slim, user-friendly devices for protecting electronic equipment and systems against surges on signal and I/O cabling:
• Range of ATEX certified intrinsically safe surge protectors
• Ultra-slim, space-saving design; easy installation
• Multistage hybrid protection circuitry – 10 kA maximum surge current
• Range of voltage ratings to suit all process I/O applications
• High bandwidth, low resistance, RTD, PSTN and 3-wire transmitter versions available

Guide to applications and selection

Analogue inputs (high-level): 2-wire transmitters, 4-20 mA, conventional and smart
The SPDs recommended for use with 'conventional' and 'smart' 4-20 mA transmitters (fed by a well-regulated supply) are the SD32 and SD55, the choice depending upon the maximum working voltage of the system (32 V and 55 V respectively). The diagram illustrates a prime example of an application for which the fuse/disconnect facility is particularly useful; however, both models are available in 'X' versions without the optional fuse/disconnect feature.

Analogue inputs (low-level): RTDs
These applications are best served using the SD RTD. For optimum accuracy, the energising current should be chosen to ensure the voltage across the RTD does not exceed 1 V over the full measurement range.
When using a PT100 device, we recommend an energising current of 1 mA.

AC sensors, photocells, THCs, mV sources and turbine flowmeters
The SD07 or SD16 (depending upon the operational voltage) are the favoured choices for this application. The SD07X and SD16X are also suitable.

(Wiring diagrams, not reproduced here, show the field-circuit and protected-circuit terminal connections for these applications, using the SD07/SD16/SD32/SD55 modules and their 'X' versions, the SD32X for LED alarms, the SD PSTN for incoming telephone lines to a modem, fax or telephone, and the SD150X/SD275X for 110/120 V or 220/240 V AC outputs from PLC, DCS or SCADA equipment.)

Analogue outputs: controller outputs (I/P converters)
For this application, the recommendations are the SD16, SD32 and SD55 (and the equivalent 'X' versions), the final choice depending upon the operating voltage.

Digital (on/off) inputs: switches
Suitable SPDs for switches include the SD07, SD16, SD32 and SD55 modules – the choice depending upon the operating voltage of the system. The 'X' versions of these are also suitable.

Digital (on/off) outputs: alarms, LEDs, solenoid valves, etc.
The recommended choice for this application is the SD32 or SD32X.

Telemetry (PSTN): telemetry outstations
The SD PSTN has been designed specifically for the protection of signals transmitted on public switched telephone networks.

AC supplied equipment: PLC, I/O systems
For systems on 110-120 V AC, the SD150X is the recommended choice, and for 220-240 V AC systems the SD275X is recommended.

2-wire transmitters or sensors: 4-20 mA transmitters, conventional and smart
Where the TP48 is not an acceptable solution, either because of technical suitability or difficulties in mounting, the SD16X, SD32X and SD55X are an excellent alternative.

3-wire transmitters or sensors
Vibration sensors and 4-20 mA loop process control systems invariably require three-wire connections when powered from an external source. This may be accomplished in one unit by using the SD32T3 three-terminal Surge Protection Device (SPD). Because the SD32T3 protects all three conductors within the same unit, higher protection is achieved, as the SPD hybrid circuitry is common to all three wires. The SD07R3 is available for the protection of 3-wire pressure transducers on low power circuits.

4-wire transmitters or sensors: flow meters, level detectors, etc.
4-wire systems such as level detectors require two SDs, one for the supply and the other for the transmitter output. Generally the voltages across the pairs are similar, so the recommended choice would be a pair of SD16X, SD32X or SD55X units. However, mains-powered transmitters should be protected with an SD150X or SD275X (depending upon supply voltage) for the supply inputs. Load cells are catered for by MTL Surge Technologies' LC30, which is suitable for both 4- and 6-wire load cells.

These characteristics make the SD series an obvious choice for transmitter protection. The SDs within a junction box should be installed no further than one metre from, and as close as possible to, the sensor or transmitter they are protecting. A bond is required from the general mass of steelwork to the sensor or transmitter housing, either using a short flat braid or a cable of at least 4 mm² cross-sectional area. In most instances this bond is automatically made by fixing the metallic transmitter housing to the plant structure. This bond ensures the voltage difference between the signal conductors and the transmitter housing is below the transmitter's insulation rating.
Please note that the transmitters or sensors are connected to the 'Protected Equipment' terminals of the SD and not the 'Field Cables' terminals.

(Connection diagrams, not reproduced here, show TP48 and SD32R (no fuse) modules installed between the protected field circuit and the protected host circuit.)

Communication systems protection
High-speed data links between buildings, or from one part of a plant to another, have become more common with the widespread use of smart transmitters and the increase in unmanned installations. The SD series has an SPD suitable for all process I/O applications, with a choice of low-resistance units, high bandwidth and a variety of voltage variants. The SDR series has been specially designed to meet the requirements of high-speed data links with an extremely high bandwidth.

Communication systems: RS232, RS422, RS485
The recommended choice for these applications is the SD16R or SD32R, depending on the maximum driver signal.

Bus powered systems
There are a variety of bus-powered systems specially designed for the process industry. The ideal surge protection device for these systems is the SD32R, as it has a very high bandwidth and a modest in-line resistance.

Typical Applications
Table 1 shows suitable SD devices for different applications. In some applications alternative devices may be used, for example where lower in-line resistance or a higher voltage power supply is used. Telematic have operationally tested the recommended SD series with the representative highways listed, but no formal approval for their use in systems by the respective bodies has been sought.

Table 1: Application – Preferred SPD (Alternative)
• Allen Bradley Data Highway Plus – SD16R
• Foundation Fieldbus, 31.25 kbit/s voltage mode – SD32R; 1.0/2.5 Mbit/s – SD55R
• HART – SD32X (SD32, SD32R)
• Honeywell DE – SD32X (SD32, SD32R)
• LonWorks FFT-10 – SD32R; LPT-10 – SD55R; TP-78 – SD07R; IS78† – SD32R
• Modbus & Modbus Plus (RS485) – SD16R
• PROFIBUS DP – SD32R; PROFIBUS PA (IEC 1158, 31.25 kbit/s) – SD32R
• RS232 – SD16 (SD16X)
• RS422 – SD07R
• RS423 – SD07R
• RS485 – SD07R
• WorldFIP (IEC 1158), 31.25 kbit/s voltage mode – SD32R; 1.0/2.5 Mbit/s – SD55R

The SDs should be mounted on the field wiring side to ensure that any surges entering from the field do not damage any intrinsically safe barriers or galvanic isolators in the system. The SDs and IS interfaces should be mounted close to each other, but on separate DIN rails, in order to maintain the required 50 mm clearance between safe-area and hazardous-area terminals.

Earthing
The recommended earthing for field-mounted devices has been illustrated previously, but it is the earthing at the control panel that is more critical, as there are usually a number of earthing systems, each with its own requirements. The earthing system illustrated here replaces the instrument 0 V bond, the control system PSU bond and the IS earth with one single earth connection, to meet all the design requirements and give the most effective protection against the effects of lightning-induced surges.

The risks to Zone 0 are considered real enough to require preventative measures. IEC 60079-14 (1996-12), Electrical apparatus for explosive gas atmospheres – Part 14: Electrical installations in hazardous areas (other than mines), stresses the importance of SPDs in hazardous areas. An outdoor installation where there is a high likelihood of both lightning-induced transients and combustible gases requires the installation of SPDs to prevent possible ignition of the gases.
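The selection logic of Table 1 is easy to capture in a small lookup, which can be handy when scripting a bill of materials. The Python sketch below encodes a few of the table's rows; the dictionary name and the fallback behaviour are assumptions made for illustration, and the table in the datasheet remains the authoritative source.

```python
# A few rows of Table 1 encoded as application -> (preferred SPD, alternatives)
SPD_SELECTION = {
    "HART": ("SD32X", ("SD32", "SD32R")),
    "Honeywell DE": ("SD32X", ("SD32", "SD32R")),
    "Modbus / Modbus Plus (RS485)": ("SD16R", ()),
    "PROFIBUS DP": ("SD32R", ()),
    "RS232": ("SD16", ("SD16X",)),
    "RS485": ("SD07R", ()),
}

def preferred_spd(application):
    """Return the preferred SPD for an application, or None if it is not listed."""
    entry = SPD_SELECTION.get(application)
    return entry[0] if entry else None

print(preferred_spd("HART"))         # SD32X
print(preferred_spd("PROFIBUS DP"))  # SD32R
```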
Areas seen particularly at risk include flammable liquid storage tanks, effluent treatment plants, distillation columns in petrochemical works and gas pipelines.SPDs for transmitter protection should be installed in Zone 1 but sufficiently close to the Zone 0 boundary to prevent high voltages entering Zone 0. The distance from the SPD to Zone 0 should be less than one metre where possible. However, in practice the SPD would normally be mounted on the transmitter or sensor housing which usually lies in Zone 1and is very close to Zone 0. Because there is only a very small free volume, the SD Series is suitable for mounting in flameproof or explosion proof enclosures.Zone 2The SD series is suitable for protecting electrical circuits in Division 2, Zone 2 and can be used without affecting the safety aspects of the circuit.Non-incendive (low-current) circuits can be protected using any SD series unit mounted in either the safe or hazardous area including those with the fuse disconnect facility. Non arcing (high current) circuits can also be protected except that SPDs with the fuse disconnect facility may only be mounted in the safe area. F or use in these circuits the units must be mounted in a suitable enclosure, normally the minimum requirements are IP54 and 7Nm resistance to impact. The SD series is self certified by Telematic Ltd as being suitable for this purpose.CertificationIntroducing surge protection into Intrinsically Safe (IS) circuits is trouble free as long as the current and power parameters are not exceeded. In the SD Series, the SD**X, SD**R,SD**R3, SD RTD and SD**T3 all have ATEX certification for use in IS circuits located in Zones 0, 1 or 2. The certification parameters for the SD**X and SD**T3 are:EEx ia IIC T4, Li = 0.22mH Ii = 260mA for Ui up to 20V Ii = 175mA for Ui up to 26V Ii = 140mA for Ui up to 28V Ii = 65mA for Ui up to 60VThe certification parameters for the SD**R,SD**R3 and SD RTD are:EEx ia IIC T4, Li = 0Ii = 260mA for Ui up to 60VThe power rating for each of the above is dependent on the table shown below.Pi = 1W (–30°C to +75°C)Pi = 1.2W (–30°C to +60°C) Pi = 1.3W (–30°C to +40°C) The SD** Series are classifed as simple apparatus and are intended for use in Zone 2 or safe areas only, because their fuses are not fully encapsulated.SD Series mounting kits and accessories The SD Series has a full range of mounting kits and accessories to simplify installation and tagging of individual loops. Insulating spacers (ISP7000) are available to allow mounting of the units onto backplanes without compromising correct earthing practice.These are placed at regular intervals along the rail or at each end as required. Earth connections can be made to the DIN rail via the earth terminal (ETL7000). Weatherproof enclosures are also available with all the necessary mounting accessories to install the SD series surge protection devices.Two tagging systems are available. One consists of tagging strips (TAG57) with labels (TGL57) mounted on posts (IMB57) at each end of a row of surge protection devices (SPDs). The other consists of separate tagging identifiers (BRI7000) mounted on the tops of individual SPDs. Both methods can be used conjointly. Replaceable fuses or solid links are available in packs of 5 (SD-F25, SD-F05 and SD-LNK). 
7BRI700012BIL7000/BIL7000LSpecification(all figures typical at 25°C unless otherwise stated)Note: all figures are typical at +25°C unless otherwise stated; *standard fuse; +over full working temperature range; †at 20mA with a 250mA standard fuse; ‡these units need external 3A fuses; ^Signal; **Power & Common; maximum energising current depends upon RTD resistance.ProtectionFull hybrid line to lineEach line to screen/groundNominal discharge surge current (I n )10kA (8/20µs),(not applicable to SD150X and SD275X)Nominal discharge surge current (I n )6.5kA (8/20µs),(SD150X and SD275X only)Reaction time (T a )Within nanoseconds (10–9s)RTD resistance range (SD RTD )10 to 1500ΩDegradation accuracy (SD RTD at 1mA)0.1% (RTD resistance >100Ω)0.1Ω(RTD resistance < 100Ω)Ambient temperature–30°C to +75°C (working)–40°C to +80°C (storage)Humidity5 to 95% RH (non-condensing)Terminals2.5mm 2(12 AWG)MountingT-section DIN-rail(35 x 7.5 or 35 x 15mm rail)Weight70g approximately Case flammability UL94 V-2EMC complianceTo Generic Immunity Standards, EN 50082, part 2 for industrial environments R&TTE complianceEN 50082-2 : 1995EN 41003 : 1999EN 60950 : 1992(not applicable to SD150X and SD275X)LVD complianceSD150X & SD275X EN 60950 : 1992EN 61010 : 1995SD PSTNEN 41003 : 1999IEC complianceEN 61643-21:2001To order specify -Order by module, as listed in the specification table and/or accessory part numbers as defined on page 7.Note: In accordance with our policy of continuous improvement,we reserve the right to change the product’s specification without notice.Definitions of terminology used in table 1Working voltage (U n )Maximum voltage between lines or lines/ground for the specified leakage current 2Maximum leakage current (I c )Maximum current drawn by the SPD at the。
Efficient High-Speed Data Paths for IP Forwarding using Host Based Routers

Simon Walton, Anne Hutton, Joe Touch†
USC / Information Sciences Institute

†This work is supported by the Defense Advanced Research Projects Agency through Ft. Huachuca contract #DABT63-93-C-0062 entitled "Netstation Architecture and Advanced Atomic Network". The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Army, the Defense Advanced Research Projects Agency, or the U.S. Government.

Abstract

Host-based forwarding uses general-purpose computers with two or more network interfaces to act as a router. These routers offer certain advantages over the use of dedicated hardware, allowing open, public source code access to the forwarding, queuing, and routing algorithms, and the use of more flexible, commodity host interfaces and host CPUs. The main drawback of host-based forwarding is its inefficiency in supporting high-bandwidth interfaces; although host I/O busses support gigabit throughputs, existing host routers achieve only 25% of this capacity. This paper examines a version of host-based forwarding that substantially increases the capacity of host-based routers, by transferring packets directly between host interfaces, rather than staging them through the host memory. On a 200 MHz Pentium-Pro FreeBSD PC, the resulting system supports bandwidths in excess of 480 Mbps, a 45% increase compared to conventional techniques.

KEYWORDS: host routers, IP forwarding, data paths, peer DMA, router performance

1.0 Introduction

Host workstations are increasingly being used as routers. Using commodity platforms and network interfaces, these host-based routers can be cheaper, and allow more flexible configuration and programmability than production routers. This work optimizes the data path in a host-based router, to increase routing bandwidth, while reducing backplane bandwidth and CPU load.

Production routers are special-purpose systems, which use custom backplanes and line interface cards, with closed, proprietary software. These systems are often limiting, because of their closed software and proprietary backplane bus. Systems requiring software modification, such as experimental routing testbeds (e.g., DARTnet and CAIRN) and dynamically-reprogrammable routers (Active Networks [7]), typically require host-based routers.

Host interfaces for new gigabit technologies are typically available much earlier than their router line-card equivalents. In some cases, such as the Myrinet, no router line cards are available. In addition, the router line-cards are vendor, and sometimes product, specific. Host interfaces focus on commodity standards, such as PCI and VME busses. As a result, host-based routers provide the earliest, and sometimes only, way to interconnect new technologies among themselves, and to other existing technologies.

Our investigation of the optimization of host-based routers focuses on optimizing the data path. First, we present some background. We discuss the way conventional host-based routers process packets, and the two ways we optimized that processing, by reducing an extra data copy.
We then present our results, showing how we increased data throughput by 25% while reducing CPU load in the host. Finally, we discuss the implications of this optimized processing, and how it affects network interface design and host operating system software.

2.0 Background

The University of Southern California Information Sciences Institute's (USC/ISI) Computer Networks Division currently runs the largest production Myrinet LAN, a 1.28 Gbps mesh-connected network co-invented by USC/ISI and the California Institute of Technology (CalTech), and spun off to a company called Myricom. The Myrinet provides full-duplex gigabit bandwidth using cables composed of dual simplex byte-wide copper links. At ISI, this LAN is connected to a host-based router, interconnecting 53 office workstations to OC-3c (155 Mbps) ATM and fast ethernet (100 Mbps).

The Myrinet links are connected to LAN switches, each providing 8 ports of full-bandwidth cross-connect. The switches use hardware signalling to provide backpressure to the source, and route packets inside the LAN using source-routed, cut-through packet switching. Myrinet link packets carry up to 8,432 bytes of payload.

At the link layer, the LAN emulates an ethernet, using ethernet-like link addresses. The LAN supports broadcast by serially copying the outgoing packets at the source interface, sending a copy to every destination interface on the LAN in sequence. Multicast is provided by broadcast, using driver-based selection inside the host. The Myrinet network interface cards (NICs) contain 256 KB (kilobytes) of shared memory (512 KB to 1 MB in newer versions), used to contain the interface control program, shared and local variables, and packet buffering. The NIC also contains a 32-bit (20 million instructions/sec.) processor, dedicated to link and LAN control, but not sufficient to participate in other processing. DMA is provided by the NIC across the host PCI bus, in either direction.

For our testbed, we use 200 MHz Pentium Pro PCs, running FreeBSD 2.2.5 [3]. These hosts also have a 33 MHz, 32-bit wide PCI bus, used for high-bandwidth peripheral access. In experiments performed elsewhere, the combination of these PCs and the Myrinet NICs has been shown to provide host-memory to host-memory data transfer rates in excess of 1 Gbps (gigabits per second), using very large transfer units (64 KB and larger), pipelined directly into the NIC DMA [6]. However, at the IP layer these hosts support UDP rates of 300 Mbps and TCP rates of 150 Mbps.

For our experiments, we relied on the packet multiplexing capability of the Myrinet switches to merge traffic from multiple sources, to find the limiting routing bandwidths. We found that as a host-based router, the hosts provide bandwidths near 335 Mbps, using existing production drivers and the FreeBSD forwarding software. This rate is not substantially higher than the single-host data source limit of 300 Mbps. We knew that existing host forwarding copied data twice across the peripheral bus, and that this would limit the maximum bandwidth to 500 Mbps; we felt that reducing the number of copies would alleviate this contention, and possibly increase the throughput of the system.

3.0 Prior and related work

It is well known that memory performance in workstations has not paralleled improvements in processor performance or network bandwidth. Much work has been done on techniques to minimize data copying by bypassing or reducing the level of operating system intervention and promoting direct application-to-network-interface interaction.
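The 500 Mbps ceiling follows from a rough bus-bandwidth budget. The short calculation below is a reader's sketch based on the nominal PCI figures quoted above (33 MHz, 32 bits wide), ignoring arbitration and protocol overhead; it is not taken from the paper.

    % Reader's sketch of the bus budget (LaTeX notation):
    % raw PCI bandwidth, ignoring arbitration and descriptor overhead
    B_{PCI} \approx 33\,\text{MHz} \times 32\,\text{bits} \approx 1056\,\text{Mbps}
    % conventional forwarding crosses the bus twice per packet (in and out)
    B_{fwd} \le B_{PCI} / 2 \approx 528\,\text{Mbps}

With real arbitration and descriptor overhead, the usable ceiling sits near the 500 Mbps the authors cite, which is why eliminating one of the two crossings is attractive.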
First, some terminology about routing. Routers relay packets between different networks, using a combination of forwarding and queuing. Forwarding is the process of determining the destination interface and queue of an incoming packet, and includes some header manipulation. Queuing is the process of storing, retrieving (and potentially dropping) packets according to queue, network, and header context. Routing has multiple meanings; it can mean the combined process of forwarding and queuing, or the inter-router process of computing routes used by the forwarding algorithm. As a result, we refer separately to the processes of forwarding and queuing here.

Several techniques have been introduced to minimize the number of times network data crosses the CPU/memory data path, notably hardware streaming [2] and kernel-level streaming [4]. In hardware streaming, data is transferred from source to sink device without CPU involvement. This avoids the extra copy into kernel RAM, but is focused on inter-peripheral communication, such as video-to-disk. In packet forwarding and queuing, the host CPU still needs to process the packets between the input and output devices, whereas such intermediate processing is entirely avoided in hardware streaming. Kernel-level streaming transfers data from source to sink over the system bus without an application process being included in the data path. Data does, however, pass through memory, and the technique relies on the System V STREAMS interface.

There has also been substantial work in avoiding OS intervention by providing user access to network data [9]. Direct application access to the network interface is used to achieve both low latency and high bandwidth using commodity workstation clusters. Protocol processing is moved to the application by virtualizing the interface using a combination of NIC capabilities (on-board buffers, programmable co-processors, DMA) and OS mechanisms. In the case of fast ethernet NICs, which do not have an on-board co-processor, kernel intervention is necessary. The main benefit of the NIC co-processor is to allow examination of the packet header and direct DMA of the data into the user-space buffer, reducing the number of data copies. U-Net is currently working on implementing a subset of the Java Virtual Machine for the Myrinet NI; this would allow customized packet processing on the NI and would encourage further use of host-based routers in an Active Nets context.

An industry consortium is also extending the concept of virtualizing NIs to provide user-level access to the network interface [8]. The interest is primarily in low-latency message passing, which may streamline the interrupt processing of current NICs, and allow outboard co-processors to be controlled and programmed using a standard interface.

Finally, there are a number of techniques for optimizing custom router design, summarized in [5].
Peer-DMA transfers between router line cards have been utilized there, including variations on whether the header forwarding processing occurred on-card or outboard, on a separate processor. One recent system at BBN shows how this can be accomplished using commodity CPUs in a custom backplane architecture [5].

4.0 Conventional host-based forwarding

This section describes the flow of IP packets in a conventional host-based router. A host router is composed of a host with at least two host interfaces (Figure 1). The host RAM is managed by its CPU, predominantly as a pool of shared memory used by the operating system kernel. The host also has an I/O controller which manages DMA access by the host interfaces over an I/O channel. This channel is conventionally a shared bus, e.g., a PCI bus, but can also be a switched matrix, as in recent workstations from Sun Microsystems and Silicon Graphics.

The network interface cards, or NICs, each have a DMA controller and some internal memory, as well as an interface to their particular link. The DMA controllers manage data transfers from the NIC RAM directly into the host RAM, without the supervision of the host CPU. For simplicity, only one incoming and one outgoing NIC are shown in these figures.

In a conventional host-based router, forwarding is performed inside the host, by the CPU. When a packet arrives, the NIC signals the host, which signals the DMA on the NIC to proceed with a copy of the packet from the NIC RAM into the host RAM. The host CPU then processes the packet, examining its link and IP header, and determining the appropriate outgoing interface, outgoing link header, and any modifications to the IP header. The host packet copy is then modified appropriately, and the outgoing NIC then DMAs the packet into its RAM, and emits it on the outgoing link. These steps are shown in Figure 1.

FIGURE 1. Full packet kernel copy, with DMA to/from host memory only (DMA into host RAM; route, edit headers; DMA into NIC RAM)

In the forwarding step, both link and IP headers are modified by the forwarding algorithm. The IP header is typically modified only at its hopcount field, but other fields, such as source route, encapsulation, and timestamp options, can also be altered. The link header can indicate link-level destination (and possibly source) addresses, and can also have additional demultiplexing fields, such as channel identifiers in the Myrinet, or virtual circuit identifiers (VCIs) in ATM. The link header is usually replaced entirely by the router, appropriate for the destination of the outgoing interface link. Data inside these packets, interior to the IP packet, is typically not modified, although it can be examined, encrypted or authenticated.
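The conventional path just described can be summarized in a short C-style sketch. All of the names below (nic_t, nic_dma_to_host, route_lookup, and so on) are illustrative assumptions for exposition only; they are not the FreeBSD 2.2.5 driver interfaces actually used in this work.

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_PKT 8432                /* Myrinet maximum payload (Section 2.0)      */

    typedef struct nic nic_t;           /* opaque NIC handle (illustrative)           */
    struct ip_hdr { uint8_t ttl; /* ... other IP fields ... */ };

    /* Illustrative driver hooks -- not real FreeBSD interfaces. */
    size_t nic_dma_to_host(nic_t *in, void *buf);                      /* NIC RAM -> host RAM */
    void   nic_dma_from_host(nic_t *out, const void *buf, size_t len); /* host RAM -> NIC RAM */
    void   nic_transmit(nic_t *out);
    struct ip_hdr *parse_headers(void *buf);
    nic_t *route_lookup(const struct ip_hdr *ip);
    void   rewrite_link_header(void *buf, const nic_t *out);

    /* Conventional forwarding: the packet crosses the I/O bus twice. */
    void conventional_forward(nic_t *in_nic)
    {
        char buf[MAX_PKT];                              /* kernel packet buffer       */
        size_t len = nic_dma_to_host(in_nic, buf);      /* copy 1: into host RAM      */

        struct ip_hdr *ip = parse_headers(buf);         /* examine link + IP headers  */
        nic_t *out_nic = route_lookup(ip);              /* pick the outgoing interface*/

        ip->ttl--;                                      /* edit IP header (hopcount)  */
        rewrite_link_header(buf, out_nic);              /* replace the link header    */

        nic_dma_from_host(out_nic, buf, len);           /* copy 2: out of host RAM    */
        nic_transmit(out_nic);                          /* emit on the outgoing link  */
    }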
Buffer management in the conventional host case focuses on main memory storage; buffers in the NICs are used as temporary queues to match the link rate to the DMA transfer rates. NIC buffers are used only as long as required to transfer packets into main memory on the incoming path, or out to the link on the outgoing path. All other queuing and queue management is performed in main memory [3].

These steps are somewhat different if the host is the destination of the incoming packet. In that case, the packet is copied from kernel memory into user memory, based on further demultiplexing of the packet header by the kernel. However, where forwarding to an outgoing interface is the dominant case, the conventional data path is not optimal. The packet is copied twice, once into host memory, and once out of host memory.

A better algorithm would permit packets to be transferred directly from the incoming NIC to the outgoing NIC, without the extra copy. This would reduce the I/O channel load, free host resources, and reduce the transfer latency through the router. The I/O channel would have reduced signalling load, because bus arbitration occurs only once per packet, rather than twice. If the channel is implemented using a shared bus, the bandwidth over this bus would be cut in half (this is not an issue if the channel is switched). The host memory would not be required, so kernel RAM resources would be increased. The host would also have increased I/O channel access bandwidth for other devices, such as video and disks, because the packet bandwidth over its port is reduced from two packet copies to zero.

5.0 Peer-DMA forwarding

Here we describe an alternative to the packet path in conventional host-based routers. We were not sure whether host PCI bus contention was a significant problem, but wanted to examine the benefits of avoiding the excess packet data copying of the conventional system. The goal is to leave most of the packet in the incoming NIC until its destination is determined. When the packet is ready to be transferred, it can be DMA'd directly from the source NIC to the destination NIC, without an intermediate copy into the host memory.

This streamlined DMA processing has several benefits. By avoiding the superfluous copy into host memory, shared peripheral resources are economized. In a shared bus, such as PCI, bandwidth use is halved. Even in switched peripheral systems, bandwidth in and out of host memory is significantly reduced. Finally, packets are moved with reduced transfer latency. This lowered transfer latency is typically not noticed in host-based routers, where queuing latencies dominate.

We examined two different ways of supporting peer DMA forwarding. In both methods, packets remain in shared memory on the incoming NIC, and are eventually peer-DMA'd directly to the outgoing NIC. The host continues to perform the forwarding functions, and so needs access to the packet's link and IP headers. Both methods assume that the NIC contains a reasonable amount of packet buffer space, and that this space is sufficient for queuing packets while they are forwarded and queued for output. Neither method involves changes to kernel code, only to the network interface driver.

In the first variant, called 'IN-PLACE', the entire packet is left in the NIC, and the host accesses the headers by reading their fields directly over the PCI bus (Figure 2). Each field is read separately, as a shared-memory access. In the second variant, called 'HEADER COPY', the link and IP headers are copied, in full, into the host memory, where they are manipulated by the host using local RAM accesses. The modified headers are copied out over the original packet's header, and the result is then DMA'd in its entirety to the destination NIC (Figure 3).
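Continuing the earlier sketch (and reusing its illustrative types and helpers), the HEADER-COPY variant might look roughly as follows. The peer-DMA primitive nic_peer_dma and the other new helpers below are assumptions made for illustration; the paper states only that the changes live in the network interface driver.

    /* Additional illustrative hooks for the peer-DMA path (assumed, not real). */
    void nic_read(nic_t *n, uint32_t off, void *dst, size_t len);        /* read NIC RAM  */
    void nic_write(nic_t *n, uint32_t off, const void *src, size_t len); /* write NIC RAM */
    void nic_peer_dma(nic_t *src, uint32_t off, nic_t *dst, size_t len); /* NIC-to-NIC DMA*/
    void nic_requeue_local(nic_t *n, uint32_t off, size_t len);          /* pointer fixup */

    #define HDR_LEN 64   /* assumed upper bound on link + IP header size */

    /* HEADER-COPY forwarding: only the headers cross the bus to host RAM;
     * the packet body stays on the incoming NIC and is peer-DMA'd directly
     * to the outgoing NIC. */
    void header_copy_forward(nic_t *in_nic, uint32_t pkt_off, size_t pkt_len)
    {
        char hdr[HDR_LEN];

        nic_read(in_nic, pkt_off, hdr, HDR_LEN);     /* copy headers into host RAM   */

        struct ip_hdr *ip = parse_headers(hdr);      /* forwarding still on the CPU  */
        nic_t *out_nic = route_lookup(ip);

        ip->ttl--;
        rewrite_link_header(hdr, out_nic);

        nic_write(in_nic, pkt_off, hdr, HDR_LEN);    /* copy edited headers back     */

        if (out_nic == in_nic)                       /* ARP/ICMP-style replies: see  */
            nic_requeue_local(in_nic, pkt_off, pkt_len);   /* the discussion below   */
        else
            nic_peer_dma(in_nic, pkt_off, out_nic, pkt_len); /* NIC-to-NIC transfer  */
    }

Under the IN-PLACE variant, the same routing and header edits would instead be performed by reading and writing the individual header fields over the PCI bus, with no hdr buffer in host RAM.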
Both techniques required modifications only to the device driver, and the changes affect packet transfers only on the incoming NIC.

The two variants differ in complexity and efficiency. IN-PLACE forwarding requires only minor modification to the driver routines, where packet buffer pointers address shared NIC memory rather than kernel RAM in the host. However, each field accessed by the host CPU requires peripheral bus arbitration, and also runs at a slower clock rate than CPU accesses to main memory. In addition, these accesses cannot be cached, because they refer to shared memory. In this case, the bus arbitration overhead is increased.

HEADER-COPY transfers the entire header into main memory when the packet arrives, streamlining peripheral bus access, but possibly copying fields the host may not access. The header must be copied back out to the incoming NIC before the peer DMA can begin. These steps were not difficult to add to the driver, but are not as trivial as the IN-PLACE method. In this case the bus arbitration overhead is slightly higher than in the conventional case, because of the additional access for the header copy-back operation.

FIGURE 2. IN-PLACE forwarding, followed by peer DMA (route, edit headers via reads across the I/O bus; peer DMA NIC-to-NIC)

FIGURE 3. HEADER COPY forwarding, followed by peer DMA (DMA link and IP headers into host RAM; route, edit headers; DMA headers back to source NIC RAM; peer DMA NIC-to-NIC)

Both of these techniques require special processing for ARP and ICMP packets. These packets are usually associated with low-level control exchanges, often handled within the driver software. Many such packets generate an immediate response, which typically overwrites the incoming packet, and is rapidly sent back out. As a result, the peer DMA methods would request a transfer from the incoming NIC back to itself; although this operation is not prohibited by the PCI bus specification, it is often not supported, and can lock the bus and freeze the host. This case is easily handled by detecting the special case, and effecting the peer DMA by pointer manipulation, rather than by an excess self-copy.

6.0 Results

For our experiments, we used two measurement software tools: netperf 2.11 and traffic. Netperf generates both UDP and TCP packets, and traffic generates only UDP; for our measurements, we used only UDP packets, because we focus on the throughput of the router, and both types of packets are forwarded identically. Neither tool contains flow control, and the send side is typically faster than the receive side for UDP. For CPU load measurement, we used a passive monitoring tool developed at ISI, called cpuload.

Our experiments used both single pairs and multiple pairs of hosts sending and receiving, and used only Myrinet NICs to simplify the testbed. A single PC host can send only 300 Mbps UDP without checksums, 275 Mbps with checksums. We found that a PC-based router can easily support these bandwidths, such that hosts directly connected, or routed through an intermediate PC, achieved the same bandwidth. As a result, we used the Myrinet switch's packet multiplexing capability to aggregate streams from multiple sources, and demultiplex them to separate destinations, as shown in Figure 4. These switches are internally non-blocking, and the link rate supports 1.28 Gbps; for up to 4 hosts, the links and switches would provide no contention for our experiments. We used up to three sources and three sinks for our tests; additional sources and sinks were tested, and did not affect the results.
FIGURE 4. Experiment testbed configuration (source and sink hosts connected through Myrinet switches to the host-based router)

We measured the three competing solutions to host-based forwarding: conventional, using the standard drivers; IN-PLACE, in which header fields are read over the PCI bus individually; and HEADER-COPY, in which the link and IP headers are copied into host memory prior to forwarding (shown in Figure 1, Figure 2, and Figure 3, respectively). We measured throughput in bandwidth (bits/second) and packet rate (packets/second), and monitored CPU load, over a range of packet sizes from 128 bytes through 8 KB. These results are shown in Figures 5 and 6. Latency was not measured because routers are dominated by queuing latency, which is not affected by our solution.

First, we measured a single pair of hosts, both with and without an intermediate router. We found that both UDP bandwidth and packet rates were unaffected by the inclusion of the host-based router. The router CPU was less loaded using the standard driver (37%) than either the IN-PLACE (65%) or HEADER-COPY (56%) drivers. Because our system is not otherwise limited, the peer DMA drivers only serve to complicate packet processing, requiring additional effort to access the header over the PCI bus, or to copy the header back out to the incoming NIC before the final DMA. The read-through over the PCI bus in the IN-PLACE driver is higher cost than the HEADER-COPY; it is likely this cost is a combination of bus contention tying up the CPU, and the fact that read-through data cannot be cached.

For two pairs of sources and sinks, the standard driver tops out at 335 Mbps, the IN-PLACE at 419 Mbps, and the HEADER-COPY at 441 Mbps. The higher utilization is possible due to a higher offered load, a total of 600 Mbps of offered UDP packets. The HEADER-COPY is more efficient than the IN-PLACE driver, because copying the entire header into the host and back out is more efficient than accessing the fields independently over the PCI bus. In these cases, CPU load was nearly 100% for all packet sizes, except for the HEADER-COPY driver, where the CPU load dropped to 65% for 8 KB packets, measured by our cpuload tool.

For three pairs of sources and sinks, we modified our experimental technique. For the one- and two-pair cases, netperf and traffic generated identical results. For three or more sources, netperf could not be used; one source would block and restart later, staggering the streams and defeating the goal of generating three simultaneous sources. Nonetheless, for three or more sources, the standard driver did not increase in throughput; it peaked at 335 Mbps, as before. The IN-PLACE driver peaked at 472 Mbps, and the HEADER-COPY driver at 482 Mbps.

The standard driver, in general, increased bandwidth only 10% using multiple sources, as compared to a single source. For HEADER-COPY, the final design, the measured CPU load was 100% for small packets and multiple sources, and per-packet overheads dominate for small packet sizes. This is also evidenced in the packets/second graph, Figure 6. The packet rate is nearly constant for packets under 1 KB, around 12,000 packets/second, indicating that either interrupt processing or forwarding algorithm processing causes resource contention at the CPU. For larger packets, costs proportional to packet size dominate, suggesting backplane bandwidth contention.
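The crossover between per-packet and per-byte costs can be checked with a quick calculation from the figures quoted above; this is a reader's sketch, not an analysis from the paper.

    % Per-packet regime: rate pinned near 12,000 packets/s for packets under 1 KB
    12{,}000\ \text{pkt/s} \times 8192\ \text{bit/pkt} \approx 98\ \text{Mbps}
    % Per-byte regime: at 8 KB packets the 482 Mbps ceiling corresponds to only
    482 \times 10^{6}\ \text{bit/s} \div 65{,}536\ \text{bit/pkt} \approx 7{,}400\ \text{pkt/s}

Small packets are therefore bounded well below the bus limit by per-packet CPU work, while large packets run into the bandwidth-proportional costs noted above.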
Backplane bus arbitration does not appear to be an issue, because the CPU is 100% loaded at low levels, and because the CPU is involved in each step that also requires bus arbitration. By avoiding the additional packet copy, throughput and packet forwarding rate are increased substantially, up to 45%, to a bandwidth of 480 Mbps.

FIGURE 5. UDP bandwidth comparison of three driver types (UDP throughput in Mbps vs. message size in bytes, for 3 sources and sinks: standard driver, IN-PLACE peer DMA, HEADER-COPY peer DMA)

FIGURE 6. HEADER-COPY packet rate throughput (throughput in packets/sec vs. message size in bytes, by number of source-sink pairs)

These results focused on UDP packet processing. We expected TCP processing to be identical, but found otherwise. The host-to-host throughput for a single pair through an intermediate host router is 100 Mbps, lower than a point-to-point connection, which achieves 150 Mbps and does not use an intermediate router. The lower throughput occurs for both conventional and peer-DMA implementations of the host router. There is no current theory on why the rate through a router is lower; it is possible that the hardware flow-control feedback of the Myrinet is affecting the throughput, but our lab currently lacks a comparable link technology capable of supporting TCP over 200 Mbps in which to compare these results. Using ATM links, the point-to-point TCP throughput is 100 Mbps, the same as using Myrinet with the host router. The peer DMA is not currently suspected, because throughput is lowered even for conventional forwarding.

7.0 Implications

The results of these experiments are that HEADER-COPY peer DMA is an effective technique for increasing host-based router throughput, and for reducing the CPU load for forwarding large packets. However, when the same driver was used for packets destined for the router itself, i.e., for the end host, throughput was substantially reduced compared to the standard driver. The optimized driver should be used only for host-based routers where the dominant traffic is through, rather than into, the host. This is no different from the effects of user protocol optimizations, so-called single-copy stacks, which optimize throughput into user-space memory; for similar reasons, such systems are likely to be ill-suited to host-based routers. For hosts with mixed traffic, a combination of these two systems, with early demultiplexing (packet labels) indicating which algorithm to select, is one possible design alternative.

We assume that the link bandwidth is sufficient to necessitate seeking a higher-performance forwarding algorithm. The results of the UDP tests indicate that link bandwidths in excess of 400 Mbps, closer to 500 Mbps, would be required for peer DMA to be beneficial. Below these rates, conventional forwarding is sufficient. This result is host dependent; as CPU and backplane bandwidths increase, conventional forwarding will be faster, requiring ever higher link bandwidths to necessitate peering. However, given the advent of gigabit ethernet, Myrinet, and OC-12c ATM (622 Mbps), link speeds warranting alternate methods are likely to be common.

We assume that NICs support DMA, and that each NIC supporting incoming traffic also contains shared memory sufficient for packet queuing. Our experiments used only Myrinet interfaces, because available fast ethernet and ATM NICs did not support equivalent shared memory use.
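The "mixed traffic" alternative mentioned above could be realized as an early-demultiplexing step in the receive path. The sketch below (reusing the earlier illustrative declarations, plus a hypothetical is_local_address() test) shows one way such a selection might look; it is not a design taken from the paper.

    /* Early demultiplexing between the two data paths (illustrative only). */
    int is_local_address(const struct ip_hdr *ip);   /* assumed helper */

    void dispatch_packet(nic_t *in_nic, uint32_t pkt_off, size_t pkt_len)
    {
        char hdr[HDR_LEN];
        nic_read(in_nic, pkt_off, hdr, HDR_LEN);       /* peek at the headers only     */

        const struct ip_hdr *ip = parse_headers(hdr);

        if (is_local_address(ip))
            conventional_forward(in_nic);              /* host-bound traffic: full DMA */
                                                       /* into host RAM for delivery   */
        else
            header_copy_forward(in_nic, pkt_off, pkt_len);  /* transit: peer DMA path  */
    }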
If the NIC lacks sufficient shared memory, the host memory must be used as a staging area, defeating the peer DMA altogether. Extensive memory can also be required when NIC link rates vary widely, to be used for line-rate matching, or if advanced queuing algorithms are supported, such as priority queues or early discard queuing. Note that peer DMA supports any queuing algorithm, because both forwarding and queue management occur in the host CPU.

We also assume that fragmentation is either not required or is trivial. This requires that the maximum transmission units (MTUs) do not vary widely between the NICs, that all NICs can be set to use the smallest, or that packets are forwarded to external nets, where the default Internet MTU is used. The NICs are assumed to deal in units of packets, the same packets as processed by the host forwarding algorithm. This latter assumption may not hold when packets are aggregated for link transmission, as in IP over SONET or gigabit ethernet.

We assume that packet data is ignored by the host CPU. This is not the case in Active Nets, or when data is transcoded, reformatted, segmented and reassembled, authenticated or encrypted. In these cases, the entire packet must be manipulated by the CPU anyway, so the initial data copy of the standard driver aggregates the packet transfer in one step, just as the HEADER-COPY aggregates the individual field accesses of the IN-PLACE driver.

As discussed earlier, the peer DMA techniques reduce peripheral system bandwidth resource contention. In our experiments, this resource is a shared bus with 1 Gbps capacity. When