English Literature and Translations for Communication Engineering
Translation of a Foreign Reference: High-Speed Railway Mobile Communication System Based on 4G LTE Technology. Prof. KS Solanki and Kratika Chouhan, Ujjain Engineering College, Ujjain, Madhya Pradesh, India.

Abstract: As they have developed over time, high-speed railways (HSR) have come to require reliable and secure communication both for train operation and for passengers. To achieve this goal, HSR systems need higher bandwidth and shorter response times, and the legacy technology used on HSR must evolve: new technologies have to be developed, the existing architecture improved, and costs kept under control. To meet this requirement, HSR adopted GSM-R, an evolution of GSM, but it cannot satisfy customers' needs. A new technology, LTE-R, has therefore been adopted; it provides higher bandwidth and delivers higher customer satisfaction at high train speeds. This paper introduces LTE-R, presents a comparison between GSM-R and LTE-R, and discusses which railway mobile communication system performs better at high speed.
Keywords: high-speed railway, LTE, GSM, communication and signalling systems

I. Introduction

High-speed railways place ever higher demands on the mobile communication system. With these improvements, the network architecture and hardware must be able to cope with train speeds of up to 500 km/h. HSR also requires fast handover. To address these problems, HSR needs a new technology called LTE-R; an HSR system based on LTE-R provides high data rates, greater bandwidth and low latency. LTE-R can handle the growing traffic volume, help ensure passenger safety and deliver real-time multimedia information. As train speeds keep increasing, a reliable broadband communication system is essential for railway mobile communications. Quality-of-service (QoS) measures for HSR applications include data rate, bit error rate (BER) and transmission delay. To meet HSR operational requirements, a new system is needed that matches the capabilities of LTE and offers new services, while still being able to coexist with GSM-R for a long time. When selecting a suitable wireless communication system for HSR, issues such as performance, services, attributes, frequency bands and industrial support must be considered. Compared with third-generation (3G) systems, 4G LTE has a simple flat architecture, high data rates and low latency. Given LTE's performance and maturity, LTE-Railway (LTE-R) is likely to become the next-generation HSR communication system.

II. LTE-R System Description

Considering the frequency and spectrum usage of LTE-R is very important for providing more efficient data transmission for high-speed railway (HSR) communication.
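To give a feel for why the introduction stresses operation at 500 km/h, the short sketch below estimates the Doppler shift and the time a train spends inside one cell. It is an illustrative calculation only; the 2.6 GHz carrier and the 2 km cell size are assumptions, not figures from the paper.

```python
# Illustrative only: Doppler shift and cell dwell time for a train at HSR speeds.
# The 2.6 GHz carrier and 2 km cell size are assumed values, not figures from the paper.
def doppler_shift_hz(speed_kmh: float, carrier_hz: float) -> float:
    c = 3.0e8                  # speed of light, m/s
    v = speed_kmh / 3.6        # km/h -> m/s
    return v * carrier_hz / c  # maximum shift, train moving straight toward the base station

def cell_dwell_time_s(speed_kmh: float, cell_diameter_m: float) -> float:
    v = speed_kmh / 3.6
    return cell_diameter_m / v  # time available in one cell before the next handover

if __name__ == "__main__":
    speed = 500.0    # km/h, the target speed quoted in the text
    carrier = 2.6e9  # Hz, assumed LTE band
    print(f"Doppler shift at {speed:.0f} km/h: {doppler_shift_hz(speed, carrier):.0f} Hz")
    print(f"Dwell time in a 2 km cell: {cell_dwell_time_s(speed, 2000):.1f} s")
```

A Doppler shift above 1 kHz and a dwell time of only a few seconds per cell explain the fast-handover requirement mentioned above.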
Improvements to a Large-Vocabulary Continuous Speech Recognition System for Noisy Environments at the University of Colorado: an overview of work on recognizing speech in noise.

In this paper we report the University of Colorado's most recent improvements on the Navy's speech-in-noisy-environments vocabulary task. In particular, we describe the effort put into improving the acoustic and language models for a task with limited speech data, unknown speakers and changing environments. Within the large-vocabulary continuous speech recognition system we investigate MAPLR adaptation methods, including single and multiple maximum-likelihood linear regression transforms. The current noisy-environment speech recognition system uses a large-vocabulary speech recognition engine. This engine is being developed rapidly at the University of Colorado; on the Speech in Noisy Environments (SPINE-2) evaluation data the system achieves a word error rate of 30.5%, a 16% relative reduction in word error rate compared with the 2001 SPINE-2 system.
1. Introduction

To obtain a robust continuous speech recognition system for noisy environments, we attempt to advance the state of the art in both computation and modelling. The task is difficult in several respects: only limited data are available for training; a wide variety of military noise conditions are present in both training and testing; and in each recognition and adaptation pass the audio streams are unpredictable and contain only a limited amount of speech. The Navy research vocabulary task was supported by DARPA through the SPINE-1 evaluation of November 2000 and the SPINE-2 evaluation of November 2001. The sites taking part in the 2001 evaluation included SRI, IBM, the University of Washington, the University of Colorado, AT&T, the Oregon Graduate Institute and Carnegie Mellon University. Many of them have previously reported results on the SPINE-1 and SPINE-2 tasks. The best-performing systems in this work used adaptation of the features and acoustic models, and also used multiple parallel acoustic front-ends trained on different parameter types (for example MFCC and PCP). The outputs of the individual recognition systems are then typically combined through hypothesis fusion, which yields a single result whose error rate is lower than that of any individual recognition system. The University of Colorado took part in both the SPINE-1 and SPINE-2 evaluations. Our November 2001 SPINE-2 system was the first to be built on the University of Colorado recognizer named SONIC, a large-vocabulary continuous speech recognition system.
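Since the results above are quoted as a word error rate and a relative reduction, here is a small sketch of how those two numbers are computed. The reference and hypothesis strings and the 36.3% baseline are made-up illustrations, not data from the SPINE evaluations.

```python
# Illustrative only: word error rate (WER) and relative reduction, the two figures
# quoted above (30.5% WER, 16% relative reduction). Strings and baseline are made up.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words: substitutions, insertions and deletions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

def relative_reduction(old_wer: float, new_wer: float) -> float:
    return (old_wer - new_wer) / old_wer

if __name__ == "__main__":
    print(f"WER: {wer('move the convoy to grid four', 'move convoy to grid for'):.1%}")
    # A 30.5% WER against a hypothetical 36.3% baseline is roughly a 16% relative reduction.
    print(f"Relative reduction: {relative_reduction(0.363, 0.305):.1%}")
```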
姓名:刘峻霖班级:通信143班学号:2014101108Computer Language and ProgrammingI. IntroductionProgramming languages, in computer science, are the artificial languages used to write a sequence of instructions (a computer program) that can be run by a computer. Simi lar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise.Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address a particular kind of computing problem or for use on a particular model of computer system. For instance, programming languages such as FORTRAN and COBOL were written to solve certain general types of programming problems—FORTRAN for scientific applications, and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable, meaning that the y may be used to program many types of computers. Other languages, such as machine languages, are designed to be used by one specific model of computer system, or even by one specific computer in certain research applications. The most commonly used progra mming languages are highly portable and can be used to effectively solve diverse types of computing problems. Languages like C, PASCAL and BASIC fall into this category.II. Language TypesProgramming languages can be classified as either low-level languages or high-level languages. Low-level programming languages, or machine languages, are the most basic type of programming languages and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of computer. High-level languages are programming languages that must first be translated into a machine language before they can be understood and processed by a computer. Examples of high-levellanguages are C, C++, PASCAL, and FORTRAN. Assembly languages are intermediate languages that are very close to machine languages and do not have the level of linguistic sophistication exhibited by other high-level languages, but must still be translated into machine language.1. Machine LanguagesIn machine languages, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the main computer memory (Random Access Memory, or RAM), (2) a simple operation to perform, such as adding the two numbers together, (3) where in the main memory to put the result of this simple operation, and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all programmed in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.2. 
High-Level LanguagesHigh-level languages are relatively sophisticated sets of statements utilizing word s and syntax from human language. They are more similar to normal human languages than assembly or machine languages and are therefore easier to use for writing complicated programs. These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in an assembly language.3. Assembly LanguagesComputer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine language instruction. An assembly language statement is composed with the aid of easy to remember commands. The command to add the contents of the storage register A to the contents of storage register B might be written ADD B, A in a typical assembl ylanguage statement. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assemblylanguages when it is important to minimize the time it takes to run a program, because the translation from assembly language to machine language is relatively simple. Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.III. Classification of High-Level LanguagesHigh-level languages are commonly classified as procedure-oriented, functional, object-oriented, or logic languages. The most common high-level languages today are procedure-oriented languages. In these languages, one or more related blocks of statements that perform some complete function are grouped together into a program module, or procedure, and given a name such as “procedure A.” If the same sequence of oper ations is needed elsewhere in the program, a simple statement can be used to refer back to the procedure. In essence, a procedure is just amini- program. A large program can be constructed by grouping together procedures that perform different tasks. Procedural languages allow programs to be shorter and easier for the computer to read, but they require the programmer to design each procedure to be general enough to be usedin different situations. Functional languages treat procedures like mathematical functions and allow them to be processed like any other data in a program. This allows a much higher and more rigorous level of program construction. Functional languages also allow variables—symbols for data that can be specified and changed by the user as the program is running—to be given values only once. This simplifies programming by reducing the need to be concerned with the exact order of statement execution, since a variable does not have to be redeclared , or restated, each time it is used in a program statement. Many of the ideas from functional languages have become key parts of many modern procedural languages. Object-oriented languages are outgrowths of functional languages. In object-oriented languages, the code used to write the program and the data processed by the program are grouped together into units called objects. 
Objects are further grouped into classes, which define the attributes objects must have. A simpleexample of a class is the class Book. Objects within this class might be No vel and Short Story. Objects also have certain functions associated with them, called methods. The computer accesses an object through the use of one of the object’s methods. The method performs some action to the data in the object and returns this value to the computer. Classes of objects can also be further grouped into hierarchies, in which objects of one class can inherit methods from another class. The structure provided in object-oriented languages makes them very useful for complicated programming tasks. Logic languages use logic as their mathematical base. A logic program consists of sets of facts and if-then rules, which specify how one set of facts may be deduced from others, for example: If the statement X is true, then the statement Y is false. In the execution of such a program, an input statement can be logically deduced from other statements in the program. Many artificial intelligence programs are written in such languages.IV. Language Structure and ComponentsProgramming languages use specific types of statements, or instructions, to provide functional structure to the program. A statement in a program is a basic sentence that expresses a simple idea—its purpose is to give the computer a basic instruction. Statements define the types of data allowed, how data are to be manipulated, and the ways that procedures and functions work. Programmers use statements to manipulate common components of programming languages, such as variables and macros (mini-programs within a program). Statements known as data declarations give names and properties to elements of a program called variables. Variables can be assigned different values within the program. The properties variables can have are called types, and they include such things as what possible values might be saved in the variables, how much numerical accuracy is to be used in the values, and how one variable may represent a collection of simpler values in an organized fashion, such as a table or array. In many programming languages, a key data type is a pointer. Variables that are pointers do not themselves have values; instead, they have information that the computer can use to locate some other variable—that is, they point to another variable. An expression is a piece of a statement that describe s a series of computations to be performed on some of the program’s variables, such as X+Y/Z, in which the variables are X, Y, and Z and the computations are addition and division. An assignment statement assigns a variable a value derived fromsome expression, while conditional statements specify expressions to be tested and then used to select which other statements should be executed next.Procedure and function statements define certain blocks of code as procedures or functions that can then be returned to later in the program. These statements also define the kinds of variables and parameters the programmer can choose and the type of value that the code will return when an expression accesses the procedure or function. Many programming languages also permit mini translation programs called macros. Macros translate segments of code that have been written in a language structure defined by the programmer into statements that the programming language understands.V. HistoryProgramming languages date back almost to the invention of the digital computer in the 1940s. 
The first assembly languages emerged in the late 1950s with the introduction of commercial computers. The first procedural languages were developed in the late 1950s to early 1960s: FORTRAN, created by John Backus, and then COBOL, created by Grace Hopper. The first functional language was LISP, written by John McCarthy in the late 1950s. Although heavily updated, all three languages are still widely used today. In the late 1960s, the first object-oriented languages, such as SIMULA, emerged. Logic languages became well known in the mid-1970s with the introduction of PROLOG, a language used to program artificial intelligence software. During the 1970s, procedural languages continued to develop with ALGOL, BASIC, PASCAL, C, and Ada. SMALLTALK was a highly influential object-oriented language that led to the merging of object-oriented and procedural languages in C++ and more recently in Java. Although pure logic languages have declined in popularity, variations have become vitally important in the form of relational languages for modern databases, such as SQL.

Computer Programs. I. Introduction: A computer program is a set of instructions that directs a computer to perform some function or combination of functions.
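To make the object-oriented ideas described in the article above (classes, objects, methods and inheritance) concrete, here is a minimal Python sketch. It simply mirrors the Book/Novel illustration used in the text; the attribute and method names are invented for the example.

```python
# Minimal sketch of the class, method and inheritance ideas described above.
class Book:
    """A class defines the attributes and methods that its objects must have."""
    def __init__(self, title: str, pages: int):
        self.title = title
        self.pages = pages

    def describe(self) -> str:
        # A method acts on the object's own data and returns a value to the caller.
        return f"{self.title} ({self.pages} pages)"

class Novel(Book):
    """Objects of this class inherit the attributes and methods of Book."""
    def __init__(self, title: str, pages: int, protagonist: str):
        super().__init__(title, pages)
        self.protagonist = protagonist

    def describe(self) -> str:
        return f"{super().describe()}, protagonist: {self.protagonist}"

if __name__ == "__main__":
    print(Novel("An Example Story", 320, "Ada").describe())
```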
光纤通信系统Optical Fiber Communications英文资料及中文翻译Communication may be broadly defined as the transfer of information from one point to another .When the information is to be conveyed over any distance a communication system is usually required .Within a communication system the information transfer is frequently achieved by superimposing or modulating the information on to an electromagnetic wave which acts as a carrier for the information signal .This modulated carrier is then transmitted to the required destination where it is received and the original information signal is obtained by demodulation .Sophisticated techniques have been developed for this process by using electromagnetic carrier waves operating at radio requites as well as microwave and millimeter wave frequencies.The carrier maybe modulated by using either optical an analog digital information signal.. Analog modulation involves the variation of the light emitted from the optical source in a continuous manner. With digital modulation, however, discrete changes in the length intensity are obtained (i.e. on-off pulses). Although often simpler to implement, analog modulation with an optical fiber communication system is less efficient, requiring a far higher signal to noise ratio at the receiver than digital modulation. Also, the linearity needed for analog modulation is mot always provided by semiconductor optical source, especially at high modulation frequencies .For these reasons ,analog optical fiber communications link are generally limited to shorter distances and lower bandwidths than digital links .Initially, the input digital signal from the information source is suitably encoded for optical transmission .The laser drive circuit directly modulates the intensity of the semiconductor last with the encoded digital signal. Hence a digital optical signal is launched into the optical fiber cable .The avalanche photodiode detector (APD) is followed by a front-end amplifier and equalizer or filter to provide gain as well as linear signal processing and noise bandwidth reduction. Finally ,the signal obtained isdecoded to give the original digital information .Generating a Serial SignalAlthough a parallel input-output scheme can provide fast data transfer and is simple in operation, it has the disadvantage of requiring a large number of interconnections. As an example typical 8 bit parallel data port uses 8 data lines, plus one or two handshake lines and one or more ground return lines. It is fairly common practice to provide a separate ground return line for each signal line, so an 8 bit port could typically use a 20 core interconnection cable. Whilst such a multi way cable is quite acceptable for short distance links, up to perhaps a few meters, it becomes too expensive for long distance links where, in addition to the cost of the multiword cable, separate driver and receiver circuits may be required on each of the 10 signal lines. Where part of the link is to be made via a radio link, perhaps through a space satellite, separate radio frequency channels would be required for each data bit and this becomes unacceptable.An alternative to the parallel transfer of data is a serial in which the states of the individual data bits are transmitted in sequence over a single wire link. Each bit is allocated a fixed time slot. At the receiving end the individual bit states are detected and stored in separate flip-flop stages, so that the data may be reassembled to produce a parallel data word. 
The advantage of this serial method of transmission is that it requires only one signal wire and a ground return, irrespective of the number of bits in the data word being transmitted. The main disadvantage is that the rate at which data can be transferred is reduced in comparison with a parallel data transfer, since the bits are dealt with in sequence and the larger the number of bits in the word, the slower the maximum transfer speed becomes. For most applications however, a serial data stream can provide a perfectly adequate data transfer rate . This type of communication system is well suited for radio or telephone line links, since only one communication channel is required to carry the data.We have seen that in the CPU system data is normally transferred in parallel across the main data bus, so if the input -output data is to be in serial form, then a parallel to serial data conversion process is required between the CPU data bus andthe external I/O line. The conversion from parallel data to the serial form could be achieved by simply using a multiplexed switch, which selects each data bit in turn and connects it to the output line for a fixed time period. A more practical technique makes use of a shift register to convert the parallel data into serial form.A shift register consists of a series of D type flip-flops connected in a chain, with the Q output of one flip-flop driving the D input of the next in the chain. All of the flip-flops ate clocked simultaneously by a common clock pulse, when the clock pulse occurs the data stored in each flip-flop is transferred to the next flip-flop to the right in the chain. Thus for each clock pulse the data word is effectively stepped along the shift register by one stage, At the end of the chain the state of the output flip-flop will sequence through the states of the data bits originally stored in the register. The result is a serial stream of data pulses from the end of the shift register.In a typical parallel to serial conversion arrangement the flip-flops making up the shift register have their D input switchable. Initially the D inputs are set up in a way so that data can be transferred in parallel from the CPU data bus into the register stages. Once the data word has been loaded into the register the D inputs are switched so that the flip-flops from a shift register .Now for each successive clock pulse the data pattern is shifted through the register and comes out in serial form at the right hand end of the register.At the receiving end the serial data will usually have to be converted back into the parallel form before it can be used. The serial to parallel conversion process can also be achieved by using a shift register .In this case the serial signal is applied to the D input of the stage at the left hand end of the register. As each serial bit is clocked into the register the data word again moves step by step to the right, and after the last bit has been shifted in the complete data word will be assembled within the register .At this point the parallel data may be retrieved by simply reading out the data from individual register stages in parallel It is important that the number of stages in the shift register should match the number of bits in the data word, if the data is to be properly converted into parallel form.To achieve proper operation of the receiving end of a serial data link, it isimportant that the clock pulse is applied to the receive shift register at a time when the data level on the serial line is stable. 
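The parallel-to-serial and serial-to-parallel conversion just described can be modelled in a few lines. The Python sketch below is only a behavioural illustration of an 8-bit shift register, not code for any particular device; in hardware the same stepping action is performed by the chain of D-type flip-flops the passage describes.

```python
# Behavioural model of the 8-bit shift registers described above: the word is loaded
# in parallel, shifted out one bit per clock pulse, and reassembled at the far end.
def parallel_to_serial(word: int, width: int = 8):
    """Parallel-load a word, then shift one bit onto the line per clock pulse (MSB first)."""
    register = [(word >> i) & 1 for i in range(width)]  # register[0] = LSB ... register[-1] = MSB
    for _ in range(width):
        yield register[-1]              # the bit at the end of the chain drives the serial line
        register = [0] + register[:-1]  # each clock pulse steps the word along by one stage

def serial_to_parallel(bits, width: int = 8) -> int:
    """Clock serial bits (MSB first) into a receive register, then read it out in parallel."""
    register = [0] * width
    for bit in bits:
        register = register[1:] + [bit]  # each clock pulse shifts the register and takes in one bit
    return sum(b << i for i, b in enumerate(reversed(register)))

if __name__ == "__main__":
    data = 0b10110010
    line_bits = list(parallel_to_serial(data))
    print("serial stream:", line_bits)
    print("recovered word:", bin(serial_to_parallel(line_bits)))
```

The receive register here must have the same number of stages as there are bits in the word, exactly as the passage requires, otherwise the recovered parallel word is misaligned.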
It is possible to have the clock generated at either end of the link, but a convenient scheme is to generate the clock signal at the transmitting end (parallel-serial conversion )as the master timing signal. To allow for settling time and delays along the line, the active edge of the clock pulse at the receive end is delayed relative to that which operates the transmit register. If the clock is a square wave the simples approach might be to arrange that the transmit register operates on the rising edge of the clock wave, and the receive register on the falling edge, so that the receiver operates half a clock period behind the transmitter .If both registers operate on arising edge, the clock signal from the transmitter could be inverted before being used to drive the receive shifty register.For an 8 bit system a sequence of 8 clock pulses would be needed to send the serial data word .At the receiving end the clock pulses could be counted and when the eighth pulse is reached it might be assumed that the data in the receive register is correctly positioned, and may be read out as parallel data word .One problem here is that, if for some reason the receive register missed a clock pulse ,its data pattern would get out of step with the transmitted data and errors would result. To overcome this problem a further signal is required which defines the time at which the received word is correctly positioned in the receive shift register and ready for parallel transfer from the register .One possibility is to add a further signal wire along which a pulse is sent when the last data bit is being transmitted, so that the receiver knows when the data word is correctly set up in its shift register. Another scheme might be to send clock pulses only when data bits are being sent and to leave a timing gap between the groups of bits for successive data words. The lack of the clock signal could then be detected and used to reset the bit counter, so that it always starts at zero at the beginning of each new data word.Serial and Parallel Data lion is processed. Serial indicates that the information is handled sequentially, similar to a group of soldiers marching in single file. In parallel transmission the info The terms serial and parallel are often used in descriptions of data transmission techniques. Both refer to the method by which information isdivided in to characters, words, or blocks which are transmitted simultaneously. This could be compared to a platoon of soldiers marching in ranks.The output of a common type of business machine is on eight—level punched paper tape, or eight bits of data at a time on eight separate outputs. Each parallel set of eight bits comprises a character, and the output is referred to as parallel by bit, serial by character. The choice of cither serial or parallel data transmission speed requirements.Business machines with parallel outputs, how—ever, can use either parallel outputs, how—ever, can use either direct parallel data trans—mission or serial transmission, with the addition of a parallel—to—serial converter at the interface point of the business machine and the serial data transmitter. Similarly, another converter at the receiving terminal must change the serial data back to the parallel format.Both serial and parallel data transmission systems have inherent advantages which are some—what different. 
Parallel transmission requires that parts of the available bandwidth be used as guard bands for separating each of the parallel channels, whereas serial transmission systems can use the entire linear portion of the available band to transmit data, On the other hand, parallel systems are convenient to use because many business machines have parallel inputs and outputs. Though a serial data set has the added converters for parallel interface, the parallel transmitter re—quires several oscillators and filters to generate the frequencies for multiplexing each of the side—by—side channels and, hence, is more susceptible to frequency error.StandardsBecause of the wide variety of data communications and computer equipment available, industrial standards have been established to provide operating compatibility. These standards have evolved as a result of the coordination between manufacturers of communication equipment and the manufacturers of data processing equipment. Of course, it is to a manufacturer’s advantage to provide equipment that isuniversally acceptable. It is also certainly apparent that without standardization intersystem compatibility would be al—most impossible.Organizations currently involved in uniting the data communications and computer fields are the CCITT, Electronic Industries Association (EIA), American Standards Association (ASA), and IEEE.A generally accepted standard issued by the EIA, RS—232—B, defines the characteristics of binary data signals, and provides a standard inter—face for control signals between data processing terminal equipment and data communications equipment. As more and more data communications systems are developed, and additional ways are found to use them, the importance ways are found to use them, the importance of standards will become even more significant.Of the most important considerations in transmitting data over communication systems is accuracy. Data signals consist of a train of pulses arranged in some sort of code. In a typical binary system, for example, digits 1 and 0 are represented by two different pulse amplitudes. If the amplitude of a pulse changes beyond certain limits during transmission, the detector at the receiving end may produce the wrong digit, thus causing an error.It is very difficult in most transmission systems to completely avoid. This is especially true when transmission system designed for speech signals. Many of the inherent electrical characteristics of telephone circuits have an adverse effect on digital signals.Making the circuits unsatisfactory for data transmission—especially treated before they can be used to handle data at speeds above 2000 bits per second.V oice channels on the switched (dial—up) telephone network exhibit certain characteristics which tend to distort typical data signal waveforms. Since there is random selection of a particular route for the data signal with each dialed connection, transmission parameters will generally change, sometimes upsetting the effect of built—in compensationNetworks. In addition, the switched network cannot be used of for large multipleaddress data systems using time sharing. Because of these considerations, specially treated voice bandwidth circuits are made available for data use. 
The characteristics and costs of these point—to—point private lines are published in document called tariffs, which are merely regulatory agreements reached by the FCC, state public utilities commissions, and operating telephone companies regarding charges for particular types of telephone circuits. The main advantage of private or dedicated facilities is that transmission characteristics are fixed and remain so for all data communications operations.Correlative TechniqueCorrelative data transmission techniques, particularly the Duobinary principle, have aroused considerable interest because of the method of converting a binary signal into three equidistant levels. This correlative scheme is accomplished in such a manner that the predetermined level depends on past signal history, forming the signal so that it never goes from one level extreme to another in one bit interval.The most significant property of the Duobinary process is that it affords a two—to—one bandwidth compression relative to binary signaling, or equivalently twice the speed capability in bits per second for a fixed bandwidth. The same speed capability for a multilevel code would normally require four levels, each of which would represent two binary digits.The FutureIt is universally recognized that communication is essential at every level of organization. The United States Government utilizes vast communications network for voice as well as data transmission. Likewise, business need communications to carry on their daily operations.The communications industry has been hard at work to develop systems that will transmit data economically and reliably over both private—line and dial up telephone circuits. The most ardent trend in data transmission today is toward higher speeds over voice—grade telephone channels. New transmission and equalization techniques now being investigated will soon permit transmitting digital data over telephone channels at speeds of 4800 bits per second or higher.To summarize: The major demand placed on telecommunications systems is for more information-carrying capacity because the volume of information produced increases rapidly. In addition, we have to use digital technology for the high reliability and high quality it provides in the signal transmission. However, this technology carries a price: the need for higher information-carrying capacity.The Need for Fiber-Optic Communications Systems The major characteristic of a telecommunications system is unquestionably its information-carrying capacity, but there are many other important characteristics. For instance, for a bank network, security is probably more important than capacity. For a brokerage house, speed of transmission is the most crucial feature of a network. In general, though, capacity is priority one for most system users. And there’s the rub. We cannot increase link capacity as much as we would like. The major limit is shown by the Shannon-Hartley theorem,Where C is the information-carrying capacity(bits/sec), BW is the link bandwidth (Hz=cycles/sec), and SNR is the signal-to-noise power ratio.Formula 1.1 reveals a limit to capacity C; thus, it is often referred to as the “ Shannon limit.” The formula, which comes from information theory, is true regardless of specific technology. It was first promulgated in 1948 by Claude Shannon, a scientist who worked at Bell Laboratories. R. V. L. 
Hartley, who also worked at Bell Laboratories, published a fundamental paper 20 years earlier, a paper that laid important groundwork in information theory, which is why his name is associated with Shannon’s formula.The Shannon-Hartley theorem states that information-carrying capacity is proportional to channel bandwidth, the range of frequencies within which the signals can be transmitted without substantial attenuation.What limits channel bandwidth? The frequency of the signal carrier. The higher the carrier’s frequency, the greater the channel bandwidth and the higher the information-carrying capacity of the system. The rule of thumb for estimating possible order of values is this: Bandwidth is approximately 10 percent of the carrier-signal frequency. Hence, if a microwave channel uses a 10-GHz carrier signal.Then its bandwidth is about 100 MHz.A copper wire can carry a signal up to 1 MHz over a short distance. A coaxial cable can propagate a signal up to 100 MHz. Radio frequencies are in the range of 500 KHz to 100 MHz. Microwaves, including satellite channels, operate up to 100 GHz. Fiber-optic communications systems use light as the signal carrier; light frequency is between 100 and 1000 THz; therefore, one can expect much more capacity from optical systems. Using the rule of thumb mentioned above, we can estimate the bandwidth of a single fiber-optic communication link as 50 THz.To illustrate this point, consider these transmission media in terms of their capacity to carry, simultaneously, a specific number of one-way voice channels. Keep in mind that the following precise value. A single coaxial cable can carry up to 13,000 channels, a microwave terrestrial link up to 20,000 channels, and a satellite link up to 100,000 channels. However, one fiber-optic communications link, such as the transatlantic cable TAT-13, can carry 300,000 two-way voice channels simultaneously. That’s impressive and explains why fiber-optic communications systems form the backbone of modern telecommunications and will most certainly shape its future.To summarize: The information-carrying capacity of a telecommunications system is proportional to its bandwidth, which in turn is proportional to the frequency of the carrier. Fiber-optic communications systems use light-a carrier with the highest frequency among all the practical signals. This is why fiber-optic communications systems have the highest information-carrying capacity and this is what makes these systems the linchpin of modern telecommunications.To put into perspective just how important a role fiber-optic communications will be playing in information delivery in the years ahead, consider the following statement from a leading telecommunications provider: “ The explosive growth of Internet traffic, deregulation and the increasing demand of users are putting pressure on our customers to increase the capacity of their network. Only optical networks can deliver the required capacity, and bandwidth-on-demand is now synonymous with wavelength-on-demand.” Th is statement is true not only for a specific telecommunications company. With a word change here and there perhaps, but withthe same exact meaning, you will find telecommunications companies throughout the world voicing the same refrain.A modern fiber-optic communications system consists of many components whose functions and technological implementations vary. This is overall topic of this book. 
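The Shannon-Hartley relation referred to above (Formula 1.1, which did not survive in this copy) is C = BW * log2(1 + SNR), with C in bits per second, BW the bandwidth in hertz and SNR the signal-to-noise power ratio. The sketch below applies it, together with the text's 10-percent-of-carrier rule of thumb, to the media mentioned in this section; the 30 dB SNR is an assumed value chosen only for illustration.

```python
import math

# Shannon-Hartley capacity: C = BW * log2(1 + SNR).
# The 30 dB SNR is an assumed value; the bandwidths follow the figures in the text.
def capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

def bandwidth_from_carrier(carrier_hz: float) -> float:
    return 0.10 * carrier_hz          # the text's rule of thumb: BW is about 10% of the carrier

if __name__ == "__main__":
    examples = {
        "copper wire (~1 MHz)": 1e6,
        "coaxial cable (~100 MHz)": 100e6,
        "optical fiber (10% of a 500 THz carrier)": bandwidth_from_carrier(500e12),
    }
    for name, bw in examples.items():
        print(f"{name}: C = {capacity_bps(bw, 30.0):.2e} bit/s")
```

Even at a fixed SNR, the jump of many orders of magnitude in bandwidth is what the passage means by light being the carrier with the highest practical frequency.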
In this section we introduce the main idea underlying a fiber-optic communications system.Basic Block DiagramA fiber-optic communications system is a particular type of telecommunications system. The features of a fiber-optic communications system can be seen in Figure 1.4, which displays its basic block diagram.Information to be conveyed enters an electronic transmitter, where it is prepared for transmission very much in the conventional manner-that is, it is converted into electrical form, modulated, and multiplexed. The signal then moves to the optical transmitter, where it is converted into optical detector converts the light back into an electrical signal, which is processed by the electronic receiver to extract the information and present it in a usable form (audio, video, or data output).Let’s take a simple example that involves Figures 1.1, 1.3, and 1.4 Suppose we need to transmit a voice signal. The acoustic signal (the information) is converted into electrical form by a microphone and the analog signal is converted into binary formby the PCM circuitry. This electrical digital signal modulates a light source and the latter transmits the signal as a series of light pulses over optical fiber. If we were able to look into an optical fiber, we would see light vary between off and on in accordance with the binary number to be transmitted. The optical detector converts the optical signal it receives into a set of electrical pulses that are processed by an electronic receiver. Finally, a speaker converts the analog electrical signal into acoustic waves and we can hear sound-delivered information.Figure 1.4 shows that this telecommunications system includes electronic components and optical devices. The electronic components deal with information in its original and electrical forms. The optical devices prepare and transmit the light signal. The optical devices constitute a fiber-optic communications system.TransmitterThe heart of the transmitter is a light source. The major function of a light source is to convert an information signal from its electrical form into light. Today’sfiber-optic communications systems use, as a light source, either light-emitting diodes (LEDs) or laser diodes (LDs). Both are miniature semiconductor devices that effectively convert electrical signals are usually fabricated in one integrated package. In Figure 1.4, this package is denoted as an optical transmitter. Figure 1.5 displays the physical make-up of an LED, an LD, and integrated packages.Optical fiberThe transmission medium in fiber-optic communications systems is an optical fiber. The optical fiber is the transparent flexible filament that guides light from a transmitter to a receiver. An optical information signal entered at the transmitter end of a fiber-optic communications system is delivered to the receiver end by the optical fiber. So, as with any communication link, the optical fiber provides the connection between a transmitter and a receiver and, very much the way copper wire and coaxial cable conduct an electrical signal, optical fiber “ conducts” light.The optical fiber is generally made from a type of glass called silica or, less commonly nowadays, from plastic. It is about a human hair in thickness. To protect very fragile optical fiber from hostile environments and mechanical damage, it is usually enclosed in a specific structure. 
Bare optical fiber, shielded by its protective coating, is encapsulated use in a host of applications, many of which will be covered in subsequent chaptersReceiver The key component of an optical receiver is its photodetector. The major function of a photodetector is to convert an optical information signal back into an electrical signal (photocurrent). The photodetector in today's fiver-optic communications systems is a semiconductor photodiode (PD). This miniature device is usually fabricated together with its electrical circyitry to form an integrated package that provides power-supply connections and signal amplification. Such an integrated package is shown in Figure 1.4 as an optical receiver. Figure 1.7 shows samples of a photodiode and an integrated package.The basic diagram shown in Figure 1.4 gives us the first idea of what a fiber-optic communications system is and how it works. All the components of this point-to-point system are discussed in detail in this book. Particular attention is given to the study of networks based on fiber-optic communications systems.The role of Fiber-Optic Communications Technology has not only already changed the landscape of telecommunications but it is still doing so and at a mind-boggling pace. In fact, because of the telecommunications industry's insatiable appetite for capacity, in recent years the bandwidth of commercial systems has increased more than a hundredfold. The potential information-carrying capacity of a single fiber-optic channel is estimated at 50 terabits a second (Tbit/s) but, from apractical standpoint, commercial links have transmitted far fewer than 100 Gbps, an astoundingamount of data in itself that cannot be achieved with any other transmission medium. Researchers and engineers are working feverishly to develop new techniques that approach the potential capacity limit.Two recent major technological advances--wavelength-division multiplexing (WDM) anderbium-doped optical-fiber amplifiers (EDFA)--have boosted the capacity of existing system sand have brought about dramatic improvements in the capacity of systems now in development. In fact,' WDM is fast becoming the technology of choice in achieving smooth, manageable capacity expansion.The point to bear in mind is this: Telecommunications is growing at a furious pace, and fiber-optic communications is one of its most dynamically moving sectors. While this book refleets the current situation in fiber-optic communications technology, to keep yourself updated, you have to follow the latest news in this field by reading the industry's trade journals, attending technical conferences and expositions, and finding the time to evaluate the reams of literature that cross your desk every day from companies in the field.光纤通信系统一般的通信系统由下列部分组成:(1) 信息源。
Recent Advances in Underwater Acoustic Communications & Networking

Abstract: The past three decades have seen a growing interest in underwater acoustic communications. Continued research over the years has resulted in improved performance and robustness as compared to the initial communication systems. Research has expanded from point-to-point communications to include underwater networks as well. A series of review papers provide an excellent history of the development of the field until the end of the last decade. In this paper, we aim to provide an overview of the key developments, both theoretical and applied, in the field in the past two decades. We also hope to provide an insight into some of the open problems and challenges facing researchers in this field in the near future.
Chinese-English Translation on the RF Aspects of Optical Fiber Communication: RF and Microwave Fiber-Optic Design Guide
Name: Class: Student No.:

RF and Microwave Fiber-Optic Design Guide

Agere Systems Inc., through its predecessors, began developing and producing lasers and detectors for linear fiber-optic links nearly two decades ago. Over time, these optoelectronic components have been continually refined for integration into a variety of systems that require high fidelity, high frequency, or long-distance transportation of analog and digital signals. As a result of this widespread use and development, by the late 1980s these link products were routinely being treated as standard RF and microwave components in many different applications.

There are several notable advantages of fiber optics that have led to its increasing use. The most immediate benefit of fiber optics is its low loss. With less than 0.4 dB/km of optical attenuation, fiber-optic links send signals tens of kilometers and still maintain nearly the original quality of the input.

The low fiber loss is also independent of frequency for most practical systems. With laser and detector speeds up to 18 GHz, links can send high-frequency signals in their original form without the need to downconvert or digitize them for the transmission portion of a system. As a result, signal conversion equipment can be placed in convenient locations or even eliminated altogether, which often leads to significant cost and maintenance savings.

Savings are also realized due to the mechanical flexibility and light weight of fiber-optic cable, approximately 1/25 the weight of waveguide and 1/10 that of coax. Many transmission lines can be fed through small conduits, allowing for high signal rates without investing in expensive architectural supports. The placement of fiber cable is further simplified by the natural immunity of optical fiber to electromagnetic interference (EMI). Not only can large numbers of fibers be tightly bundled with power cables, they also provide a uniquely secure and electrically isolated transmission path.

The general advantages of fiber optics first led to their widespread use in long-haul digital telecommunications. In the most basic form of fiber-optic communications, light from a semiconductor laser or LED is switched on and off to send digitally coded information through a fiber to a photodiode receiver. By comparison, in linear fiber-optic systems developed by Lucent, the light sent through the fiber has an intensity directly related to the input electrical current. While this places extra requirements on the quality of the lasers and photodiodes, it has been essential in many applications to transmit arbitrary RF and microwave signals. As a result, tens of thousands of Agere Systems' transmitters are currently in use.

The information offered here examines the basic link components and provides an overview of design calculations related to gain, bandwidth, noise, dynamic range and distortion. A section on fiber-optic components discusses a number of key parameters, among them wavelength and loss, dispersion, reflections, polarization and attenuation. Additional information evaluates optical isolators, distributed-feedback lasers and Fabry-Perot lasers, predistortion, and short- vs. long-wavelength transmission.

One of the main uses of linear fiber-optic links is to transmit or receive RF and microwave signals between electronic equipment and a remotely located antenna.
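To put the quoted 0.4 dB/km attenuation in context, here is a small link-budget sketch. Only the fiber loss figure comes from the text; the launch power, receiver sensitivity and connector and splice losses are assumed, illustrative values.

```python
# Illustrative fiber-optic link budget. Only the 0.4 dB/km attenuation comes from the
# text above; every other figure is an assumption chosen for the sketch.
def link_margin_db(length_km: float,
                   tx_power_dbm: float = 0.0,          # assumed launch power
                   rx_sensitivity_dbm: float = -25.0,  # assumed receiver sensitivity
                   fiber_loss_db_per_km: float = 0.4,  # attenuation quoted in the text
                   connector_loss_db: float = 1.0,     # assumed total connector loss
                   splice_loss_db: float = 0.5) -> float:
    total_loss = length_km * fiber_loss_db_per_km + connector_loss_db + splice_loss_db
    return tx_power_dbm - total_loss - rx_sensitivity_dbm  # margin above the receiver sensitivity

if __name__ == "__main__":
    for km in (10, 30, 50):
        print(f"{km} km link: margin = {link_margin_db(km):.1f} dB")
```

Under these assumptions a link still has several dB of margin at 50 km, which is the point the text makes about sending signals tens of kilometers with nearly the original quality.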
Appendix 1. English original:

Detecting Anomaly Traffic using Flow Data in the real VoIP network

I. INTRODUCTION

Recently, many SIP [3]/RTP [4]-based VoIP applications and services have appeared and their penetration ratio is gradually increasing due to the free or cheap call charge and the easy subscription method. Thus, some of the subscribers to the PSTN service tend to change their home telephone services to VoIP products. For example, companies in Korea such as LG Dacom, Samsung Networks, and KT have begun to deploy SIP/RTP-based VoIP services. It is reported that more than five million users have subscribed to the commercial VoIP services and 50% of all the users joined in 2009 in Korea [1]. According to IDC, it is expected that the number of VoIP users in the US will increase to 27 million in 2009 [2]. Hence, as the VoIP service becomes popular, it is not surprising that a lot of VoIP anomaly traffic is already known [5]. Most commercial services such as VoIP services should therefore provide essential security functions regarding privacy, authentication, integrity and non-repudiation for preventing malicious traffic. Particularly, most current SIP/RTP-based VoIP services supply only the minimal security function related to authentication. Though secure transport-layer protocols such as Transport Layer Security (TLS) [6] or Secure RTP (SRTP) [7] have been standardized, they have not been fully implemented and deployed in current VoIP applications because of the overheads of implementation and performance. Thus, unencrypted VoIP packets can be easily sniffed and forged, especially in wireless LANs. In spite of authentication, authentication keys such as MD5 in the SIP header can be maliciously exploited, because SIP is a text-based protocol and unencrypted SIP packets are easily decoded. Therefore, VoIP services are very vulnerable to attacks exploiting SIP and RTP.

We aim at proposing a VoIP anomaly traffic detection method using the flow-based traffic measurement architecture. We consider three representative VoIP anomalies called CANCEL Denial of Service (DoS), BYE DoS and RTP flooding attacks in this paper, because we found that malicious users in a wireless LAN can easily perform these attacks in the real VoIP network. For monitoring VoIP packets, we employ the IETF IP Flow Information eXport (IPFIX) [9] standard that is based on NetFlow v9. This traffic measurement method provides a flexible and extensible template structure for various protocols, which is useful for observing SIP/RTP flows [10]. In order to capture and export VoIP packets into IPFIX flows, we define two additional IPFIX templates for SIP and RTP flows. Furthermore, we add four IPFIX fields to observe packets which are necessary to detect VoIP source spoofing attacks in WLANs.

II. RELATED WORK

[8] proposed a flooding detection method based on the Hellinger Distance (HD) concept, presenting INVITE, SYN and RTP flooding detection methods. The HD is the difference value between a training data set and a testing data set. The training data set collects traffic over n sampling periods of a given duration, and the testing data set collects the traffic that follows it over a period of the same duration. If the HD is close to '1', the testing data set is regarded as anomaly traffic. To use this method, they assumed that the initial training data set did not contain any anomaly traffic. Since this method was based on packet counts, it might not easily be extended to detect anomaly traffic other than flooding.
On the other hand, [11] has proposed a VoIP anomaly traffic detection method using an Extended Finite State Machine (EFSM), suggesting INVITE flooding, BYE DoS anomaly traffic and media spamming detection methods. However, the state machine requires more memory because it has to maintain state for each flow. [13] has presented NetFlow-based VoIP anomaly detection methods for INVITE, REGISTER, RTP flooding, and REGISTER/INVITE scan, but the VoIP DoS attacks considered in this paper were not covered. In [14], an IDS approach to detect SIP anomalies was developed, but only simulation results are presented. For monitoring VoIP traffic, SIPFIX [10] has been proposed as an IPFIX extension. The key ideas of SIPFIX are application-layer inspection and SDP analysis for carrying media session information. Yet, that paper presents only the possibility of applying SIPFIX to DoS anomaly traffic detection and prevention. We described the preliminary idea of detecting VoIP anomaly traffic in [15]. This paper elaborates the BYE DoS anomaly traffic and RTP flooding anomaly traffic detection methods based on IPFIX. Based on [15], we have considered SIP and RTP anomaly traffic generated in a wireless LAN. In this case, it is possible to generate anomaly traffic that looks similar to normal VoIP traffic, because attackers can easily extract normal user information from unencrypted VoIP packets. In this paper, we have extended the idea with additional SIP detection methods using information from wireless LAN packets. Furthermore, we have shown real experiment results on a commercial VoIP network.

III. THE VOIP ANOMALY TRAFFIC DETECTION METHOD

A. CANCEL DoS Anomaly Traffic Detection

As the SIP INVITE message is not usually encrypted, attackers can extract the fields necessary to reproduce a forged SIP CANCEL message by sniffing SIP INVITE packets, especially in wireless LANs. Thus, we cannot tell the difference between a normal SIP CANCEL message and a replicated one, because the faked CANCEL packet includes the normal fields inferred from the SIP INVITE message. The attacker will perform the SIP CANCEL DoS attack on the same wireless LAN, because the purpose of the SIP CANCEL attack is to prevent normal call establishment while a victim is waiting for calls. Therefore, as soon as the attacker catches a call invitation message for a victim, it sends a SIP CANCEL message, which makes the call establishment fail. We have generated a faked SIP CANCEL message using a sniffed SIP INVITE; the SIP header of this CANCEL message is the same as that of a normal SIP CANCEL message, because the attacker can obtain the SIP header fields from unencrypted normal SIP messages in a wireless LAN environment. Since it is therefore impossible to detect the CANCEL DoS anomaly traffic using SIP headers alone, we use values from the wireless LAN frame instead. That is, the sequence number in the frame will tell the difference between a victim host and an attacker. We look into the source MAC address and sequence number in the MAC frame carrying a SIP CANCEL message, as shown in Algorithm 1. We compare the source MAC address of SIP CANCEL packets with that of the previously saved SIP INVITE flow. If the source MAC address of a SIP CANCEL flow has changed, it is highly probable that the CANCEL packet was generated by an unknown user. However, the source MAC address could itself be spoofed. For source spoofing detection, we employ the method in [12] that uses the sequence numbers of frames. We calculate the gap between the n-th and (n-1)-th frames.
As the sequence number field in a MAC header uses 12 bits, it varies from 0 to 4095. When we find that the sequence number gap within a single SIP flow is greater than a threshold value N, which is set from the experiments, we determine that the SIP host address has been spoofed for the anomaly traffic.

B. BYE DoS Anomaly Traffic Detection

In commercial VoIP applications, SIP BYE messages use the same authentication field that is included in the SIP INVITE message for security and accounting purposes. However, attackers can reproduce BYE DoS packets by sniffing normal SIP INVITE packets in a wireless LAN; the faked SIP BYE message is the same as the normal SIP BYE. Therefore, it is difficult to detect the BYE DoS anomaly traffic using only the SIP header. By sniffing a SIP INVITE message, an attacker on the same or a different subnet can terminate the normal in-progress call, because it can succeed in sending a BYE message to the SIP proxy server. The SIP BYE attack is difficult to distinguish from the normal call termination procedure. Hence, we apply the timestamp of RTP traffic to detect the SIP BYE attack. Generally, after normal call termination, the bi-directional RTP flow is terminated within a brief space of time. However, if the call termination procedure is anomalous, we can observe that one directional RTP media flow is still ongoing, whereas the attacked directional RTP flow is broken. Therefore, in order to detect the SIP BYE attack, we watch for a directional RTP flow during a time threshold of N seconds after the SIP BYE message; the threshold N is also set from the experiments. Algorithm 2 explains the procedure to detect BYE DoS anomaly traffic using the captured timestamp of the RTP packet. We maintain SIP session information between clients with INVITE and OK messages including the same Call-ID and the 4-tuple (source/destination IP address and port number) of the BYE packet. We set a time threshold value by adding N seconds to the timestamp value of the BYE message. The reason why we use the captured timestamp is that a few RTP packets may still be observed for under a second after the BYE. If RTP traffic is observed after the time threshold, this is considered a BYE DoS attack, because a VoIP session terminated with a normal BYE message would have stopped by then.

C. RTP Anomaly Traffic Detection

Algorithm 3 describes an RTP flooding detection method that uses the SSRC and sequence numbers of the RTP header. During a single RTP session, typically, the same SSRC value is maintained. If the SSRC changes, it is highly probable that an anomaly has occurred. In addition, if there is a big sequence number gap between RTP packets, we determine that anomalous RTP traffic has appeared. As inspecting every sequence number of every packet is difficult, we calculate the sequence number gap using the first, last, maximum and minimum sequence numbers. In the RTP header, the sequence number field uses 16 bits, ranging from 0 to 65535. When we observe a wide sequence number gap in our algorithm, we consider it an RTP flooding attack.

IV. PERFORMANCE EVALUATION

A. Experiment Environment

In order to detect VoIP anomaly traffic, we established an experimental environment as in Figure 1. In this environment, we employed two VoIP phones with wireless LANs, one attacker, a wireless access router and an IPFIX flow collector. For a realistic performance evaluation, we directly used one of the working VoIP networks deployed in Korea, where an 11-digit telephone number (070-XXXX-XXXX) has been assigned to each SIP phone. With the wireless SIP phones, we could make calls to/from the PSTN or cellular phones.
In the wireless access router, we used two wireless LAN cards: one to support the AP service, and the other to monitor packets. Moreover, in order to observe VoIP packets in the wireless access router, we modified nProbe [16], an open IPFIX flow generator, to create and export IPFIX flows related to SIP and RTP. As the IPFIX collector, we modified libipfix so that it could provide the IPFIX flow decoding function for the SIP and RTP templates. We used MySQL for the flow DB.

B. Experimental Results

In order to evaluate our proposed algorithms, we generated 1,946 VoIP calls with two commercial SIP phones and a VoIP anomaly traffic generator. Table I shows our experimental results with precision, recall, and F-score, which is the harmonic mean of precision and recall. In CANCEL DoS anomaly traffic detection, our algorithm produced a few false negative cases, which were related to the gap threshold of the sequence number in the MAC header. The average F-score value for detecting the SIP CANCEL anomaly is %. For the BYE anomaly tests, we generated 755 BYE messages including 118 BYE DoS anomalies in the experiment. The proposed BYE DoS anomaly traffic detection algorithm found 112 anomalies with an F-score of %. If an RTP flow is terminated before the threshold, we regard the anomalous flow as a normal one. In this algorithm, we extract RTP session information from INVITE and OK or session description messages using the same Call-ID as the BYE message. It is possible that those packets are not captured, resulting in a few false-negative cases. The RTP flooding anomaly traffic detection experiment for 810 RTP sessions resulted in an F-score of 98%. The false-positive cases were related to the sequence number in the RTP header: if the sequence numbers of anomaly traffic overlap with the range of the normal traffic, our algorithm considers it normal traffic.

V. CONCLUSIONS

We have proposed a flow-based anomaly traffic detection method against SIP- and RTP-based anomaly traffic in this paper. We presented VoIP anomaly traffic detection methods with flow data on the wireless access router. We used the IETF IPFIX standard to monitor SIP/RTP flows passing through wireless access routers, because its template architecture is easily extensible to several protocols. For this purpose, we defined two new IPFIX templates for SIP and RTP traffic and four new IPFIX fields for wireless LAN traffic. Using these IPFIX flow templates, we proposed CANCEL/BYE DoS and RTP flooding traffic detection algorithms. From experimental results on the working VoIP network in Korea, we showed that our method is able to detect three representative VoIP attacks on SIP phones. In the CANCEL/BYE DoS anomaly traffic detection method, we employed threshold values for time and sequence number gap to classify normal and abnormal VoIP packets. This paper has not discussed the test results for suitable threshold values. For future work, we will show experimental results on the evaluation of the threshold values for our detection method.
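The three detection rules described in the paper above (Algorithms 1 to 3) can be summarised in a short sketch. The Python fragment below is a hedged reconstruction of the logic as this excerpt describes it, not the authors' code: the threshold values, record layouts and function names are assumptions made for illustration.

```python
# Hedged reconstruction of the three detection rules described in the excerpt above.
# Threshold values and record layouts are assumptions for illustration, not the paper's.
SEQ_GAP_MAC = 50      # assumed 802.11 sequence-number gap threshold (12-bit counter)
BYE_WAIT_SEC = 5.0    # assumed N seconds to keep watching RTP after a BYE
SEQ_GAP_RTP = 1000    # assumed RTP sequence-number gap threshold (16-bit counter)

def cancel_dos_suspicious(prev_mac: str, curr_mac: str, prev_seq: int, curr_seq: int) -> bool:
    """Algorithm 1 (sketch): a CANCEL whose source MAC differs from the saved INVITE flow,
    or whose MAC-frame sequence number jumps, points to a spoofed host on the wireless LAN."""
    if curr_mac != prev_mac:
        return True
    gap = (curr_seq - prev_seq) % 4096           # the 12-bit counter wraps at 4096
    return gap > SEQ_GAP_MAC

def bye_dos_suspicious(bye_timestamp: float, rtp_timestamps: list) -> bool:
    """Algorithm 2 (sketch): RTP still flowing N seconds after the BYE suggests a forged BYE."""
    return any(t > bye_timestamp + BYE_WAIT_SEC for t in rtp_timestamps)

def rtp_flooding_suspicious(ssrc_values: list, first_seq: int, last_seq: int,
                            min_seq: int, max_seq: int) -> bool:
    """Algorithm 3 (sketch): a changing SSRC, or a sequence-number span much wider than the
    first-to-last progression of the session, suggests injected RTP traffic."""
    if len(set(ssrc_values)) > 1:
        return True
    observed_span = (max_seq - min_seq) % 65536  # the 16-bit counter wraps at 65536
    expected_span = (last_seq - first_seq) % 65536
    return observed_span - expected_span > SEQ_GAP_RTP
```

In the paper's architecture these checks would run over the IPFIX flow records exported by the wireless access router rather than over raw packets.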
Chinese-English reference material, foreign literature translation: Computer Networks
A computer network, often simply called a network, is a collection of computers and devices interconnected by communication channels that facilitate communication among users and allow users to share resources.
Networks may be classified according to a wide variety of characteristics.
A computer network allows resources and information to be shared among interconnected devices.
1. History
Early computer networking began in the late 1950s and included the military radar system known as the Semi-Automatic Ground Environment (SAGE) and its related commercial airline reservation system, the Semi-Automatic Business Research Environment (SABRE).
In 1957 the Soviet Union launched an artificial satellite into space.
Eighteen months later, the United States set up the Advanced Research Projects Agency (ARPA) and launched its own first artificial satellite.
This information was then shared with another computer on the ARPANET.
The person responsible for this work was the American researcher Dr. J. C. R. Licklider.
The ARPANET went into operation in 1969 and later evolved into what is now called the Internet.
In the 1960s, the Advanced Research Projects Agency (ARPA) began funding and designing the ARPANET for the U.S. Department of Defense.
Development of the Internet began in 1969; design and development had started in the 1960s, and on this basis the ARPANET evolved into the modern Internet.
2. Purpose
Computer networks can be used for a variety of purposes. Facilitating communications: using a network, people can communicate conveniently through e-mail, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.
Sharing hardware: in a networked environment, each computer on the network can access and use hardware resources on the network, for example printing a document on a shared network printer.
Sharing files, data, and information: in a network environment, authorized users may access data and information stored on other computers on the network.
The capability of providing access to data and information on shared storage devices is an important feature of many networks.
Sharing software: users can run application programs on remote computers over the network.
Information preservation.
Security.
3. Network classification
The following list shows the categories used for classifying networks.
3.1 Connection method
Computer networks can be classified according to the hardware and software technology used to interconnect the individual devices, such as optical fiber, local area networks (Ethernet), wireless LANs, home networking devices, cable communication, and G.hn (the wired home networking standard).
Ethernet is defined by the IEEE 802 family of standards and uses various media to enable communication between devices.
Frequently deployed devices include network hubs, switches, bridges, and routers.
Wireless LAN technology uses wireless devices to establish connections.
通信专业英语作文模板英文回答:1. What is the definition of communication?Communication is the process of effectively conveying a message from one person or group to another, with theintent of creating shared understanding. It involves the exchange of information, thoughts, feelings, and ideas through various channels, such as speaking, writing, gestures, and visual cues.2. What are the different types of communication?Verbal communication: Spoken or written words used to convey a message.Nonverbal communication: Body language, facial expressions, tone of voice, and eye contact that convey messages without words.Intrapersonal communication: Communication with oneself, involving internal thoughts, feelings, and self-reflection.Interpersonal communication: Communication between individuals, including conversations, discussions, and relationships.Mass communication: Dissemination of a message to alarge audience through media such as television, radio, and print.3. What are the key elements of effective communication?Clarity: The message is easy to understand and unambiguous.Accuracy: The message is truthful and represents the intended meaning.Relevance: The message is pertinent to the recipient's needs and interests.Timeliness: The message is delivered at an appropriate time.Completeness: The message includes all necessary information.Conciseness: The message is brief and to the point.Empathy: The message demonstrates understanding of the recipient's perspective.Feedback: The sender receives feedback to ensure the message has been received and understood.4. What are the barriers to effective communication?Language differences: Misunderstandings due to linguistic barriers.Cultural differences: Varying communication styles and protocols across cultures.Personal biases: Preconceived notions or prejudices that influence perception.Noise: Distractions that interfere with the transmission or reception of the message.Lack of attention: The recipient is not paying enough attention to the message.Emotional barriers: Strong emotions that hinder clear thinking and communication.5. What are the strategies for improving communication skills?Active listening: Paying full attention to the speaker and demonstrating comprehension.Effective speaking: Clearly and confidently expressing oneself with appropriate tone and body language.Feedback and clarification: Seeking and providing feedback to ensure understanding.Cultural sensitivity: Being aware of and adapting to different communication styles across cultures.Emotional management: Controlling emotions and maintaining a professional demeanor.Written communication skills: Writing emails, reports, and other documents effectively and clearly.中文回答:1. 什么是沟通?沟通是有效地将信息从一个人或群体传达给另一个人或群体,以期达成共同理解的过程。
Combined Adaptive Filter with LMS-Based Algorithms
Božo Krstajić, Ljubiša Stanković, and Zdravko Uskoković

Abstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them. As a criterion for comparison of the considered algorithms in the proposed filter, we take the ratio between bias and variance of the weighting coefficients. Simulation results confirm the advantages of the proposed adaptive filter.
Keywords: Adaptive filter, LMS algorithm, Combined algorithm, Bias and variance trade-off

1. Introduction
Adaptive filters have been applied in signal processing and control, as well as in many practical problems [1, 2]. Performance of an adaptive filter depends mainly on the algorithm used for updating the filter weighting coefficients. The most commonly used adaptive systems are those based on the Least Mean Square (LMS) adaptive algorithm and its modifications (LMS-based algorithms).
The LMS is simple to implement and robust in a number of applications [1-3]. However, since it does not always converge in an acceptable manner, there have been many attempts to improve its performance by appropriate modifications: the sign algorithm (SA) [8], the geometric mean LMS (GLMS) [5], and the variable step-size LMS (VS LMS) [6, 7].
Each of the LMS-based algorithms has at least one parameter that should be defined prior to the adaptation procedure (the step for LMS and SA; the step and smoothing coefficient for GLMS; various parameters affecting the step for VS LMS). These parameters crucially influence the filter output during two adaptation phases: transient and steady state. The choice of these parameters is mostly based on some kind of trade-off between the quality of algorithm performance in the mentioned adaptation phases.
We propose a possible approach for improving LMS-based adaptive filter performance. Namely, we make a combination of several LMS-based FIR filters with different parameters, and provide the criterion for choosing the most suitable algorithm for different adaptation phases. This method may be applied to all the LMS-based algorithms, although we here consider only several of them.
The paper is organized as follows. An overview of the considered LMS-based algorithms is given in Section 2. Section 3 proposes the criterion for evaluation and combination of adaptive algorithms. Simulation results are presented in Section 4.

2. LMS based algorithms
Let us define the input signal vector $X_k = [x(k)\ x(k-1)\ \dots\ x(k-N+1)]^T$ and the vector of weighting coefficients as $W_k = [W_0(k)\ W_1(k)\ \dots\ W_{N-1}(k)]^T$. The weighting coefficient vector should be calculated according to:
$W_{k+1} = W_k + 2\mu E\{e_k X_k\}$,  (1)
where $\mu$ is the algorithm step, $E\{\cdot\}$ is the estimate of the expected value, $e_k = d_k - W_k^T X_k$ is the error at the instant $k$, and $d_k$ is a reference signal. Depending on the estimation of the expected value in (1), one defines various forms of adaptive algorithms: the LMS, $E\{e_k X_k\} = e_k X_k$; the GLMS, $E\{e_k X_k\} = \sum_{i=0}^{k} a^{k-i} e_i X_i$, $0 < a \le 1$; and the SA, $E\{e_k X_k\} = X_k\,\mathrm{sign}(e_k)$ [1, 2, 5, 8]. The VS LMS has the same form as the LMS, but in the adaptation the step $\mu(k)$ is changed [6, 7].
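To make the update rule (1) concrete, the following sketch, written in Python rather than taken from the paper, performs one iteration of the LMS, SA, and GLMS weight updates for an FIR filter; the function name, argument names, and the recursive form used for the GLMS gradient estimate are illustrative assumptions.

```python
import numpy as np

def lms_family_update(w, x_buf, d, mu, mode="lms", grad_avg=None, a=0.9):
    """One update of an LMS-based FIR adaptive filter.

    w        : current weight vector W_k (length N)
    x_buf    : input vector X_k = [x(k), x(k-1), ..., x(k-N+1)]
    d        : reference sample d_k
    mu       : step size
    mode     : "lms", "sa" (sign algorithm) or "glms" (geometric mean LMS)
    grad_avg : previous GLMS estimate of E{e_k X_k} (None on the first call)
    a        : GLMS smoothing coefficient, 0 < a <= 1
    """
    e = d - w @ x_buf                       # e_k = d_k - W_k^T X_k
    if mode == "lms":
        grad = e * x_buf                    # E{e_k X_k} ~ e_k X_k
    elif mode == "sa":
        grad = np.sign(e) * x_buf           # E{e_k X_k} ~ X_k sign(e_k)
    elif mode == "glms":
        inst = e * x_buf
        grad = inst if grad_avg is None else a * grad_avg + inst
    else:
        raise ValueError("unknown mode")
    w_next = w + 2.0 * mu * grad            # Eq. (1): W_{k+1} = W_k + 2*mu*E{e_k X_k}
    return w_next, e, grad
```

The GLMS estimate is computed recursively as grad_k = a*grad_{k-1} + e_k*X_k, which reproduces the geometrically weighted sum given above.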
The considered adaptive filtering problem consists in trying to adjust a set of weighting coefficients so that the system output, $y_k = W_k^T X_k$, tracks a reference signal, assumed as $d_k = W_k^{*T} X_k + n_k$, where $n_k$ is a zero-mean Gaussian noise with the variance $\sigma_n^2$, and $W_k^*$ is the optimal weight vector (Wiener vector). Two cases will be considered: $W_k^* = W^*$ is a constant (stationary case), and $W_k^*$ is time-varying (nonstationary case). In the nonstationary case the unknown system parameters (i.e. the optimal vector $W_k^*$) are time variant. It is often assumed that the variation of $W_k^*$ may be modeled as $W_{k+1}^* = W_k^* + Z_k$, where $Z_k$ is a zero-mean random perturbation, independent of $X_k$ and $n_k$, with the autocorrelation matrix $G = E[Z_k Z_k^T] = \sigma_Z^2 I$. Note that the analysis for the stationary case follows directly by setting $\sigma_Z^2 = 0$. The weighting coefficient vector converges to the Wiener one if the condition from [1, 2] is satisfied.
Define the weighting coefficient misalignment [1-3], $V_k = W_k - W_k^*$. It is due to both the effects of gradient noise (weighting coefficient variations around the average value) and the weighting vector lag (difference between the average and the optimal value) [3]. It can be expressed as:
$V_k = (W_k - E\{W_k\}) + (E\{W_k\} - W_k^*)$.  (2)
According to (2), the i-th element of $V_k$ is:
$V_i(k) = (W_i(k) - E\{W_i(k)\}) + (E\{W_i(k)\} - W_i^*(k)) = \rho_i(k) + \mathrm{bias}(W_i(k))$,  (3)
where $\mathrm{bias}(W_i(k))$ is the weighting coefficient bias and $\rho_i(k)$ is a zero-mean random variable with the variance $\sigma^2$. The variance depends on the type of LMS-based algorithm, as well as on the external noise variance $\sigma_n^2$. Thus, if the noise variance is constant or slowly varying, $\sigma^2$ is time invariant for a particular LMS-based algorithm. In that sense, in the analysis that follows we will assume that $\sigma^2$ depends only on the algorithm type, i.e. on its parameters.
An important performance measure for an adaptive filter is the mean square deviation (MSD) of the weighting coefficients. For the adaptive filters, it is given by [3]: $MSD = \lim_{k\to\infty} E[V_k^T V_k]$.

3. Combined adaptive filter
The basic idea of the combined adaptive filter lies in the parallel implementation of two or more adaptive LMS-based algorithms, with the choice of the best among them in each iteration [9]. The choice of the most appropriate algorithm, in each iteration, reduces to the choice of the best value for the weighting coefficients. The best weighting coefficient is the one that is, at a given instant, the closest to the corresponding value of the Wiener vector.
Let $W_i(k, q)$ be the i-th weighting coefficient of the LMS-based algorithm with the chosen parameter q at an instant k. Note that one may now treat all the algorithms in a unified way (LMS: q = µ, GLMS: q = a, SA: q = µ). LMS-based algorithm behavior is crucially dependent on q. In each iteration there is an optimal value $q_{opt}$, producing the best performance of the adaptive algorithm. Analyze now a combined adaptive filter with several LMS-based algorithms of the same type, but with different parameter q.
The weighting coefficients are random variables distributed around $W_i^*(k)$, with $\mathrm{bias}(W_i(k,q))$ and the variance $\sigma_q^2$, related by [4, 9]:
$|W_i(k,q) - W_i^*(k) - \mathrm{bias}(W_i(k,q))| \le \kappa \sigma_q$,  (4)
where (4) holds with the probability P(κ), dependent on κ. For example, for κ = 2 and a Gaussian distribution, P(κ) = 0.95 (two sigma rule).
Define the confidence intervals for $W_i(k,q)$ [4, 9]:
$D_i(k) = [\,W_i(k,q) - 2\kappa\sigma_q,\ W_i(k,q) + 2\kappa\sigma_q\,]$.  (5)
Then, from (4) and (5) we conclude that, as long as $\mathrm{bias}(W_i(k,q)) < \kappa\sigma_q$, we have $W_i^*(k) \in D_i(k)$, independently of q. This means that, for small bias, the confidence intervals for different q's of the same LMS-based algorithm intersect. When, on the other hand, the bias becomes large, then the central positions of the intervals for different q's are far apart, and they do not intersect.
Since we do not have a priori information about $\mathrm{bias}(W_i(k,q))$, we will use a specific statistical approach to get the criterion for the choice of adaptive algorithm, i.e. for the values of q. The criterion follows from the trade-off condition that bias and variance are of the same order of magnitude, i.e. $\mathrm{bias}(W_i(k,q)) \cong \kappa\sigma_q$ [4].
The proposed combined algorithm (CA) can now be summarized in the following steps:
Step 1. Calculate $W_i(k,q)$ for the algorithms with different q's from the predefined set $Q = \{q_1, q_2, \dots\}$.
Step 2. Estimate the variance $\sigma_q^2$ for each considered algorithm.
Step 3. Check whether the $D_i(k)$ intersect for the considered algorithms. Start from the algorithm with the largest value of variance, and go toward the ones with smaller values of variance. According to (4), (5) and the trade-off criterion, this check reduces to checking whether
$|W_i(k, q_m) - W_i(k, q_l)| < 2\kappa(\sigma_{q_m} + \sigma_{q_l})$  (6)
is satisfied, where $q_m, q_l \in Q$, and the following relation holds: $\forall q_h:\ \sigma_{q_m}^2 > \sigma_{q_h}^2 > \sigma_{q_l}^2 \Rightarrow q_h \notin Q$. If no $D_i(k)$ intersect (large bias), choose the algorithm with the largest value of variance. If the $D_i(k)$ intersect, the bias is already small; so, check a new pair of weighting coefficients or, if that is the last pair, just choose the algorithm with the smallest variance. The first two intervals that do not intersect mean that the proposed trade-off criterion is achieved; then choose the algorithm with the larger variance.
Step 4. Go to the next instant of time.
The smallest number of elements of the set Q is L = 2. In that case, one of the q's should provide good tracking of rapid variations (the largest variance), while the other should provide small variance in the steady state. Observe that by adding a few more q's between these two extremes, one may slightly improve the transient behavior of the algorithm.
Note that the only unknown values in (6) are the variances. In our simulations we estimate $\sigma_q^2$ as in [4]:
$\sigma_q^2 = \big(\mathrm{median}\{|W_i(k) - W_i(k-1)|\}/0.675\big)^2$,  (7)
for k = 1, 2, ..., L and $\sigma_Z^2 \ll \sigma_q^2$. The alternative way is to estimate $\sigma_n^2$ as:
$\sigma_n^2 \approx \frac{1}{T}\sum_{i=1}^{T} e_i^2$, for x(i) = 0.  (8)
Expressions relating $\sigma_n^2$ and $\sigma_q^2$ in steady state, for different types of LMS-based algorithms, are known from the literature. For the standard LMS algorithm in steady state, $\sigma_n^2$ and $\sigma_q^2$ are related by $\sigma_q^2 = q\,\sigma_n^2$ [3]. Note that any other estimation of $\sigma_q^2$ is valid for the proposed filter.
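As an illustration of Steps 1-3, the sketch below (Python, not from the paper; the array layout, default kappa value, and function names are assumptions) selects, for each weighting coefficient, among parallel branches ordered from largest to smallest variance by testing the interval-overlap condition (6), with the variance estimated as in (7).

```python
import numpy as np

def choose_branch(W, sigma, kappa=2.0):
    """Combined-algorithm decision of Step 3, applied per weighting coefficient.

    W     : array of shape (L, N) with the coefficient estimates of the L
            parallel branches, ordered from largest to smallest variance
    sigma : array of shape (L,) with the estimated standard deviations sigma_q
    Returns an index array (length N) giving the chosen branch per coefficient.
    """
    L, N = W.shape
    choice = np.empty(N, dtype=int)
    for i in range(N):
        chosen = L - 1                      # default: smallest-variance branch
        for m in range(L - 1):              # compare consecutive branches via (6)
            gap = abs(W[m, i] - W[m + 1, i])
            if gap >= 2.0 * kappa * (sigma[m] + sigma[m + 1]):
                chosen = m                  # intervals do not intersect: large bias,
                break                       # so keep the larger-variance branch
        choice[i] = chosen
    return choice

def estimate_sigma(w_now, w_prev):
    """Robust standard-deviation estimate corresponding to Eq. (7) for one branch."""
    return np.median(np.abs(w_now - w_prev)) / 0.675
```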
Complexity of the CA depends on the constituent algorithms (Step 1) and on the decision algorithm (Step 3). Calculation of the weighting coefficients for the parallel algorithms does not increase the calculation time, since it is performed by a parallel hardware realization, thus increasing the hardware requirements. The variance estimations (Step 2) contribute negligibly to the increase of algorithm complexity, because they are performed at the very beginning of adaptation and use separate hardware realizations. Simple analysis shows that the CA increases the number of operations by, at most, N(L-1) additions and N(L-1) IF decisions, and needs some additional hardware with respect to the constituent algorithms.

4. Illustration of combined adaptive filter
Consider a system identification by the combination of two LMS algorithms with different steps. Here, the parameter q is µ, i.e. $Q = \{\mu, \mu/10\}$. The unknown system has four time-invariant coefficients, and the FIR filters are with N = 4. We give the average mean square deviation (AMSD) for both individual algorithms, as well as for their combination, Fig. 1(a). Results are obtained by averaging over 100 independent runs (the Monte Carlo method), with µ = 0.1. The reference $d_k$ is corrupted by a zero-mean uncorrelated Gaussian noise with $\sigma_n^2 = 0.01$ and SNR = 15 dB, and κ is 1.75. In the first 30 iterations the variance was estimated according to (7), and the CA picked the weighting coefficients calculated by the LMS with µ.
As presented in Fig. 1(a), the CA first uses the LMS with µ and then, in the steady state, the LMS with µ/10. Note the region, between the 200th and 400th iteration, where the algorithm can take the LMS with either step size, in different realizations. Here, performance of the CA would be improved by increasing the number of parallel LMS algorithms with steps between these two extremes. Observe also that, in steady state, the CA does not ideally pick up the LMS with the smaller step. The reason is in the statistical nature of the approach.
The combined adaptive filter achieves even better performance if the individual algorithms, instead of starting an iteration with the coefficient values taken from their previous iteration, take the ones chosen by the CA. Namely, if the CA chooses, in the k-th iteration, the weighting coefficient vector $W_P$, then each individual algorithm calculates its weighting coefficients in the (k+1)-th iteration according to:
$W_{k+1} = W_P + 2\mu E\{e_k X_k\}$.  (9)
Fig. 1. Average MSD for the considered algorithms.
Fig. 2. Average MSD for the considered algorithms.
Fig. 1(b) shows this improvement, applied to the previous example. In order to clearly compare the obtained results, for each simulation we calculated the AMSD. For the first LMS (µ) it was AMSD = 0.02865, for the second LMS (µ/10) it was AMSD = 0.20723, for the CA (CoLMS) it was AMSD = 0.02720, and for the CA with modification (9) it was AMSD = 0.02371.

5. Simulation results
The proposed combined adaptive filter with various types of LMS-based algorithms is implemented for stationary and nonstationary cases in a system identification setup. Performance of the combined filter is compared with the individual ones that compose the particular combination. In all simulations presented here, the reference $d_k$ is corrupted by a zero-mean uncorrelated Gaussian noise with $\sigma_n^2 = 0.1$ and SNR = 15 dB. Results are obtained by averaging over 100 independent runs, with N = 4, as in the previous section.
(a) Time-varying optimal weighting vector: The proposed idea may be applied to the SA algorithms in a nonstationary case. In the simulation, the combined filter is composed of three SA adaptive filters with different steps, i.e. Q = {µ, µ/2, µ/8}; µ = 0.2. The optimal vector is generated according to the presented model with $\sigma_Z^2 = 0.001$, and with κ = 2. In the first 30 iterations the variance was estimated according to (7), and the CA takes the coefficients of the SA with µ (SA1). Figure 2(a) shows the AMSD characteristics for each algorithm.
In steady state the CA does not ideally follow the SA3 with µ/8, because of the nonstationary problem nature and a relatively small difference between the coefficient variances of the SA2 and SA3. However, this does not affect the overall performance of the proposed algorithm. The AMSD for each considered algorithm was: AMSD = 0.4129 (SA1, µ), AMSD = 0.4257 (SA2, µ/2), AMSD = 1.6011 (SA3, µ/8) and AMSD = 0.2696 (Comb).
(b) Comparison with the VS LMS algorithm [6]: In this simulation we take the improved CA (9) from 3.1, and compare its performance with the VS LMS algorithm [6], in the case of abrupt changes of the optimal vector. Since the considered VS LMS algorithm [6] updates its step size for each weighting coefficient individually, the comparison of these two algorithms is meaningful. All the parameters for the improved CA are the same as in 3.1. For the VS LMS algorithm [6], the relevant parameter values are the counter of sign change m0 = 11, and the counter of sign continuity m1 = 7. Figure 2(b) shows the AMSD for the compared algorithms, where one can observe the favorable properties of the CA, especially after the abrupt changes. Note that the abrupt changes are generated by multiplying all the system coefficients by -1 at the 2000-th iteration (Fig. 2(b)). The AMSD for the VS LMS was AMSD = 0.0425, while its value for the CA (CoLMS) was AMSD = 0.0323.
For a complete comparison of these algorithms we consider now their calculation complexity, expressed by the respective increase in the number of operations with respect to the LMS algorithm. The CA increases the number of required operations by N additions and N IF decisions. For the VS LMS algorithm, the respective increase is: 3N multiplications, N additions, and at least 2N IF decisions. These values show the advantage of the CA with respect to the calculation complexity.

6. Conclusion
A combination of LMS-based algorithms is proposed, resulting in an adaptive system that takes on the favorable properties of these algorithms in tracking parameter variations. In the course of the adaptation procedure it chooses the better algorithms, all the way to the steady state, when it takes the algorithm with the smallest variance of the weighting coefficient deviations from the optimal value.
Acknowledgement. This work is supported by the Volkswagen Stiftung, Federal Republic of Germany.
References
[1] Widrow, B.; Stearns, S.: Adaptive Signal Processing. Prentice-Hall, Inc., Englewood Cliffs, N.J. 07632.
[2] Alexander, S. T.: Adaptive Signal Processing - Theory and Applications. Springer-Verlag, New York, 1986.
[3] Widrow, B.; McCool, J. M.; Larimore, M. G.; Johnson, C. R.: Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter. Proc. IEEE 64 (1976), 1151-1161.
[4] Stankovic, L. J.; Katkovnik, V.: Algorithm for the instantaneous frequency estimation using time-frequency distributions with variable window width. IEEE SP Letters 5 (1998), 224-227.
[5] Krstajic, B.; Uskokovic, Z.; Stankovic, Lj.: GLMS Adaptive Algorithm in Linear Prediction. Proc. CCECE'97 I, pp. 114-117, Canada, 1997.
[6] Harris, R. W.; Chabries, D. M.; Bishop, F. A.: A Variable Step (VS) Adaptive Filter Algorithm. IEEE Trans. ASSP 34 (1986), 309-316.
[7] Aboulnasr, T.; Mayyas, K.: A Robust Variable Step-Size LMS-Type Algorithm: Analysis and Simulations. IEEE Trans. SP 45 (1997), 631-639.
[8] Mathews, V. J.; Cho, S. H.: Improved Convergence Analysis of Stochastic Gradient Adaptive Filters Using the Sign Algorithm. IEEE Trans. ASSP 35 (1987), 450-454.
[9] Krstajic, B.; Stankovic, Lj.; Uskokovic, Z.; Djurovic, I.: Combined adaptive system for identification of unknown systems with varying parameters in a noisy environment. Proc. IEEE ICECS'99, Paphos, Cyprus, Sept. 1999.

Combined Adaptive Filter Based on LMS Algorithms
Abstract: A combined adaptive filter is proposed.
Name: Junlin  Class: Communication 143  Student ID: 2014101108
Appendix 1. English original:
Detecting Anomaly Traffic using Flow Data in the real VoIP network
I. INTRODUCTION
Recently, many SIP [3]/RTP [4]-based VoIP applications and services have appeared and their penetration ratio is gradually increasing due to the free or cheap call charges and the easy subscription method. Thus, some of the subscribers to the PSTN service tend to change their home telephone services to VoIP products. For example, companies in Korea such as LG Dacom, Samsung Networks, and KT have begun to deploy SIP/RTP-based VoIP services. It is reported that more than five million users have subscribed to commercial VoIP services and 50% of all the users joined in 2009 in Korea [1]. According to IDC, it is expected that the number of VoIP users in the US will increase to 27 million in 2009 [2]. Hence, as the VoIP service becomes popular, it is not surprising that a lot of VoIP anomaly traffic is already known [5]. Most commercial services such as VoIP should therefore provide essential security functions regarding privacy, authentication, integrity and non-repudiation for preventing malicious traffic. In particular, most current SIP/RTP-based VoIP services supply only the minimal security function related to authentication. Though secure transport-layer protocols such as Transport Layer Security (TLS) [6] or Secure RTP (SRTP) [7] have been standardized, they have not been fully implemented and deployed in current VoIP applications because of the overheads of implementation and performance. Thus, unencrypted VoIP packets could be easily sniffed and forged, especially in wireless LANs. In spite of authentication, the authentication keys such as MD5 in the SIP header could be maliciously exploited, because SIP is a text-based protocol and unencrypted SIP packets are easily decoded. Therefore, VoIP services are very vulnerable to attacks exploiting SIP and RTP.
We aim at proposing a VoIP anomaly traffic detection method using the flow-based traffic measurement architecture. We consider three representative VoIP anomalies called CANCEL and BYE Denial of Service (DoS) and RTP flooding attacks in this paper, because we found that malicious users in a wireless LAN could easily perform these attacks in the real VoIP network. For monitoring VoIP packets, we employ the IETF IP Flow Information eXport (IPFIX) [9] standard that is based on NetFlow v9. This traffic measurement method provides a flexible and extensible template structure for various protocols, which is useful for observing SIP/RTP flows [10]. In order to capture and export VoIP packets into IPFIX flows, we define two additional IPFIX templates for SIP and RTP flows. Furthermore, we add four IPFIX fields to observe 802.11 packets, which are necessary to detect VoIP source spoofing attacks in WLANs.
II. RELATED WORK
[8] proposed a flooding detection method based on the Hellinger Distance (HD) concept. In [8], they presented INVITE, SYN and RTP flooding detection methods. The HD is the difference value between a training data set and a testing data set. The training data set collected traffic over n sampling periods of duration Δt. The testing data set collected traffic immediately after the training data set over the same period. If the HD is close to '1', the testing data set is regarded as anomaly traffic. For using this method, they assumed that the initial training data set did not have any anomaly traffic. Since this method was based on packet counts, it might not be easily extended to detect anomaly traffic other than flooding.
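For reference, the Hellinger-distance test of [8] can be sketched as follows; this is an illustrative Python reconstruction rather than the authors' code, and the message categories, counts, and detection threshold in the example are assumptions. It treats the packet counts of the training and testing windows as empirical distributions and flags the testing window when their distance approaches 1.

```python
import numpy as np

def hellinger_distance(train_counts, test_counts):
    """Hellinger distance between two packet-count histograms.

    Both inputs are per-type (or per-bin) packet counts collected over
    windows of the same duration; the result lies in [0, 1].
    """
    p = np.asarray(train_counts, dtype=float)
    q = np.asarray(test_counts, dtype=float)
    p = p / p.sum() if p.sum() > 0 else p
    q = q / q.sum() if q.sum() > 0 else q
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Example (assumed data): counts of [INVITE, 200 OK, ACK, BYE] per window.
train = [120, 118, 117, 30]          # assumed anomaly-free training window
test  = [950, 120, 118, 29]          # INVITE flood in the testing window
if hellinger_distance(train, test) > 0.6:   # threshold is an assumption
    print("flag window as flooding anomaly")
```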
On the other hand, [11] has proposed a VoIP anomaly traffic detection method using Extended Finite State Machine (EFSM). [11] has suggested INVITE flooding, BYE DoS anomaly traffic and media spamming detection methods. However, the state machine required more memory because it had to maintain each flow. [13] has presented NetFlow-based VoIP anomaly detection methods for INVITE, REGIS-TER, RTP flooding, and REGISTER/INVITE scan. How-ever, the VoIP DoS attacks considered in this paper were not considered. In [14], an IDS approach to detect SIP anomalies was developed, but only simulation results are presented. For monitoring VoIP traffic, SIPFIX [10] has been proposed as an IPFIX extension. The key ideas of the SIPFIX are application-layer inspection andSDP analysis for carrying media session information. Yet, this paper presents only the possibility of applying SIPFIX to DoS anomaly traffic detection and prevention. We described the preliminary idea of detecting VoIP anomaly traffic in [15]. This paper elaborates BYE DoS anomaly traffic and RTP flooding anomaly traffic detec-tion method based on IPFIX. Based on [15], we have considered SIP and RTP anomaly traffic generated in wireless LAN. In this case, it is possible to generate the similiar anomaly traffic with normal VoIP traffic, because attackers can easily extract normal user information from unencrypted VoIP packets. In this paper, we have extended the idea with additional SIP detection methods using information of wireless LAN packets. Furthermore, we have shown the real experiment results at the commercial VoIP network.III. THE VOIP ANOMALY TRAFFIC DETECTION METHODA. CANCEL DoS Anomaly Traffic DetectionAs the SIP INVITE message is not usually encrypted, attackers could extract fields necessary to reproduce the forged SIP CANCEL message by sniffing SIP INVITE packets, especially in wireless LANs. Thus, we cannot tell the difference between the normal SIP CANCEL message and the replicated one, because the faked CANCEL packet includes the normal fields inferred from the SIP INVITE message. The attacker will perform the SIP CANCEL DoS attack at the same wireless LAN, because the purpose of the SIP CANCEL attack is to prevent the normal call estab-lishment when a victim is waiting for calls. Therefore, as soonas the attacker catches a call invitation message for a victim, it will send a SIP CANCEL message, which makes the call establishment failed. We have generated faked SIP CANCEL message using sniffed a SIP INVITE message.Fields in SIP header of this CANCEL message is the same as normal SIP CANCEL message, because the attacker can obtain the SIP header field from unencrypted normal SIP message in wireless LAN environment. Therefore it is impossible to detect the CANCEL DoS anomaly traffic using SIP headers, we use the different values of the wireless LAN frame. That is, the sequence number in the 802.11 frame will tell the difference between a victim host and an attacker. We look into source MAC address and sequence number in the 802.11 MAC frame including a SIP CANCEL message as shown in Algorithm 1. We compare the source MAC address of SIP CANCEL packets with that of the previously saved SIP INVITE flow. If the source MAC address of a SIP CANCEL flow is changed, it will be highly probable that the CANCEL packet is generated by a unknown user. However, the source MAC address could be spoofed. Regarding 802.11 source spoofing detection, we employ the method in [12] that uses sequence numbers of 802.11 frames. 
We calculate the gap between n-th and (n-1)-th 802.11 frames. As the sequence number field in a 802.11 MAC header uses 12 bits, it varies from 0 to 4095. When we find that the sequence number gap between a single SIP flow is greater than the threshold value of N that will be set from the experiments, we determine that the SIP host address as been spoofed for the anomaly traffic.B. BYE DoS Anomaly Traffic DetectionIn commercial VoIP applications, SIP BYE messages use the same authentication field is included in the SIP IN-VITE message for security and accounting purposes. How-ever, attackers can reproduce BYE DoS packets through sniffing normal SIP INVITE packets in wireless LANs.The faked SIP BYE message is same with the normal SIP BYE. Therefore, it is difficult to detect the BYE DoS anomaly traffic using only SIP header information.After sniffing SIP INVITE message, the attacker at the same or different subnets could terminate the normal in- progress call, because it could succeed in generating a BYE message to the SIP proxy server. In the SIP BYE attack, it is difficult to distinguish from the normal call termination procedure. That is, we apply the timestamp of RTP traffic for detecting the SIP BYE attack. Generally, after normal call termination, the bi-directional RTP flow is terminated in a bref space of time. However, if the call termination procedure is anomaly, we can observe that a directional RTP media flow is still ongoing, whereas an attacked directional RTP flow is broken. Therefore, in order to detect the SIP BYE attack, we decide that we watch a directional RTP flow for a long time threshold of N sec after SIP BYE message. The threshold of N is also set from the experiments.Algorithm 2 explains the procedure to detect BYE DoS anomal traffic using captured timestamp of the RTP packet. We maintain SIP session information between clients with INVITE and OK messages including the same Call-ID and 4-tuple (source/destination IP Address and port number) of the BYEpacket. We set a time threshold value by adding Nsec to the timestamp value of the BYE message. The reason why we use the captured timestamp is that a few RTP packets are observed under 0.5 second. If RTP traffic is observed after the time threshold, this will be considered as a BYE DoS attack, because the VoIP session will be terminated with normal BYE messages. C. RTP Anomaly Traffic Detection Algorithm 3 describes an RTP flooding detection method that uses SSRC and sequence numbers of the RTP header. During a single RTP session, typically, the same SSRC value is maintained. If SSRC is changed, it is highly probable that anomaly has occurred. In addition, if there is a big sequence number gap between RTP packets, we determine that anomaly RTP traffic has happened. As inspecting every sequence number for a packet is difficult, we calculate the sequence number gap using the first, last, maximum and minimum sequence numbers. In the RTP header, the sequence number field uses 16 bits from 0 to 65535. When we observe a wide sequence number gap in our algorithm, we consider it as an RTP flooding attack.IV. PERFORMANCE EVALUATIONA. Experiment EnvironmentIn order to detect VoIP anomaly traffic, we established an experimental environment as figure 1. In this envi-ronment, we employed two VoIP phones with wireless LANs, one attacker, a wireless access router and an IPFIX flow collector. 
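Before turning to the experimental results, the three checks of Section III (Algorithms 1-3) can be summarized by the following sketch; it is an illustrative Python reconstruction over simplified flow records rather than the authors' implementation, and the record fields and the threshold constants N_SEQ_GAP, N_SEC, and RTP_GAP are assumptions.

```python
# Simplified flow records are dicts as they might come from the IPFIX collector.

N_SEQ_GAP = 30      # 802.11 sequence-number gap threshold (set experimentally)
N_SEC     = 5.0     # seconds to keep watching RTP after a BYE (set experimentally)
RTP_GAP   = 1000    # RTP sequence-number gap regarded as flooding

def cancel_dos(invite_flow, cancel_flow):
    """CANCEL DoS check: same SIP dialog, but a suspicious 802.11 sender."""
    if cancel_flow["src_mac"] != invite_flow["src_mac"]:
        return True
    gap = (cancel_flow["dot11_seq"] - invite_flow["dot11_seq"]) % 4096
    return gap > N_SEQ_GAP              # sequence gap too large: spoofing suspected

def bye_dos(bye_flow, rtp_flows):
    """BYE DoS check: one-directional RTP still flowing after BYE + N_SEC."""
    deadline = bye_flow["timestamp"] + N_SEC
    return any(r["call_id"] == bye_flow["call_id"] and r["last_ts"] > deadline
               for r in rtp_flows)

def rtp_flooding(rtp_flow):
    """RTP flooding check: SSRC change or a very large sequence-number gap."""
    if len(set(rtp_flow["ssrc_values"])) > 1:
        return True
    seq_span = (rtp_flow["max_seq"] - rtp_flow["min_seq"]) % 65536
    return seq_span - rtp_flow["packet_count"] > RTP_GAP
```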
For the realistic performance evaluation, we directly used one of the working VoIP networks deployed in Korea, where an 11-digit telephone number (070-XXXX-XXXX) has been assigned to a SIP phone. With wireless SIP phones supporting 802.11, we could make calls to/from the PSTN or cellular phones. In the wireless access router, we used two wireless LAN cards: one to support the AP service, and the other to monitor 802.11 packets. Moreover, in order to observe VoIP packets in the wireless access router, we modified nProbe [16], an open IPFIX flow generator, to create and export IPFIX flows related with SIP, RTP, and 802.11 information. As the IPFIX collector, we have modified libipfix so that it could provide the IPFIX flow decoding function for the SIP, RTP, and 802.11 templates. We used MySQL for the flow DB.
B. Experimental Results
In order to evaluate our proposed algorithms, we generated 1,946 VoIP calls with two commercial SIP phones and a VoIP anomaly traffic generator. Table I shows our experimental results with precision, recall, and F-score, which is the harmonic mean of precision and recall. In CANCEL DoS anomaly traffic detection, our algorithm produced a few false negative cases, which were related to the gap threshold of the sequence number in the 802.11 MAC header. The average F-score value for detecting the SIP CANCEL anomaly is 97.69%.
For the BYE anomaly tests, we generated 755 BYE messages including 118 BYE DoS anomalies in the experiment. The proposed BYE DoS anomaly traffic detection algorithm found 112 anomalies with an F-score of 96.13%. If an RTP flow is terminated before the threshold, we regard the anomaly flow as a normal one. In this algorithm, we extract RTP session information from INVITE and OK or session description messages using the same Call-ID as the BYE message. It is possible that those packets are not captured, resulting in a few false-negative cases. The RTP flooding anomaly traffic detection experiment for 810 RTP sessions resulted in an F-score of 98%. The false-positive cases were related to the sequence number in the RTP header: if the sequence number of anomaly traffic overlaps with the range of the normal traffic, our algorithm will consider it as normal traffic.
V. CONCLUSIONS
We have proposed a flow-based anomaly traffic detection method against SIP and RTP-based anomaly traffic in this paper. We presented VoIP anomaly traffic detection methods with flow data on the wireless access router. We used the IETF IPFIX standard to monitor SIP/RTP flows passing through wireless access routers, because its template architecture is easily extensible to several protocols. For this purpose, we defined two new IPFIX templates for SIP and RTP traffic and four new IPFIX fields for 802.11 traffic. Using these IPFIX flow templates, we proposed CANCEL/BYE DoS and RTP flooding traffic detection algorithms. From experimental results on the working VoIP network in Korea, we showed that our method is able to detect three representative VoIP attacks on SIP phones. In the CANCEL/BYE DoS anomaly traffic detection method, we employed threshold values for the time and sequence-number gap for classification of normal and abnormal VoIP packets. This paper has not presented test results on suitable threshold values. In future work, we will show experimental results on the evaluation of the threshold values for our detection method.
2. Translation: Detecting anomaly traffic using flow data in the real VoIP network
I. Introduction
Recently, many SIP [3]/RTP [4]-based VoIP applications and services have appeared, and their penetration ratio is gradually increasing due to the free or cheap call charges and the easy subscription method.
使用LabVIEW中的TCP/IP和UDP协议前言互联网络协议(IP),用户数据报协议(UDP)和传输控制协议(TCP)是网络通信的基本的工具。
TCP与IP的名称来自于一组最著名的因特网协议中的两个--传输控制协议和互联网络协议。
你能使用TCP/IP来进行单一网络或者互连网络间的通信。
单独的网络会被大的地理距离分隔。
TCP/IP把数据从一个子网网络或者因特网连接的计算机发送到另一个上。
因为TCP/IP 在大多数计算机上是可用的,它能在多样化的系统中间传送信息。
LabVIEW和TCP/IP你能在所有平台上的LabVIEW中使用TCP/IP。
LabVIEW包含了TCP和UDP程序还有能让你建立客户端或者服务器程序的功能。
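LabVIEW expresses these operations as graphical VIs, so the sketch below uses Python's standard socket module only to illustrate the same client/server idea in text form (open a connection, exchange data, close it); the host address, port number, and messages are arbitrary assumptions.

```python
import socket

HOST, PORT = "127.0.0.1", 6340   # assumed address and port of the server

def server():
    """Listen for one TCP connection, read a request, and answer it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)            # read up to 1024 bytes
            conn.sendall(b"echo: " + request)    # reply over the same connection

def client():
    """Connect to the server, send a message, and read the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello")
        print(cli.recv(1024).decode())
```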
IP
IP performs the low-level transfer of data between computers.
IP packages data into components called datagrams.
A datagram contains the data together with header fields that identify the source and destination addresses.
IP determines the correct path for a datagram to take across the network or the Internet so that the data reaches the specified destination.
The IP protocol does not guarantee delivery.
In fact, IP might deliver a single datagram more than once if the datagram is duplicated in transmission.
Therefore, programs rarely use IP directly; they use TCP or UDP instead.
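To illustrate the idea of a datagram that carries data plus header fields with source and destination addresses, the following Python sketch packs and unpacks a deliberately simplified toy header; the field layout is an assumption for illustration and is not the real IPv4 header format.

```python
import struct
import socket

# Toy header: 4-byte source address, 4-byte destination address, 2-byte length.
HEADER_FMT = "!4s4sH"

def build_datagram(src_ip, dst_ip, payload):
    """Prepend a simplified header to the payload."""
    header = struct.pack(HEADER_FMT,
                         socket.inet_aton(src_ip),
                         socket.inet_aton(dst_ip),
                         len(payload))
    return header + payload

def parse_datagram(datagram):
    """Split a datagram back into (source, destination, payload)."""
    size = struct.calcsize(HEADER_FMT)
    src, dst, length = struct.unpack(HEADER_FMT, datagram[:size])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), datagram[size:size + length]

dgram = build_datagram("192.168.0.5", "10.0.0.9", b"sensor reading 42")
print(parse_datagram(dgram))   # ('192.168.0.5', '10.0.0.9', b'sensor reading 42')
```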
UDP
UDP provides simple, low-level communication between computer processes.
Processes communicate by sending datagrams to a destination computer or port.
A port is the location to which you send data.
IP handles the computer-to-computer delivery.
After a datagram reaches the destination computer, UDP moves the datagram to its destination port.
If the destination port is not open, UDP discards the datagram.
UDP is subject to the same delivery problems as IP.
Consequently, UDP does not give applications strong reliability guarantees.
For example, an application might frequently transmit large amounts of data to a destination, where the loss of a small amount of data is acceptable.
Using UDP in LabVIEW
Because UDP is not a connection-based protocol like TCP, you do not need to establish a connection with a destination before you send or receive data.
Instead, you specify the destination for the data each time you send a datagram.
The operating system does not report transmission errors. Use the UDP Open function to open a UDP socket on a port.
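As with the TCP example earlier, the connectionless behaviour just described can be illustrated outside LabVIEW with Python's socket module; the port number and message contents are arbitrary assumptions. Note that no connection is established first, each send names its destination, and a datagram sent to a port nobody has opened is silently discarded.

```python
import socket

PORT = 61557   # assumed UDP port, analogous to opening a UDP socket in LabVIEW

def receiver():
    """Bind a UDP socket to the port and wait for one datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", PORT))
        data, sender = sock.recvfrom(1024)   # blocks until a datagram arrives
        print(f"received {data!r} from {sender}")

def sender():
    """Send one datagram; the destination is given on every send."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(b"temperature=21.5", ("127.0.0.1", PORT))
        # If no process has the port open, the datagram is simply dropped.
```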
Five Disruptive Technology Directions for 5G
ABSTRACT: New research directions will lead to fundamental changes in the design of future 5th generation (5G) cellular networks. This paper describes five technologies that could lead to both architectural and component disruptive design changes: device-centric architectures, millimeter wave, massive MIMO, smarter devices, and native support for machine-to-machine communication. The key ideas for each technology are described, along with their potential impact on 5G and the research challenges that remain.
I. INTRODUCTION
5G is coming. What technologies will define it? Will 5G be just an evolution of 4G, or will emerging technologies cause a disruption requiring a wholesale rethinking of entrenched cellular principles? This paper focuses on potential disruptive technologies and their implications for 5G. We classify the impact of new technologies, leveraging the Henderson-Clark model [1], as follows:
1. Minor changes at both the node and the architectural level, e.g., the introduction of codebooks and signaling support for a higher number of antennas. We refer to these as evolutions in the design.
2. Disruptive changes in the design of a class of network nodes, e.g., the introduction of a new waveform. We refer to these as component changes.
3. Disruptive changes in the system architecture, e.g., the introduction of new types of nodes or new functions in existing ones. We refer to these as architectural changes.
4. Disruptive changes that have an impact at both the node and the architecture levels. We refer to these as radical changes.
We focus on disruptive (component, architectural or radical) technologies, driven by our belief that the much higher aggregate data rates and the much lower latencies required by 5G cannot be achieved with a mere evolution of the status quo. We believe that the following five potentially disruptive technologies could lead to both architectural and component design changes, as classified in Figure 1.
1. Device-centric architectures. The base-station-centric architecture of cellular systems may change in 5G. It may be time to reconsider the concepts of uplink and downlink, as well as control and data channels, to better route information flows with different priorities and purposes towards different sets of nodes within the network. We present device-centric architectures in Section II.
2. Millimeter Wave (mmWave). While spectrum has become scarce at microwave frequencies, it is plentiful in the mmWave realm. Such a spectrum 'el Dorado' has led to a mmWave 'gold rush' in which researchers with diverse backgrounds are studying different aspects of mmWave transmission. Although far from fully understood, mmWave technologies have already been standardized for short-range services (IEEE 802.11ad) and deployed for niche applications such as small-cell backhaul. In Section III, we discuss the potential of mmWave for a broader application in 5G.
3. Massive-MIMO. Massive-MIMO proposes utilizing a very high number of antennas to multiplex messages for several devices on each time-frequency resource, focusing the radiated energy towards the intended directions while minimizing intra- and inter-cell interference. Massive-MIMO may require major architectural changes, in particular in the design of macro base stations, and it may also lead to new types of deployments. We discuss massive-MIMO in Section IV.
4. Smarter devices. 2G-3G-4G cellular networks were built under the design premise of having complete control at the infrastructure side.
We argue that 5G systems should drop this design assumption and exploit intelligence at the device side within different layers of the protocol stack, e.g., by allowing Device-to-Device (D2D) connectivity or by exploiting smart caching at the mobile side. While this design philosophy mainly requires a change at the node level (component change), it has also implications at the architectural level. We argue for smarter devices in Section V.5.Native support for Machine-to-Machine (M2M) communicationA native2 inclusion of M2M communication in 5G involves satisfying three fundamentally different requirements associated to different classes of low-data-rate services: support of a massive number of low-rate devices, sustainment of a minimal data rate in virtually all circumstances, and very-low-latency data transfer. Addressing these requirements in 5G requires new methods and ideas at both the component and architectural level, and such is the focus of Section VI.II.DEVICE-CENTRIC ARCHITECTURESCellular designs have historically relied on the axiomatic role of ‘cells’ as fundamental units within the radio access network. Under such a design postulate, a device obtains service by establishing a downlink and an uplink connection, carrying both control and data traffic, with the base station commanding the cell where the device is located. Over the last few years, different trends have been pointing to a disruption of this cell-centric structure:1.The base-station density is increasing rapidly, driven by the rise of heterogeneous networks. While heterogeneous networks were already standardized in 4G, the architecture was not natively designed to support them. Network densification could require some major changes in 5G. The deployment of base stations with vastly different transmit powers and coverage areas, for instance, calls for a decoupling of downlink and uplink in a way that allows for the corresponding information to flow through different sets of nodes [5].2.The need for additional spectrum will inevitably lead to the coexistence of frequency bands with radically different propagation characteristics within the same system. In this context, [6] proposes the concept of a ‘phantom cell’ where the data and control planes are separated: the control information is sent by high-power nodes at microwave frequencies whereas the payload data is conveyed by low-power nodes at mm-Wave frequencies. (cf. Section III.)3.A new concept termed centralized baseband related to the concept of cloud radioaccess networks is emerging (cf. [7]), where virtualization leads to a decoupling between a node and the hardware allocated to handle the processing associated with this node. Hardware resources in a pool, for instance, could be dynamically allocated to different nodes depending on metrics defined by the network operator.Emerging service classes, described in Section VI, could require a complete redefinition of the architecture. Current works are looking at architectural designs ranging from centralization or partial centralization (e.g., via aggregators) to full distribution (e.g., via compressed sensing and/or multihop).Cooperative communications paradigms such as CoMP or relaying, which despite falling short of their initial hype are nonetheless beneficial [8], could require a redefinition of the functions of the different nodes. 
In the context of relaying, for instance, recent developments in wireless network coding [9] suggest transmission principles that would allow recovering some of the losses associated with half-duplex relays. Moreover, recent research points to the plausibility of full-duplex nodes for short-range communication in a not-so-distant future. The use of smarter devices (cf. Section V) could impact the radio access network. In particular, both D2D and smart caching call for an architectural redefinition where the center of gravity moves from the network core to the periphery (devices, local wireless proxies, relays).
Based on these trends, our vision is that the cell-centric architecture should evolve into a device-centric one: a given device (human or machine) should be able to communicate by exchanging multiple information flows through several possible sets of heterogeneous nodes. In other words, the set of network nodes providing connectivity to a given device and the functions of these nodes in a particular communication session should be tailored to that specific device and session. Under this vision, the concepts of uplink/downlink and control/data channel should be rethought (cf. Figure 2).
While the need for a disruptive change in architectural design appears clear, major research efforts are still needed to transform the resulting vision into a coherent and realistic proposition. Since the history of innovations (cf. [1]) indicates that architectural changes are often the drivers of major technological discontinuities, we believe that the trends above might have a major influence on the development of 5G.
III. MILLIMETER WAVE COMMUNICATION
Microwave cellular systems have precious little spectrum: around 600 MHz are currently in use, divided among operators [10]. There are two ways to gain access to more microwave spectrum:
1. To repurpose or refarm spectrum. This has occurred worldwide with the repurposing of terrestrial TV spectrum for applications such as rural broadband access. Unfortunately, repurposing has not freed up that much spectrum, only about 80 MHz, and at a high cost associated with moving the incumbents.
2. To share spectrum utilizing, for instance, cognitive radio techniques. The high hopes initially placed on cognitive radio have been dampened by the fact that an incumbent not fully willing to cooperate is a major obstacle to spectrum efficiency for secondary users.
Altogether, it appears that a doubling of the current cellular bandwidth is the best-case scenario at microwave frequencies. Alternatively, there is an enormous amount of spectrum at mmWave frequencies ranging from 3 to 300 GHz. Many bands therein seem promising, including most immediately the local multipoint distribution service at 28-30 GHz, the license-free band at 60 GHz, and the E-band at 71-76 GHz, 81-86 GHz and 92-95 GHz. Foreseeably, several tens of GHz could become available for 5G, offering well over an order-of-magnitude increase over what is available at present. Needless to say, work needs to be done on spectrum policy to render these bands available for mobile cellular.
Propagation is not an insurmountable challenge. Recent measurements indicate similar general characteristics as at microwave frequencies, including distance-dependent pathloss and the possibility of non-line-of-sight communication.
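As a back-of-the-envelope check on the order-of-magnitude claim above, the snippet below adds up the candidate mmWave bands named in this section and compares the total with the roughly 600 MHz in use at microwave frequencies; it is illustrative only, and the assumed extent of the 60 GHz license-free band (57-64 GHz) is not given in the text.

```python
# Candidate mmWave bands mentioned above, in GHz (lower edge, upper edge).
bands = {
    "LMDS 28-30 GHz":     (28.0, 30.0),
    "60 GHz unlicensed":  (57.0, 64.0),   # assumed extent of the license-free band
    "E-band 71-76 GHz":   (71.0, 76.0),
    "E-band 81-86 GHz":   (81.0, 86.0),
    "E-band 92-95 GHz":   (92.0, 95.0),
}

total_ghz = sum(hi - lo for lo, hi in bands.values())
microwave_ghz = 0.6                       # ~600 MHz currently used by cellular
print(f"candidate mmWave spectrum: {total_ghz:.0f} GHz")
print(f"increase over today's microwave allocation: ~{total_ghz / microwave_ghz:.0f}x")
```

With these assumptions the candidate bands sum to roughly 22 GHz, consistent with the statement that several tens of GHz could become available.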
A main difference between microwave and mmWave frequencies is the sensitivity to blockages: the results in [11], for instance, indicate a pathloss exponent of 2 for line-of-sight propagation but 4 (plus an additional power loss) for non-line-of-sight. MmWave cellular research will need to incorporate sensitivity to blockages and more complex channel models into the analysis, and also study the effects of enablers such as higher density infrastructure and relays. Another enabler is the separation between control and data planes, already mentioned in Section II.Antenna arrays are a key feature in mmWave systems. Large arrays can be used to keep the antenna aperture constant, eliminating the frequency dependence of pathloss relative to omnidirectional antennas (when utilized at one side of the link) and providing a net array gain to counter the larger thermal noise bandwidth (when utilized at both sides of the link). Adaptive arrays with narrow beams also reduce the impact of interference, meaning that mmWave systems could more often operate in noise-limited rather than interference-limited conditions. Since meaningful communication might only happen under sufficient array gain, new random access protocols are needed that work when transmitters can only emit in certain directions and receivers can only receive from certain directions. Adaptive array processing algorithms are required that can adapt quickly when beams are blocked by people or when some device antennas become obscured by the user’s own body.MmWave systems also have distinct hardware constraints. A major one comes from the high power consumption of mixed signal components, chiefly the analog-to-digital (ADC) and digital-to-analog converters (DAC). Thus, the conventional microwave architecture where every antenna is connected to a high-rate ADC/DAC is unlikely to be applicable to mmWave without a huge leap forward in semiconductor technology. One alternative is a hybrid architecture where beamforming is performed in analog at RF and multiple sets of beamformers are connected to a small number of ADCs or DACS; in this alternative, signal processing algorithms are needed to steer the analog beamforming weights. Another alternative is to connect each RF chain to a 1-bit ADC/DAC, with very low power requirements; in this case, the beamforming would be performed digitally but on very noisy data. There are abundant research challenges in optimizing different transceiver strategies, analyzing their capacity, incorporating multiuser capabilities, and leveraging channel features such as sparsity.A data rate comparison between technologies is provided in Fig. 3, for certain simulation settings, in terms of mean and 5% outage rates. MmWave operation is seento provide very high rates compared to two different microwave systems. The gains exceed the 10x spectrum increase because of the enhanced signal power and reduced interference thanks to directional beamforming at both transmitter and receiver.IV.MASSIVE MIMOMassive MIMO (also referred to as ‘Large-Scale MIMO’ or ‘Large-Scale Antenna Systems’) is a form of multiuser MIMO in which the number of antennas at the base station is much larger than the number of devices per signaling resource [14]. Having many more base station antennas than devices renders the channels to the different devices quasi-orthogonal and very simple spatial multiplexing/de-multiplexing procedures quasi-optimal. 
The favorable action of the law of large numbers smoothens out frequency dependencies in the channel and, altogether, huge gains in spectral efficiency can be attained (cf. Fig. 4).In the context of the Henderson-Clark framework, we argue that massive-MIMO has a disruptive potential for 5G:At a node level, it is a scalable technology. This is in contrast with 4G, which, in many respects, is not scalable: further sectorization therein is not feasible because of (i) the limited space for bulky azimuthally-directive antennas, and (ii) the inevitable angle spread of the propagation; in turn, single-user MIMO is constrained by the limited number of antennas that can fit in certain mobile devices. In contrast, there is almost no limit on the number of base station antennas in massive- MIMO provided that time-division duplexing is employed to enable channel estimation through uplink pilots.It enables new deployments and architectures. While one can envision direct replacement of macro base stations with arrays of low-gain resonant antennas, other deployments are possible, e.g., conformal arrays on the facades of skyscrapers or arrays on the faces of water tanks in rural locations. Moreover, the same massive-MIMO principles that govern the use of collocated arrays of antennas applyalso to distributed deployments in which a college campus or an entire city could be covered with a multitude of distributed antennas that collectively serve many users (in this framework, the centralized baseband concept presented in Section II is an important architectural enabler).While very promising, massive-MIMO still presents a number of research challenges. Channel estimation is critical and currently it represents the main source of limitations. User motion imposes a finite coherence interval during which channel knowledge must be acquired and utilized, and consequently there is a finite number of orthogonal pilot sequences that can be assigned to the devices. Reuse of pilot sequences causes pilot contamination and coherent interference, which grows with the number of antennas as fast as the desired signals. The mitigation of pilot contamination is an active research topic. Also, there is still much to be learned about massive-MIMO propagation, although experiments thus far support the hypothesis of channel quasi-orthogonality. From an implementation perspective, massive-MIMO can potentially be realized with modular low-cost low-power hardware with each antenna functioning semi-autonomously, but a considerable development effort is still required to demonstrate the cost-effectiveness of this solution. Note that, at the microwave frequencies considered in this section, the cost and the energy consumption of ADCs/DACs are sensibly lower than at mmWave frequencies (cf. Section III).From the discussion above, we conclude that the adoption of massive-MIMO for 5G could represent a major leap with respect to today’s state-of-the-art in system and component design. To justify these major changes, massive-MIMO proponents should further work on solving the challenges emphasized above and on showing realistic performance improvements by means of theoretical studies, simulation campaigns, and testbed experiments.V.SMARTER DEVICESEarlier generations of cellular systems were built on the design premise of having complete control at the infrastructure side. 
In this section, we discuss some of the possibilities that can be unleashed by allowing the devices to play a more active role and, thereafter, how 5G’s design should account for an increase in device smartness. We focus on three different examples of technologies that could be incorporated into smarter devices, namely D2D, local caching, and advanced interference rejection.V.1 D2DIn voice-centric systems it was implicitly accepted that two parties willing to establish a call would not be in close proximity. In the age of data, this premise might no longer hold, and it could be common to have situations where several co-located devices would like to wirelessly share content (e.g., digital pictures) or interact (e.g., video gaming or social networking). Handling these communication scenarios via simply connecting through the network involves gross inefficiencies at various levels:1.Multiple wireless hops are utilized to achieve what requires, fundamentally, a single hop. This entails a multifold waste of signaling resources, and also a higher latency. Transmit powers of a fraction of a Watt (in the uplink) and several Watts (in the downlink) are consumed to achieve what requires, fundamentally, a few milliWatts. This, in turn, entails unnecessary levels of battery drain and of interference to all other devices occupying the same signaling resources elsewhere.2.Given that the pathlosses to possibly distant base stations are much stronger than the direct-link ones, the corresponding spectral efficiencies are also lower. While it is clear that D2D has the potential of handling local communication more efficiently, local high-data-rate exchanges could also be handled by other radio access technologies such as Bluetooth or Wi-Fi direct. Use cases requiring a mixture of local and nonlocal content or a mixture of low-latency and high- data-rate constraints (e.g., interaction between users via augmented reality), could represent more compelling reasons for the use of D2D. In particular, we envision D2D as an important enabler for applications requiring low-latency 3 , especially in future network deployments utilizing baseband centralization and radio virtualization (cf. Section I).From a research perspective, D2D communication presents relevant challenges:1.Quantification of the real opportunities for D2D. How often does local communication occur? What is the main use case for D2D: fast local exchanges, low-latency applications or energy saving?2.Integration of a D2D mode with the uplink/downlink duplexing structure.3.Design of D2D-enabled devices, from both a hardware and a protocol perspective, by providing the needed flexibility at both the PHY and MAC layers.4.Assessing the true net gains associated with having a D2D mode, accounting for possible extra overheads for control and channel estimation.5.Finally, note that, while D2D is already being studied in 3GPP as a 4G add-on2, the main focus of current studies is proximity detection for public safety [15]. What wediscussed here is having a D2D dimension natively supported in 5G.V.2 Local CachingThe current paradigm of cloud computing is the result of a progressive shift in the balance between data storage and data transfer: information is stored and processed wherever it is most convenient and inexpensive because the marginal cost of transferring it has become negligible, at least on wireline networks [2]. For wireless devices though, this cost is not always negligible. 
The understanding that mobile users are subject to sporadic ‘abundance’ of connectivity amidst stretches of ‘deprivation’ is hardly new, and the natural idea of opportunistically leveraging the former to alleviate the latter has been entertained since the 1990s [3]. However, this idea of caching massive amounts of data at the edge of the wireline network, right before the wireless hop, only applies to delay-tolerant traffic and thus it made little sense in voice-centric systems. Caching might finally make sense now, in data-centric systems [4]. Thinking ahead, it is easy to envision mobile devices with truly vast amounts of memory. Under this assumption, and given that a substantial share of the data that circulates wirelessly corresponds to the most popular audio/video/social content that is in vogue at a given time, it is clearly inefficient to transmit such content via unicast and yet it is frustratingly impossible to resort to multicast because the demand is asynchronous. We hence see local caching as an important alternative, both at the radio access network edge (e.g., at small cells) and at the mobile devices, also thanks to enablers such as mmWave and D2D.V.3 Advanced Interference RejectionIn addition to D2D capabilities and massive volumes of memory, future mobile devices may also have varying form factors. In some instances, the devices mightaccommodate several antennas with the consequent opportunity for active interference rejection therein, along with beamforming and spatial multiplexing. A joint design of transmitter and receiver processing, and proper control and pilot signals, are critical to allow advanced interference rejection. As an example, in Fig. 5 we show the gains obtained by incorporating the effects of nonlinear, intra and inter-cluster interference awareness into devices with 1, 2 and 4 antennas.While this section has been mainly focused on analyzing the implications of smarter devices at a component level, in Section II we discussed the impact at the radio access network architecture level. We regard smarter devices as having all the characteristic of a disruptive technology (cf. Section I) for 5G, and therefore we encourage researchers to further explore this direction.VI.NATIVE SUPPORT FOR M2M COMMUNICATIONWireless communication is becoming a commodity, just like electricity or water [13]. This commoditization, in turn, is giving rise to a large class of emerging services with new types of requirements. We point to a few representative such requirements, each exemplified by a typical service:1.A massive number of connected devices. Whereas current systems typically operate with, at most, a few hundred devices per base station, some M2M services might require over 104 connected devices. Examples include metering, sensors, smart grid components, and other enablers of services targeting wide area coverage.2.Very high link reliability. Systems geared at critical control, safety, or production, have been dominated by wireline connectivity largely because wireless links did not offer the same degree of confidence. As these systems transition from wireline to wireless, it becomes necessary for the wireless link to be reliably operational virtually all the time.3.Low latency and real-time operation. This can be an even more stringent requirement than the ones above, as it demands that data be transferred reliably within a given time interval. 
A typical example is Vehicle-to-X connectivity, whereby traffic safety can be improved through the timely delivery of critical messages (e.g., alert and control).

Fig. 6 provides a perspective on the M2M requirements by plotting the data rate vs. the device population size. This cartoon illustrates where systems currently stand and how the research efforts are expanding them. The area R1 reflects the operating range of today's systems, outlining the fact that the device data rate decreases as its population increases. In turn, R2 is the region that reflects current research aimed at improving the spectral efficiency. Finally, R5 indicates the region where operation is not feasible due to fundamental physical and information-theoretical limits.

Regions R3 and R4 correspond to the emerging services discussed in this section:

R3 refers to massive M2M communication where each connected machine or sensor transmits small data blocks sporadically. Current systems are not designed to simultaneously serve the aggregated traffic accrued from a large number of such devices. For instance, a current system could easily serve 5 devices at 2 Mbps each, but not 10,000 devices each requiring 1 kbps.

R4 demarcates the operation of systems that require high reliability and/or low latency, but with a relatively low average rate per device. The complete description of this region requires additional dimensions related to reliability and latency.

There are services that pose more than one of the above requirements simultaneously, but the common point is that the data size of each individual transmission is small, going down to several bytes. This profoundly changes the communication paradigm for the following reasons: existing coding methods that rely on long codewords are not applicable to very short data blocks, and short data blocks also exacerbate the inefficiencies associated with control and channel estimation overheads. Currently, the control plane is robust but suboptimal, as it represents only a modest fraction of the payload data; the most sophisticated signal processing is reserved for payload data transmission. An optimized design should aim at a much tighter coupling between the data and control planes.

As mentioned in Section II, the architecture needs a major redesign, looking at new types of nodes. At a system level, the frame-based approaches that are at the heart of 4G need rethinking in order to meet the requirements for latency and flexible allocation of resources to a massive number of devices. From the discussion above, from the related architectural considerations in Section II, and referring one last time to the Henderson-Clark model, we conclude that native support of M2M in 5G requires radical changes at both the node and the architecture level. Major research work remains to be done to come up with concrete and interworking solutions enabling 'M2M-inside' 5G systems.

VII. CONCLUSION

This paper has discussed five disruptive research directions that could lead to fundamental changes in the design of cellular networks. We have focused on technologies that could lead to both architectural and component design changes: device-centric architectures, mmWave, massive MIMO, smarter devices, and native support for M2M. It is likely that a suite of these solutions will form the basis of 5G.

REFERENCES

[1] A. Afuah, Innovation Management: Strategies, Implementation and Profits, Oxford University Press, 2003.
[2] J. Zander and P. Mähönen, "Riding the data tsunami in the cloud: myths and challenges in future wireless access," IEEE Comm. Magazine, Vol. 51, No. 3, pp. 145-151, Mar. 2013.
[3] D. Goodman, J. Borras, N. Mandayam, and R. D. Yates, "Infostations: A new system model for data and messaging services," in Proc. IEEE Veh. Techn. Conf. (VTC), vol. 2, pp. 969-973, Rome, Italy, May 1997.
[4] N. Golrezaei, A. F. Molisch, A. G. Dimakis, and G. Caire, "Femtocaching and device-to-device collaboration: A new architecture for wireless video distribution," IEEE Comm. Magazine, Vol. 51, No. 1, pp. 142-149, Apr. 2013.
[5] J. Andrews, "The seven ways HetNets are a paradigm shift," IEEE Comm. Magazine, Vol. 51, No. 3, pp. 136-144, Mar. 2013.
[6] Y. Kishiyama, A. Benjebbour, T. Nakamura, and H. Ishii, "Future steps of LTE-A: evolution towards integration of local area and wide area systems," IEEE Wireless Communications, Vol. 20, No. 1, pp. 12-18, Feb. 2013.
[7] "C-RAN: The road towards green RAN," China Mobile Res. Inst., Beijing, China, White Paper, ver. 2.5, Oct. 2011.
[8] A. Lozano, R. W. Heath Jr., and J. G. Andrews, "Fundamental limits of cooperation," IEEE Trans. Inform. Theory, Vol. 59, No. 9, pp. 5213-5226, Sep. 2013.
[9] C. D. T. Thai, P. Popovski, M. Kaneko, and E. de Carvalho, "Multi-flow scheduling for coordinated direct and relayed users in cellular systems," IEEE Trans. Comm., Vol. 61, No. 2, pp. 669-678, Feb. 2013.
[10] Z. Pi and F. Khan, "An introduction to millimeter-wave mobile broadband systems," IEEE Comm. Magazine, Vol. 49, No. 6, pp. 101-107, Jun. 2011.
[11] T. Rappaport et al., "Millimeter wave mobile communications for 5G cellular: It will work!" IEEE Access, vol. 1, pp. 335-349, 2013.
[12] R. W. Heath Jr., "What is the role of MIMO in future cellular networks: Massive? Coordinated? mmWave?" ICC Workshop Plenary: Beyond LTE-A, Budapest, Hungary. Slides available at: /~rheath/presentations/2013/Future_of_MIMO_Plenary_Heath.pdf
[13] M. Weiser, "The computer for the 21st century," Scientific American, Sept. 1991.
English Original

Mixed DSP/FPGA implementation of an error-resilient image transmission system based on JPEG2000

Marco Grangetto, Enrico Magli, Maurizio Martina, Fabrizio Vacca

Abstract

This paper describes a demonstrator of an error-resilient image communication system over wireless packet networks, based on the novel JPEG2000 standard. In particular, the decoder implementation is addressed, which is the most critical task in terms of complexity and power consumption, in view of use on a wireless portable terminal for cellular applications. The system implementation is based on a mixed DSP/FPGA architecture, which allows some computational tasks to be parallelized, thus leading to efficient system operation.

1 Introduction

Nowadays, there is a growing interest in the end-to-end transmission of images, especially motivated by the short-term deployment of next-generation mobile communication services (UMTS-IMT2000). However, transmission in a networked, tetherless environment provides both opportunities and challenges. The wireless context implies that the data may undergo bit errors and packet losses, making it necessary to foresee error recovery modalities. It is thereby necessary that image communication techniques are provided with the ability to recover, or at least conceal, the effect of such losses. The forthcoming JPEG2000 image compression standard has been designed to match these requirements, and embeds some error detection and concealment tools.

This paper addresses the development of a demonstrator of an error-resilient JPEG2000 decoder implementation for image communication over a lossy packet network. The robustness to packet erasures is achieved by combining the flexibility of the JPEG2000 framework with the power of source-channel adaptive, optimized Reed-Solomon codes. The decoder implementation is particularly significant in the context of wireless portable terminals for next-generation cellular systems, where the limited power budget and available dimensions impose severe constraints on the design of a multimedia processing system.

2 System overview

In the following we provide a brief description of the functional units of the implemented system.

2.1 JPEG2000 image compression

JPEG2000 is the novel ISO standard for still image coding, and is intended to provide innovative solutions according to the new trends in multimedia technologies. At the time of this writing, the standard is in an advanced publication stage; the Final Committee Draft is the most recent JPEG2000 description publicly available, which our implementation conforms to. JPEG2000 not only yields superior performance with respect to existing standards in terms of compression capability and subjective quality, but also numerous additional functionalities, such as lossless and lossy compression, progressive transmission, and error resilience. The architecture of JPEG2000 is based on the transform coding approach. An image may be divided into several sub-images (tiles), to reduce memory and computing requirements. A biorthogonal discrete wavelet transform (DWT) is first applied to each tile, whose output is a series of versions of the tile at different resolution levels (subbands); then, the transform coefficients are quantized, independently for each subband, with an embedded dead-zone quantizer.
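As a side illustration of this quantization step, the following minimal Python sketch shows dead-zone quantization and the matching dequantization used at a decoder. It is a conceptual sketch only, not the bit-exact JPEG2000 procedure; the step size and the reconstruction offset r = 0.5 are illustrative choices.

# Minimal sketch of dead-zone scalar quantization as used conceptually in
# JPEG2000 (not the bit-exact standard procedure). The step size `delta`
# and the reconstruction offset `r` are illustrative parameters.

def deadzone_quantize(y, delta):
    """Map a wavelet coefficient y to a signed integer index."""
    sign = 1 if y >= 0 else -1
    return sign * int(abs(y) // delta)

def deadzone_dequantize(q, delta, r=0.5):
    """Reconstruct a coefficient from its index (decoder side)."""
    if q == 0:
        return 0.0
    sign = 1 if q > 0 else -1
    return sign * (abs(q) + r) * delta

# Example: quantize and reconstruct a few coefficients with step size 10.
coeffs = [-27.3, -4.9, 0.0, 3.2, 41.7]
indices = [deadzone_quantize(c, 10.0) for c in coeffs]       # [-2, 0, 0, 0, 4]
recon = [deadzone_dequantize(q, 10.0) for q in indices]      # [-25.0, 0.0, 0.0, 0.0, 45.0]

Note how the dead zone around zero (twice the step size) maps small coefficients to zero, which is what makes the subsequent entropy coding of the index planes efficient.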
Each subband of the wavelet decomposition is divided into rectangular blocks (code-blocks), which are independently encoded with EBCOT (Embedded Block Coding with Optimized Truncation); the latter is based on a bit-plane approach (i.e., the most significant bits of the subband coefficients are transmitted first), context modeling, and arithmetic coding. The bit stream output by EBCOT is organized by the rate allocator into a sequence of layers, each layer containing contributions from each code-block; the truncation points associated with each layer are optimized in the rate-distortion sense. The final JPEG2000 bit stream consists of a main header, followed by one or more sections corresponding to individual tiles. Each tile comprises a tile header and a layered representation of the included code-blocks, organized into packets. In order to form a progressive bitstream, i.e., one that can be only partially decoded with minimal penalty, the layers are formed and ordered in such a way that the most important information is placed at the beginning of the bitstream. The JPEG2000 decoder performs exactly the same steps (except for rate allocation), in reverse order: syntax parsing, code-block decoding by EBCOT, inverse quantization, inverse DWT, and tile mosaicking; this is sketched in the right-hand-side box of Fig. 1.

2.2 Adaptive Reed-Solomon packet protection

Although JPEG2000 embodies advanced error concealment techniques to mitigate the effect of errors, it neither contains nor specifies any error correction method for recovering lost packets. On the other hand, packet losses are likely to occur in a network potentially subject to congestion, as is often the case in practice. In order to overcome this problem, a technique called Unequal Loss Protection (ULP) has recently been proposed, based on the joint use of RS codes and packet interleaving, as shown in the left-hand side of Fig. 1. Let us consider a maximum rate allocated to the image transmission, e.g., N packets of size L; the source bitstream is inserted row-wise in the interleaving matrix, followed by a proper amount of parity symbols, say Ti for the i-th row. The packets are read from the columns of the interleaver. The allocation problem consists in finding the optimal partitioning between source and code symbols for each row of the interleaver, so as to maximize the quality of service at the receiver (see the original ULP proposal for implementation details). At the decoder, due to the error correction capability of RS codes, the i-th row can be exactly recovered provided that the number of packet erasures has been less than Ti.

Figure 1: System architecture

2.3 Proposed system

In this paper we present a demonstrator of a complete decoder for image communication over a lossy packet network. The decoder consists of 1) the ULP decoder, followed by 2) the JPEG2000 decoder. The goal is to demonstrate that such tasks can be effectively performed on a modern DSP, satisfying real-time operation requirements. This objective has been accomplished by using a mixed DSP/FPGA architecture. In particular, the implementation has been developed using a Texas Instruments (TI) TMS320C6201 DSP board as the target device. The additional bitstream protection protocol, based on RS codes, has been implemented on a Virtex XCV1000 FPGA device from XILINX, which is able to concurrently exchange data with the DSP. As is shown in Sect. 4, the developed system is able to perform very fast decoding of still images, at a rate compatible with real-time video applications; thus, an extension to Motion JPEG2000 video coding can be envisaged, with minor modifications of the current demonstrator.

3 System architecture

Resorting to mixed DSP/FPGA architectures allows very-high-performance systems to be achieved, with excellent properties of reconfigurability. Preliminary partitioning studies have put into evidence that the wavelet transform and EBCOT are the most demanding tasks in the JPEG2000 decoding process. However, since many extensions of the standard are still possible, an all-DSP implementation offers an excellent degree of reconfigurability. On the other hand, the FPGA is well suited to the implementation of a Reed-Solomon core, needed to grant error resilience of the communication system. It is worth noticing that, with ULP, the RS decoder needs to be re-adapted to work with codewords that can carry different amounts of information symbols and, as a consequence, different amounts of protection symbols. The need for changing these parameters "on-the-fly" is perfectly fulfilled by an FPGA, by simply loading new values into some configuration registers. Moreover, the high memory bandwidth needed to quickly de-packetize and decode the received bit stream makes the use of a Reed-Solomon FPGA implementation very attractive. The JPEG2000 decoder module, entirely implemented on the DSP, is composed of four main blocks: syntax parser, entropy decoder (EBCOT), uniform scalar dequantizer, and inverse wavelet transform. Moreover, two additional tasks, devoted to communication management between the DSP, the FPGA and a personal computer, have been introduced.

3.1 Syntax parser

The parser is the functional block that interfaces the JPEG2000 decoder with the RS decoder. It retrieves RS-decoded packets, and extracts from the compressed JPEG2000 bitstream all the relevant information needed to perform image reconstruction. Firstly, the bit stream main header is read, which contains information on the parameters used during the encoding process (e.g., image size, wavelet filter used, number of decomposition levels, quantization thresholds, and so on). After that, tile headers are read, which provide information specific to each image tile. Finally, each packet contained in the bit stream is read, and the data and parameters of each code-block are extracted and fed as inputs to the EBCOT decoder.

3.2 EBCOT

Right after the bitstream syntax parser, the subsequent stage in the JPEG2000 decompression chain is the entropy decoder (EBCOT). From an algorithmic point of view, EBCOT is a block-based bitplane encoder followed by a reduced-complexity arithmetic coder (MQ). It subdivides each wavelet subband into a disjoint set of rectangular blocks, called code-blocks. Then the compression algorithm is independently applied to every code-block. The samples of every code-block are arranged into so-called bitplanes. To decode a code-block, EBCOT always starts from the most significant bitplanes, and then moves towards the least significant ones. The compressed information of every code-block is then arranged in several quality layers, to create a scalable compressed bit-stream. Conceptually, each quality layer monotonically increases the knowledge of sample magnitudes, i.e., increases the quality of the reconstructed image. Formally, EBCOT is made of three main steps, namely Significance Propagation (SP), Magnitude Refinement (MR), and Clean Up (CL).
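To make the bit-plane notion concrete, the short Python sketch below (illustrative only, and without the SP/MR/CL coding machinery) splits the magnitudes of a toy code-block into bitplanes in the order EBCOT visits them, from most to least significant; the 4-bit depth and the sample values are arbitrary.

# Illustrative decomposition of code-block magnitudes into bitplanes,
# most significant plane first (the order in which EBCOT processes them).
# Real EBCOT additionally tracks significance state and codes each plane
# in the SP, MR and CL passes; that machinery is omitted here.

def bitplanes(block, num_planes):
    planes = []
    for p in range(num_planes - 1, -1, -1):            # MSB plane first
        planes.append([[(abs(x) >> p) & 1 for x in row] for row in block])
    return planes

block = [[13, -6],
         [ 0,  9]]                                      # quantizer indices (with sign)

for i, plane in enumerate(bitplanes(block, 4)):
    print("plane", 3 - i, plane)
# plane 3 [[1, 0], [0, 1]]   -> 13 and 9 become significant here
# plane 2 [[1, 1], [0, 0]]
# plane 1 [[0, 1], [0, 0]]
# plane 0 [[1, 0], [0, 1]]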
Each of the above steps can resort to four decoding primitives, namely Zero Coding, Sign Coding, Magnitude Refinement Coding, and Run-Length Coding. The bitplane visiting order follows the sequence SP-MR-CL; it is worth noticing that every sample of a given code-block is processed in just one of the three steps. As far as computational complexity is concerned, CL demands the largest effort during the decoding of the most significant bitplanes. As SP steps are applied, an increasing number of samples become significant and are inserted in a list of MR-ready samples. Progressively, the load required by MR steps grows, making the decoder efficiency directly dependent on the MR and CL optimization level. During the development of the EBCOT decoder block, particular care has been devoted to the design of agile data structures, particularly suited to DSP-optimized C code for the MR and CL steps.

3.3 Uniform scalar dequantizer

According to the standard, the quantization method supported by JPEG2000 is uniform scalar quantization. Uniform scalar dequantization can be simply accomplished by means of a single multiplication for each wavelet coefficient.

3.4 Inverse wavelet transform

The discrete wavelet transform can be evaluated by means of a convolution-based kernel or a lifting-based kernel, the latter being the default transform kernel employed in JPEG2000. It has been demonstrated that the lifting scheme may run up to twice as fast as convolution. The wavelet transform has to be performed on both image rows and columns, in order to obtain a separable two-dimensional subband decomposition: JPEG2000 performs first the column-wise, and then the row-wise filtering. The default filter used for lossy compression is the well-known DB(9,7): since it does not have rational coefficients, particular care ought to be paid to the effects of finite-precision representation. Due to the use of a fixed-point TI TMS320C6201 DSP, a detailed study of internal data representation has been performed. Experimental results show that excellent perceptual quality can be achieved resorting to 9 fractional bits for the filter coefficients. In order to optimize the dynamic range around zero, a DC shift is foreseen by the standard, as the DC component could lead to an excessive growth of the dynamic range of the low-pass subband coefficients. Moreover, the low-pass filters can keep the samples in a fixed range, provided that a unitary DC filter gain is guaranteed. The joint effect of DC component suppression and unitary gain ensures that the range requirements are fulfilled during the whole wavelet transform.

3.5 Adaptive Reed-Solomon packet decoding

The de-interleaving RS decoder has been mapped on the FPGA device; it is split into two functional sub-blocks: the first is the de-interleaver, the second is the RS decoder. The former collects packets received from the network, filling the columns of a matrix, and transfers them, row by row, to the RS decoder core. The latter performs the decoding process in five calculation steps, namely syndromes, erasure locator polynomial, erasure evaluator polynomial, error values, and correction. The RS core has been designed and developed taking into account, as much as possible, the issue of modularity. Since RS codes are strongly based on Galois Field (GF) inner operations, the first goal has been to implement arithmetic units able to handle GF elements with good performance.
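As a software analogue of such arithmetic units, the following Python sketch implements GF(2^8) multiplication and division with the two-look-up-table approach discussed next; the primitive polynomial 0x11D is an assumption, since the paper does not state which generator the demonstrator uses.

# Illustrative GF(2^8) multiply/divide via log/antilog look-up tables,
# mirroring the two-table approach used by the FPGA RS core.
# The primitive polynomial 0x11D is an assumption; the paper does not
# specify the one actually used in the demonstrator.

PRIM = 0x11D
EXP = [0] * 512          # antilog table (doubled to avoid a modulo)
LOG = [0] * 256

x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    if b == 0:
        raise ZeroDivisionError("division by zero in GF(256)")
    if a == 0:
        return 0
    return EXP[LOG[a] - LOG[b] + 255]

# Sanity check: every nonzero element times its inverse equals 1.
assert all(gf_mul(a, gf_div(1, a)) == 1 for a in range(1, 256))

Each product thus costs two table reads, one addition and one more table read, which is the kind of regular, memory-friendly operation that maps well onto an FPGA.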
It is worth noticing that multiplications and divisions in Galois Fields are complex operations and, since the decoding calculation steps require a large number of multiplications, very much care has been given to the implementation of GF inner products. Resorting to two dedicated look-up tables has permitted very good performance to be achieved with reduced complexity. Furthermore, this RS implementation has excellent figures of modularity, since the five calculation steps can be computed with the same core. Selection of the proper input data and signals is a trivial task, delegated to the RS core control unit.

4 Results

In the following we present the results of tests performed on the proposed system. Results on visual quality witness the performance improvement that can be obtained by protecting the JPEG2000 stream with ULP encoding. Moreover, profiling results of the DSP/FPGA system are reported.

4.1 Visual quality

We report here the system performance in terms of the quality of service delivered at the receiver. The adopted joint source-channel coding strategy allows the powerful compression capabilities of JPEG2000 to be coupled with the error correction provided by RS codes. We tested the proposed approach using images of 256x256 size, a transmission rate equal to 0.5 bpp, i.e., 87 ATM packets of 47 bytes (plus 1 byte devoted to sequence numbering), and an RS code allocation optimized for an exponential packet loss model with a 20% mean loss rate.

In Fig. 2, visual results are presented. The right-hand-side image has been obtained by the proposed demonstrator, using RS codes and JPEG2000, with 20% lost packets. The image obtained by using JPEG2000 without FEC protection, and a single erased packet, is shown on the left-hand side for comparison; the quality degradation in this case is apparent.

4.2 System profiling

Code profiling has shown that JPEG2000 decoding can require a remarkable computational load. As shown in Tab. 1, the FPGA device grants very good performance in terms of both throughput and complexity. It is worth noticing that, even if a noteworthy amount of code has been mapped on the DSP, the system is still able to sustain an interesting decoding rate. The figures reported in Fig. 3 have been obtained in the same conditions as in Sect. 4.1, with 5 transform levels using the (9,7) filter, and an 8x8 code-block size.

5 Conclusions

In this paper we have described the implementation and performance of a JPEG2000 decoder demonstrator for image transmission over a lossy packet network. The resulting visual quality of the decoded images is good, even in the presence of a packet loss rate of 20%. Moreover, very fast JPEG2000 image decoding is supported, at a speed of about 250 ms for a 256x256 image. This opens the road to a low-complexity, fully software implementation of multimedia video systems, based on a frame-by-frame Motion JPEG2000 core, using a multiprocessor system for video delivery in consumer applications.

Acknowledgements

This project was partially supported under the Texas Instruments University Elite program.

Figure 2: Decoding from (a) the unprotected JPEG2000 bitstream, (b) the RS-protected bitstream

Chinese Translation

Mixed DSP/FPGA implementation of an error-resilient image transmission system based on JPEG2000

Marco Grangetto, Enrico Magli, Maurizio Martina, Fabrizio Vacca

Abstract: This paper describes the complete process of error-resilient image transmission over wireless networks based on the advanced JPEG2000 standard.
Foreign Literature Translation on 5G Wireless Communication Networks (the document contains the English original and the Chinese translation)

Translation: Cellular Architecture and Key Technologies for 5G Wireless Communication Networks

Abstract

Fourth-generation (4G) wireless communication systems have been deployed, or are about to be deployed, in many countries. However, with the explosion of wireless mobile devices and services, there are still challenges that 4G cannot accommodate, such as the spectrum crisis and high energy consumption. Wireless system designers are faced with the continuously growing demand for the high data rates and mobility required by new wireless applications, and have therefore started research on fifth-generation (5G) wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed.

Introduction

The proper use of information and communication technology (ICT) innovation is becoming increasingly important to the growth of the world economy. Wireless communication networks are perhaps the most critical element of the global ICT strategy, underpinning many other industries; wireless is one of the fastest growing and most dynamic sectors in the world. The European Mobile Observatory (EMO) reported that the mobile communications industry had total revenues of 174 billion euros in 2010, thereby surpassing the aerospace and pharmaceutical industries. The development of wireless technology has greatly improved people's ability to communicate and live, in both business operations and social functions. The remarkable success of wireless mobile communications is reflected in the rapid pace of technological innovation. From the first debut of second-generation (2G) mobile communication systems in 1991 to the first launch of third-generation (3G) systems in 2001, wireless mobile networks have transformed from a pure technology system into a network that can carry a large amount of multimedia content. 4G wireless systems were designed to fulfill the requirements of IMT-Advanced (IMT-A), using IP for all services. In 4G systems, an advanced radio interface is used, with orthogonal frequency-division multiplexing (OFDM), multiple-input multiple-output (MIMO), and link adaptation technologies. 4G wireless networks can support data rates of up to 1 Gb/s for low mobility, such as nomadic/local wireless access, and up to 100 Mb/s for high mobility, such as mobile access. LTE and its extension LTE-Advanced (LTE-A), as practical 4G systems, have recently been, or will soon be, deployed around the globe. However, the number of users subscribing to mobile broadband systems is still increasing dramatically every year.
Wideband, Unity-Gain Stable, FET-Input Operational Amplifier

FEATURES:
● 400 MHz unity-gain bandwidth
● Low input bias current: 5 pA
● High input impedance: 10^12 Ω || 1.0 pF
● Very low dG/dP (differential gain/phase): 0.006% / 0.009°
● Low distortion: 90 dB at 5 MHz
● Fast settling: 17 ns (0.01%)
● High output current: 60 mA
● Fast overdrive recovery

APPLICATIONS:
● Wideband photodiode amplifiers
● Peak detectors
● CCD output buffers
● ADC input buffers
● High-speed integrators
● Test and measurement front ends

Wideband photodiode transimpedance amplifier: The OPA655 combines a wideband, unity-gain stable, voltage-feedback operational amplifier with a FET input, providing a very wide dynamic range for ADC buffering and transimpedance applications. Excellent pulse settling and very low harmonic distortion support the most demanding ADC input buffering requirements. The wideband, unity-gain stable operation and the FET input allow exceptional performance in high-speed, low-noise integrators. The high input impedance and low bias current provided by the FET input are complemented by a very low input voltage noise, yielding very low integrated noise in wideband photodiode applications. The 240 MHz gain-bandwidth product of the OPA655 provides high-bandwidth transimpedance operation. As shown in the figure below, a transimpedance gain as high as 1 MΩ from a 47 pF source capacitance can provide a -3 dB bandwidth of 1 MHz.
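These figures can be cross-checked with the usual first-order estimate for the bandwidth of a transimpedance stage (an approximation that assumes the feedback capacitor is chosen for a maximally flat response), where GBP is the gain-bandwidth product, R_F the feedback (transimpedance) resistor, and C_D the total input capacitance:

\[ f_{-3\,\mathrm{dB}} \approx \sqrt{\frac{\mathrm{GBP}}{2\pi R_F C_D}} = \sqrt{\frac{240\ \mathrm{MHz}}{2\pi \cdot 1\ \mathrm{M\Omega} \cdot 47\ \mathrm{pF}}} \approx 0.9\ \mathrm{MHz} \]

which is consistent with the roughly 1 MHz bandwidth quoted above.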
DISCUSSION OF PERFORMANCE: An amplifier with a FET input offers several important advantages over comparable amplifiers with bipolar inputs. In standard operation, the low input bias current reduces the DC errors caused by a very high or unknown source impedance. In most OPA655 applications, the output DC error is dominated simply by the input offset voltage of less than 1 mV. Similarly, the input current noise contributes very little to the total output noise. The low current noise, together with an input voltage noise below 6 nV/√Hz, makes the OPA655 extremely attractive for wideband transimpedance applications. The high gain bandwidth and nearly linear output stage of the OPA655 keep harmonic distortion below -90 dBc for a 2 V peak-to-peak swing into a 100 Ω load at 5 MHz; this already low distortion decreases further at lower frequencies or with higher load impedances.

Figure 1: Internal schematic of the amplifier

OPERATING CONSIDERATIONS: Careful attention to the printed-circuit-board layout is required to achieve the exceptional performance shown in the typical performance curves.
Chinese-English Translation (the document contains the English original and the Chinese translation)

Translation I: Tunable Multiwavelength Fiber Laser Based on a Dual High-Birefringence Fiber Sagnac Loop

1. Introduction

Multiwavelength fiber lasers operating near the 1550 nm wavelength have attracted a great deal of interest, as they can be applied in dense wavelength-division multiplexing (DWDM) systems, fine spectroscopy, optical fiber sensing, and microwave (RF) photonics [1-4]. Multiwavelength fiber lasers can be realized by means of fiber Bragg grating arrays [5], mode-locking techniques [6-7], optical parametric oscillators [8], the four-wave mixing effect [9], and the stimulated Brillouin scattering effect [10-12]. Erbium-doped fiber (EDF) ring lasers can provide high output power, high slope efficiency, and a wide tunable wavelength range. For example, as a kind of tunable EDF laser, multiwavelength fiber lasers with a single high-birefringence fiber Sagnac loop have been proposed [13-15]. The output wavelengths can be tuned by adjusting the polarization controller (PC), and the wavelength spacing can be tuned by changing the length of the polarization-maintaining fiber (PMF).
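The length dependence behind this tuning rule follows from the comb response of a high-birefringence fiber loop mirror; as background (a standard result, not taken from the translated paper), the spacing between adjacent transmission peaks for a PMF section of length L and birefringence B is approximately

\[ \Delta\lambda \approx \frac{\lambda^{2}}{B\,L} \]

so lengthening the PMF densifies the comb, while the PC setting shifts the comb without changing its spacing.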
However, for a fiber laser with a single Sagnac loop, the wavelength spacing and the linewidth cannot be tuned independently [16]. DWDM systems demand more flexible tuning of the lasing wavelengths; otherwise, the applications of such lasers are limited. A multiwavelength fiber laser with dual Sagnac loops can provide better tunability and controllability. With this structure, the wavelength spacing can be tuned while keeping the linewidth unchanged, and the linewidth can be tuned while keeping the wavelength spacing unchanged. In this paper, a tunable multiwavelength erbium-doped fiber ring laser with dual Sagnac loops is proposed and demonstrated. Multiwavelength selection is achieved by two Sagnac loops, each of which consists of a 3 dB coupler, a PC, and a section of high-birefringence PMF. The characteristics of the comb filters formed by a single Sagnac loop and by two Sagnac loops are analyzed by simulation.
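A minimal numerical sketch of such a simulation is given below (Python). It assumes the idealized Sagnac-loop transmission T(λ) = sin²(πBL/λ + φ), with φ standing in for the PC setting; the birefringence and fiber lengths are illustrative values, not the parameters of the actual experiment.

# Minimal sketch of the comb-filter transmission of single and dual
# high-birefringence Sagnac loops. The model T = sin^2(pi*B*L/lam + phi)
# is the idealized loop-mirror response; B, L1, L2 and phi are
# illustrative values, not the parameters of the actual experiment.
import numpy as np

def sagnac_T(lam, B, L, phi=0.0):
    return np.sin(np.pi * B * L / lam + phi) ** 2

lam = np.linspace(1549e-9, 1551e-9, 4001)        # wavelengths around 1550 nm
B = 4.0e-4                                        # assumed PMF birefringence
L1, L2 = 5.0, 10.0                                # assumed PMF lengths (m)

T1 = sagnac_T(lam, B, L1)                         # single-loop comb
T2 = sagnac_T(lam, B, L1) * sagnac_T(lam, B, L2)  # dual-loop (cascaded) comb

# Comb spacing check against delta_lambda ~ lambda^2 / (B * L):
print("expected spacing, loop 1: %.3f nm" % (1550e-9**2 / (B * L1) * 1e9))

Plotting T1 and T2 over the wavelength grid shows how the second loop reshapes the single-loop comb; in the experiment, the PMF lengths and PC settings are chosen so that the spacing and the linewidth can be adjusted independently.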
In the experiment, the full width at half maximum (FWHM) of the output laser lines is 0.0187 nm, and the side-mode suppression ratio (SMSR) is 50 dB. A wide tuning range of the multiwavelength laser output can be achieved by adjusting the two PCs. Compared with the single-loop structure, the wavelength spacing and the laser linewidth can be tuned independently by changing the PMF lengths. The dual-Sagnac-loop fiber laser proposed in this paper is an extension of previous work on single-Sagnac-loop multiwavelength fiber lasers with multiple PMF sections, and it has potential applications in DWDM systems, sensing, and instrument testing.
Appendix

I. English Original: Detecting Anomaly Traffic using Flow Data in the real VoIP network

I. INTRODUCTION

Recently, many SIP [3]/RTP [4]-based VoIP applications and services have appeared, and their penetration ratio is gradually increasing due to the free or cheap call charge and the easy subscription method. Thus, some of the subscribers to the PSTN service tend to change their home telephone services to VoIP products. For example, companies in Korea such as LG Dacom, Samsung Networks, and KT have begun to deploy SIP/RTP-based VoIP services. It is reported that more than five million users have subscribed to the commercial VoIP services and 50% of all the users joined in 2009 in Korea [1]. According to IDC, it is expected that the number of VoIP users in the US will increase to 27 million in 2009 [2]. Hence, as the VoIP service becomes popular, it is not surprising that a lot of VoIP anomaly traffic is already known [5]. Therefore, most commercial services such as VoIP services should provide essential security functions regarding privacy, authentication, integrity and non-repudiation for preventing malicious traffic. Particularly, most current SIP/RTP-based VoIP services supply only the minimal security function related with authentication. Though secure transport-layer protocols such as Transport Layer Security (TLS) [6] or Secure RTP (SRTP) [7] have been standardized, they have not been fully implemented and deployed in current VoIP applications because of the overheads of implementation and performance. Thus, unencrypted VoIP packets could be easily sniffed and forged, especially in wireless LANs. In spite of authentication, the authentication keys such as MD5 in the SIP header could be maliciously exploited, because SIP is a text-based protocol and unencrypted SIP packets are easily decoded. Therefore, VoIP services are very vulnerable to attacks exploiting SIP and RTP. We aim at proposing a VoIP anomaly traffic detection method using the flow-based traffic measurement architecture. We consider three representative VoIP anomalies called CANCEL, BYE Denial of Service (DoS) and RTP flooding attacks in this paper, because we found that malicious users in wireless LANs could easily perform these attacks in the real VoIP network. For monitoring VoIP packets, we employ the IETF IP Flow Information eXport (IPFIX) [9] standard that is based on NetFlow v9. This traffic measurement method provides a flexible and extensible template structure for various protocols, which is useful for observing SIP/RTP flows [10]. In order to capture and export VoIP packets into IPFIX flows, we define two additional IPFIX templates for SIP and RTP flows. Furthermore, we add four IPFIX fields to observe 802.11 packets, which are necessary to detect VoIP source spoofing attacks in WLANs.

II. RELATED WORK

[8] proposed a flooding detection method based on the Hellinger Distance (HD) concept. In [8], they have presented INVITE, SYN and RTP flooding detection methods. The HD is the difference value between a training data set and a testing data set. The training data set collected traffic over n sampling periods of duration Δt. The testing data set collected traffic over the period immediately following the training data set. If the HD is close to '1', the testing data set is regarded as anomaly traffic. To use this method, they assumed that the initial training data set did not contain any anomaly traffic. Since this method was based on packet counts, it might not be easily extended to detect anomaly traffic other than flooding.
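For reference, [8] applies the Hellinger distance to normalized traffic counts; its standard (squared) definition for distributions P = (p1, ..., pn) and Q = (q1, ..., qn) is

\[ d_H^{2}(P,Q) = \frac{1}{2}\sum_{i=1}^{n}\left(\sqrt{p_i}-\sqrt{q_i}\right)^{2} \]

and its value ranges from 0 for identical distributions to 1 for distributions with disjoint support, which matches the rule above of flagging a testing period whose distance approaches 1.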
On the other hand, [11] has proposed a VoIP anomaly traffic detection method using an Extended Finite State Machine (EFSM). [11] has suggested INVITE flooding, BYE DoS anomaly traffic and media spamming detection methods. However, the state machine required more memory because it had to maintain each flow. [13] has presented NetFlow-based VoIP anomaly detection methods for INVITE, REGISTER, RTP flooding, and REGISTER/INVITE scans. However, the VoIP DoS attacks considered in this paper were not addressed there. In [14], an IDS approach to detect SIP anomalies was developed, but only simulation results are presented. For monitoring VoIP traffic, SIPFIX [10] has been proposed as an IPFIX extension. The key ideas of SIPFIX are application-layer inspection and SDP analysis for carrying media session information. Yet, that paper presents only the possibility of applying SIPFIX to DoS anomaly traffic detection and prevention. We described the preliminary idea of detecting VoIP anomaly traffic in [15]. This paper elaborates BYE DoS anomaly traffic and RTP flooding anomaly traffic detection methods based on IPFIX. Based on [15], we have considered SIP and RTP anomaly traffic generated in wireless LANs. In this case, it is possible to generate anomaly traffic that looks similar to normal VoIP traffic, because attackers can easily extract normal user information from unencrypted VoIP packets. In this paper, we have extended the idea with additional SIP detection methods using information from wireless LAN packets. Furthermore, we show real experiment results from a commercial VoIP network.

III. THE VOIP ANOMALY TRAFFIC DETECTION METHOD

A. CANCEL DoS Anomaly Traffic Detection

As the SIP INVITE message is not usually encrypted, attackers could extract the fields necessary to reproduce a forged SIP CANCEL message by sniffing SIP INVITE packets, especially in wireless LANs. Thus, we cannot tell the difference between the normal SIP CANCEL message and the replicated one, because the faked CANCEL packet includes the normal fields inferred from the SIP INVITE message. The attacker will perform the SIP CANCEL DoS attack in the same wireless LAN, because the purpose of the SIP CANCEL attack is to prevent the normal call establishment when a victim is waiting for calls. Therefore, as soon as the attacker catches a call invitation message for a victim, it will send a SIP CANCEL message, which makes the call establishment fail. We have generated a faked SIP CANCEL message using a sniffed SIP INVITE message. The fields in the SIP header of this CANCEL message are the same as those of a normal SIP CANCEL message, because the attacker can obtain the SIP header fields from unencrypted normal SIP messages in the wireless LAN environment. Since it is therefore impossible to detect the CANCEL DoS anomaly traffic using SIP headers alone, we use distinguishing values from the wireless LAN frame. That is, the sequence number in the 802.11 frame will tell the difference between a victim host and an attacker. We look into the source MAC address and sequence number of the 802.11 MAC frame that includes a SIP CANCEL message, as shown in Algorithm 1. We compare the source MAC address of SIP CANCEL packets with that of the previously saved SIP INVITE flow. If the source MAC address of a SIP CANCEL flow is changed, it is highly probable that the CANCEL packet was generated by an unknown user. However, the source MAC address could be spoofed. Regarding 802.11 source spoofing detection, we employ the method in [12] that uses sequence numbers of 802.11 frames.
We calculate the gap between the n-th and (n-1)-th 802.11 frames. As the sequence number field in an 802.11 MAC header uses 12 bits, it varies from 0 to 4095. When we find that the sequence number gap within a single SIP flow is greater than a threshold value N, which is set from the experiments, we determine that the SIP host address has been spoofed for the anomaly traffic.

B. BYE DoS Anomaly Traffic Detection

In commercial VoIP applications, SIP BYE messages use the same authentication field that is included in the SIP INVITE message for security and accounting purposes. However, attackers can reproduce BYE DoS packets by sniffing normal SIP INVITE packets in wireless LANs. The faked SIP BYE message is the same as the normal SIP BYE. Therefore, it is difficult to detect the BYE DoS anomaly traffic using only SIP header information. After sniffing a SIP INVITE message, the attacker, located in the same or a different subnet, could terminate the normal in-progress call, because it could succeed in sending a BYE message to the SIP proxy server. In the SIP BYE attack, it is difficult to distinguish the attack from the normal call termination procedure. Hence, we use the timestamp of RTP traffic for detecting the SIP BYE attack. Generally, after normal call termination, the bi-directional RTP flow is terminated within a brief space of time. However, if the call termination procedure is anomalous, we can observe that one directional RTP media flow is still ongoing, whereas the attacked directional RTP flow is broken. Therefore, in order to detect the SIP BYE attack, we watch the directional RTP flow for a time threshold of N seconds after the SIP BYE message. The threshold N is also set from the experiments. Algorithm 2 explains the procedure to detect BYE DoS anomaly traffic using the captured timestamp of the RTP packet. We maintain SIP session information between clients with INVITE and OK messages including the same Call-ID and 4-tuple (source/destination IP address and port number) as the BYE packet. We set a time threshold value by adding N seconds to the timestamp value of the BYE message. The reason why we use the captured timestamp is that only a few RTP packets are observed within 0.5 seconds. If RTP traffic is observed after the time threshold, this will be considered as a BYE DoS attack, because the VoIP session should already have been terminated by the normal BYE message.

C. RTP Anomaly Traffic Detection

Algorithm 3 describes an RTP flooding detection method that uses the SSRC and sequence numbers of the RTP header. During a single RTP session, typically, the same SSRC value is maintained. If the SSRC is changed, it is highly probable that an anomaly has occurred. In addition, if there is a big sequence number gap between RTP packets, we determine that anomaly RTP traffic has happened. As inspecting every sequence number for each packet is difficult, we calculate the sequence number gap using the first, last, maximum and minimum sequence numbers. In the RTP header, the sequence number field uses 16 bits, from 0 to 65535. When we observe a wide sequence number gap in our algorithm, we consider it as an RTP flooding attack.

IV. PERFORMANCE EVALUATION

A. Experiment Environment

In order to detect VoIP anomaly traffic, we established an experimental environment as shown in Figure 1. In this environment, we employed two VoIP phones with wireless LANs, one attacker, a wireless access router and an IPFIX flow collector.
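Before turning to the experimental setting, the per-flow checks behind Algorithms 1 and 3 above can be summarized in a short Python sketch; this is an illustrative reading of the rules, with placeholder field names and thresholds rather than the values used in the actual implementation.

# Illustrative per-flow checks for the detection rules described above.
# SEQ_GAP_80211 and RTP_SEQ_GAP are placeholder thresholds; the paper sets
# its thresholds experimentally.

SEQ_GAP_80211 = 100        # assumed 802.11 sequence-number gap threshold (Algorithm 1)
RTP_SEQ_GAP = 1000         # assumed RTP sequence-number gap threshold (Algorithm 3)

def spoofed_cancel(invite_mac, cancel_mac, prev_seq, cur_seq):
    """Flag a CANCEL whose sender differs from the INVITE sender or whose
    802.11 sequence number jumps more than the threshold (mod 4096)."""
    if cancel_mac != invite_mac:
        return True
    gap = (cur_seq - prev_seq) % 4096
    return gap > SEQ_GAP_80211

def rtp_flooding(ssrc_values, first_seq, last_seq, max_seq, min_seq):
    """Flag an RTP flow whose SSRC changes or whose sequence numbers span
    an implausibly wide range (16-bit field, 0..65535)."""
    if len(set(ssrc_values)) > 1:
        return True
    span = max(max_seq - min_seq, (last_seq - first_seq) % 65536)
    return span > RTP_SEQ_GAP

# Example: a flow with a constant SSRC but a huge sequence span is flagged.
print(rtp_flooding([0xBEEF], first_seq=10, last_seq=15000, max_seq=15000, min_seq=10))  # True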
For the realistic performance evaluation, we directly used one of the working VoIP networks deployed in Korea, where an 11-digit telephone number (070-XXXX-XXXX) has been assigned to a SIP phone. With wireless SIP phones supporting 802.11, we could make calls to/from the PSTN or cellular phones. In the wireless access router, we used two wireless LAN cards: one to support the AP service, and the other to monitor 802.11 packets. Moreover, in order to observe VoIP packets in the wireless access router, we modified nProbe [16], an open IPFIX flow generator, to create and export IPFIX flows related with SIP, RTP, and 802.11 information. As the IPFIX collector, we modified libipfix so that it could provide the IPFIX flow decoding function for SIP, RTP, and 802.11 templates. We used MySQL for the flow DB.

B. Experimental Results

In order to evaluate our proposed algorithms, we generated 1,946 VoIP calls with two commercial SIP phones and a VoIP anomaly traffic generator. Table I shows our experimental results with precision, recall, and F-score, which is the harmonic mean of precision and recall. In CANCEL DoS anomaly traffic detection, our algorithm produced a few false negative cases, which were related to the gap threshold of the sequence number in the 802.11 MAC header. The average F-score for detecting the SIP CANCEL anomaly is 97.69%.

For the BYE anomaly tests, we generated 755 BYE messages, including 118 BYE DoS anomalies, in the experiment. The proposed BYE DoS anomaly traffic detection algorithm found 112 anomalies, with an F-score of 96.13%. If an RTP flow is terminated before the threshold, we regard the anomaly flow as a normal one. In this algorithm, we extract RTP session information from INVITE and OK or session description messages using the same Call-ID as the BYE message. It is possible not to capture those packets, resulting in a few false negative cases. The RTP flooding anomaly traffic detection experiment for 810 RTP sessions resulted in an F-score of 98%. The false positive cases were related to the sequence number in the RTP header: if the sequence number range of anomaly traffic overlaps with the range of the normal traffic, our algorithm will consider it as normal traffic.

V. CONCLUSIONS

We have proposed a flow-based anomaly traffic detection method against SIP- and RTP-based anomaly traffic in this paper. We presented VoIP anomaly traffic detection methods with flow data on the wireless access router. We used the IETF IPFIX standard to monitor SIP/RTP flows passing through wireless access routers, because its template architecture is easily extensible to several protocols. For this purpose, we defined two new IPFIX templates for SIP and RTP traffic and four new IPFIX fields for 802.11 traffic. Using these IPFIX flow templates, we proposed CANCEL/BYE DoS and RTP flooding traffic detection algorithms. From experimental results on the working VoIP network in Korea, we showed that our method is able to detect three representative VoIP attacks on SIP phones. In the CANCEL/BYE DoS anomaly traffic detection method, we employed threshold values for the time and sequence number gaps to classify normal and abnormal VoIP packets. This paper has not presented test results about suitable threshold values. For future work, we will show experimental results on the evaluation of the threshold values for our detection method.

II. Translation: Detecting Anomaly Traffic Using Flow Data in the Real VoIP Network

I. Introduction

Recently, many SIP [3]/RTP [4]-based VoIP applications and services have appeared, and their penetration ratio is gradually increasing due to the free or cheap call charges and the easy subscription method.