EVALUATION OF EDCF MECHANISM FOR QoS IN IEEE802.11 WIRELESS NETWORKS
- 格式:pdf
- 大小:216.54 KB
- 文档页数:7
Performance Analysis of the IEEE802.11DistributedCoordination FunctionGiuseppe BianchiAbstract—Recently,the IEEE has standardized the802.11pro-tocol for Wireless Local Area Networks.The primary medium ac-cess control(MAC)technique of802.11is called distributed coor-dination function(DCF).DCF is a carrier sense multiple access with collision avoidance(CSMA/CA)scheme with binary slotted exponential backoff.This paper provides a simple,but nevertheless extremely accurate,analytical model to compute the802.11DCF throughput,in the assumption of finite number of terminals and ideal channel conditions.The proposed analysis applies to both the packet transmission schemes employed by DCF,namely,the basic access and the RTS/CTS access mechanisms.In addition,it also ap-plies to a combination of the two schemes,in which packets longer than a given threshold are transmitted according to the RTS/CTS mechanism.By means of the proposed model,in this paper we pro-vide an extensive throughput performance evaluation of both ac-cess mechanisms of the802.11protocol.Index Terms—802.11,collision avoidance,CSMA,performance evaluation.I.I NTRODUCTIONI N recent years,much interest has been involved in thedesign of wireless networks for local area communication [1],[2].Study group802.11was formed under IEEE Project 802to recommend an international standard for Wireless Local Area Networks(WLAN’s).The final version of the standard has recently appeared[3],and provides detailed medium access control(MAC)and physical layer(PHY)specification for WLAN’s.In the802.11protocol,the fundamental mechanism to access the medium is called distributed coordination function(DCF). This is a random access scheme,based on the carrier sense mul-tiple access with collision avoidance(CSMA/CA)protocol.Re-transmission of collided packets is managed according to bi-nary exponential backoff rules.The standard also defines an op-tional point coordination function(PCF),which is a centralized MAC protocol able to support collision free and time bounded services.In this paper we limit our investigation to the DCF scheme.DCF describes two techniques to employ for packet transmis-sion.The default scheme is a two-way handshaking technique called basic access mechanism.This mechanism is character-ized by the immediate transmission of a positive acknowledge-ment(ACK)by the destination station,upon successful recep-tion of a packet transmitted by the sender station.Explicit trans-Manuscript received November1998;revised July25,1999.this work was supported in part by CNR and MURST,Italy.The author is with the Universitádi Palermo,Dipartimento di Ingegneria Elet-trica,Viale delle Scienza,90128Palermo,Italy(e-mail:bianchi@elet.polimi.it). Publisher Item Identifier S0733-8716(00)01290-7.mission of an ACK is required since,in the wireless medium,a transmitter cannot determine if a packet is successfully received by listening to its own transmission.In addition to the basic access,an optional four way hand-shaking technique,known as request-to-send/clear-to-send (RTS/CTS)mechanism has been standardized.Before transmit-ting a packet,a station operating in RTS/CTS mode“reserves”the channel by sending a special Request-To-Send short frame. The destination station acknowledges the receipt of an RTS frame by sending back a Clear-To-Send frame,after which normal packet transmission and ACK response occurs.Since collision may occur only on the RTS frame,and it is detected by the lack of CTS response,the RTS/CTS mechanism allows to increase the system performance by reducing the duration of a collision when long messages are transmitted.As an important side effect,the RTS/CTS scheme designed in the 802.11protocol is suited to combat the so-called problem of Hidden Terminals[4],which occurs when pairs of mobile stations result to be unable to hear each other.This problem has been specifically considered in[5]and in[6],which,in addition,studies the phenomenon of packet capture.In this paper,we concentrate on the performance evaluation of the DCF scheme,in the assumption of ideal channel con-ditions and finite number of terminals.In the literature,perfor-mance evaluation of802.11has been carried out either by means of simulation[7],[8]or by means of analytical models with sim-plified backoff rule assumptions.In particular,constant or geo-metrically distributed backoff window has been used in[5],[9], [10]while[11]has considered an exponential backoff limited to two stages(maximum window size equal to twice the minimum size)by employing a two dimensional Markov chain analysis. In this paper,which revises and substantially extends[12],we succeed in providing an extremely simple model that accounts for all the exponential backoff protocol details,and allows to compute the saturation(asymptotic)throughput performance of DCF for both standardized access mechanisms(and also for any combination of the two methods).The key approximation that enables our model is the assumption of constant and indepen-dent collision probability of a packet transmitted by each station, regardless of the number of retransmissions already suffered.As proven by comparison with simulation,this assumption leads to extremely accurate(practically exact)results,especially when the number of stations in the wireless LAN is fairly large(say greater than ten).The paper is outlined as follows.In Section II we briefly re-view both basic access and RTS/CTS mechanisms of the DCF. In Section III we define the concept of Saturation Throughput, and in Section IV we provide an analytical technique to com-0733–8716/00$10.00©2000IEEEpute this performance figure.Section V validates the accuracy of the model by comparing the analytical results with that ob-tained by means of simulation.Additional considerations on the maximum throughput theoretically achievable are carried out in Section VI.Finally,the performance evaluation of both DCF ac-cess schemes is carried out in Section VII.Concluding remarks are given in Section VIII.II.802.11D ISTRIBUTED C OORDINATION F UNCTION This section briefly summarizes the DCF as standardized by the 802.11protocol.For a more complete and detailed presen-tation,refer to the 802.11standard [3].A station with a new packet to transmit monitors the channel activity.If the channel is idle for a period of time equal to a dis-tributed interframe space (DIFS),the station transmits.Other-wise,if the channel is sensed busy (either immediately or during the DIFS),the station persists to monitor the channel until it is measured idle for a DIFS.At this point,the station generates a random backoff interval before transmitting (this is the Colli-sion Avoidance feature of the protocol),to minimize the prob-ability of collision with packets being transmitted by other sta-tions.In addition,to avoid channel capture,a station must wait a random backoff time between two consecutive new packet transmissions,even if the medium is sensed idle in the DIFS time.1For efficiency reasons,DCF employs a discrete-time backoff scale.The time immediately following an idle DIFS is slotted,and a station is allowed to transmit only at the beginning of each slot time.The slot timesize,is called contention window,and de-pends on the number of transmissions failed for the packet.At the first transmissionattempt,is doubled,up to a maximumvaluereported in thefinal version of the standard [3]are PHY-specific and are sum-marized in Table I.The backoff time counter is decremented as long as the channel is sensed idle,“frozen”when a transmission is detected on the channel,and reactivated when the channel is sensed idle again for more than a DIFS.The station transmits when the backoff time reaches zero.Fig.1illustrates this operation.Two stations A and B share the same wireless channel.At the end of the packet transmis-1Asan exception to this rule,the protocol provides a fragmentation mecha-nism,which allows the MAC to split an MSDU (the packet delivered to the MAC by the higher layers)into more MPDUs (packets delivered by the MAC to the PHY layer),if the MSDU size exceeds the maximum MPDU payload size.The different fragments are then transmitted in sequence,with only a SIFS between them,so that only the first fragment must contend for the channel access.TABLE IS LOT T IME ,M INIMUM ,AND M AXIMUMC ONTENTION W INDOW V ALUES FOR THE THREE PHY S PECIFIED BY THE 802.11S TANDARD :F REQUENCY H OPPING S PREAD S PECTRUM (FHSS),D IRECTS EQUENCE S PREAD S PECTRUM (DSSS),AND I NFRARED(IR)sion,station B waits for a DIFS and then chooses a backoff time equal to 8,before transmitting the next packet.We assume that the first packet of station A arrives at the time indicated with an arrow in the figure.After a DIFS,the packet is transmitted.Note that the transmission of packet A occurs in the middle of the Slot Time corresponding to a backoff value,for station B,equal to 5.As a consequence of the channel sensed busy,the backoff time is frozen to its value 5,and the backoff counter decrements again only when the channel is sensed idle for a DIFS.Since the CSMA/CA does not rely on the capability of the sta-tions to detect a collision by hearing their own transmission,an ACK is transmitted by the destination station to signal the suc-cessful packet reception.The ACK is immediately transmitted at the end of the packet,after a period of time called short inter-frame space (SIFS).As the SIFS (plus the propagation delay)is shorter than a DIFS,no other station is able to detect the channel idle for a DIFS until the end of the ACK.If the transmitting sta-tion does not receive the ACK within a specified ACK_Timeout,or it detects the transmission of a different packet on the channel,it reschedules the packet transmission according to the given backoff rules.The above described two-way handshaking technique for the packet transmission is called basic access mechanism.DCF de-fines an additional four-way handshaking technique to be op-tionally used for a packet transmission.This mechanism,known with the name RTS/CTS,is shown in Fig.2.A station that wants to transmit a packet,waits until the channel is sensed idle for a DIFS,follows the backoff rules explained above,and then,in-stead of the packet,preliminarily transmits a special short frame called request to send (RTS).When the receiving station detects an RTS frame,it responds,after a SIFS,with a clear to send (CTS)frame.The transmitting station is allowed to transmit its packet only if the CTS frame is correctly received.The frames RTS and CTS carry the information of the length of the packet to be transmitted.This information can be read by any listening station,which is then able to update a network allocation vector (NA V)containing the information of the period of time in which the channel will remain busy.Therefore,when a station is hidden from either the transmitting or the receiving station,by detecting just one frame among the RTS and CTS frames,it can suitably delay further transmission,and thus avoid collision.The RTS/CTS mechanism is very effective in terms of system performance,especially when large packets are considered,as it reduces the length of the frames involved in the contention process.In fact,in the assumption of perfect channel sensing by every station,collision may occur only when two (or more)packets are transmitted within the same slot time.If both trans-BIANCHI:PERFORMANCE ANALYSIS OF THE IEEE 802.11DCF537Fig.1.Example of basic access mechanism.mitting stations employ the RTS/CTS mechanism,collision oc-curs only on the RTS frames,and it is early detected by the transmitting stations by the lack of CTS responses.A quanti-tative analysis will be carried out in Section VII.III.M AXIMUM AND S ATURATION T HROUGHPUT P ERFORMANCE In this paper we concentrate on the “Saturation Throughput”.This is a fundamental performance figure defined as the limit reached by the system throughput as the offered load increases,and represents the maximum load that the system can carry in stable conditions .It is well known that several random access schemes exhibit an unstable behavior.In particular,as the offered load increases,the throughput grows up to a maximum value,referred to as “maximum throughput.”However,further increases of the offered load lead to an eventually significant decrease in the system throughput.This results in the practical impossibility to operate the random access scheme at its maximum throughput for a “long”period of time,and thus in the practical mean-ingless of the maximum throughput as performance figure for the access scheme.The mathematical formulation and interpretation of this instability problem is the object of a wide and general discussion in [13].Indeed,the 802.11protocol is known to exhibits some form of instability (see,e.g.,[5],and [11]).To visualize the unstable behaviour of 802.11,in Fig.3we have run simulations in which the offered load linearly increases with the simulation time.The general simulation model and parameters employed are summa-rized in Section V .The results reported in the figure are obtained with 20stations.The straight line represents the ideal offered load,normalized with respect of the channel capacity.The sim-ulated offered load has been generated according to a Poisson arrival process of fixed size packets (payload equal to 8184bits),where the arrival rate has been varied throughout the simulation to match the ideal offered load.The figure reports both offered load and system throughput measured over 20s time intervals,and normalized with respect to the channel rate.From the figure,we see that the measured throughput follows closely the measured offered load for the first 260s of sim-ulation,while it asymptotically drops to the value 0.68in the second part of the simulation run.This asymptotic throughput value is referred to,in this paper,as saturation throughput,and represents the system throughput in overload conditions.Note than,during the simulation run,the instantaneous throughput temporarily increases over the saturation value (up to 0.74inFig.2.RTS/CTS AccessMechanism.Fig.3.Measured Throughput with slowly increasing offered load.the example considered),but ultimately it decreases and stabi-lizes to the saturation value.Queue build-up is observed in such a condition.IV .T HROUGHPUT A NALYSISThe core contribution of this paper is the analytical evalu-ation of the saturation throughput,in the assumption of ideal channel conditions (i.e.,no hidden terminals and capture [6]).In the analysis,we assume a fixed number of stations,each al-ways having a packet available for transmission.In other words,we operate in saturation conditions,i.e.,the transmission queue of each station is assumed to be always nonempty.The analysis is divided into two distinct parts.First,we study the behavior of a single station with a Markov model,and we obtain the stationaryprobability.A.Packet Transmission Probability Consider a fixednumber538IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS,VOL.18,NO.3,MARCH2000Fig.4.Markov Chain model for the backoff window size.for transmission,after the completion of each successful trans-mission.Moreover,being all packets “consecutive,”each packet needs to wait for a random backoff time before transmitting.Letand,as it may include a packettransmission.In what follows,unless ambiguity occurs,with the term slot time we will refer to either the (constant)valueis non-Markovian.However,define forconvenience .Let,and let us adopt thenotation ,wherebe the sto-chastic process representing the backoffstage.The key approximation in our model is that,at each transmis-sion attempt,and regardless of the number of retransmissions suffered,each packet collides with constant and independentprobabilityand will be referred to asconditional collision probability ,meaning that this is the prob-ability of a collision seen by a packet being transmitted on the channel.Once independence is assumed,and;k ;k ;b (t +1)=k ;b (t )=kBIANCHI:PERFORMANCE ANALYSIS OF THE IEEE802.11DCF539 stage0,and thus the backoff is initially uniformly chosen in therange,the backoff stage increases,and the newinitial backoff value is uniformly chosen in therange,it is not increased in subsequentpacket transmissions.Let,(3)rewritesas(4)Thus,by relations(2)and(4),all thevaluesand of the conditional collisionprobability is finally determined by imposing the nor-malization condition,that simplifies asfollows:that a station trans-mits in a randomly chosen slot time.As any transmission occurswhen the backoff time counter is equal to zero,regardless of thebackoff stage,itis,i.e.,no exponential backoff is considered,theprobability,and(7)becomes the much simplerone independently found in[9]for the constant backoff windowproblem(8)However,ingeneral,,which is still unknown.To find the valueofthat a transmittedpacket encounters a collision,is the probability that,in a timeslot,at least one ofthe remaining stations transmit.Thefundamental independence assumption given above implies thateach transmission“sees”the system in the same state,i.e.,insteady state.At steady state,each remaining station transmits apacket withprobability(9)Equations(7)and(9)represent a nonlinear system in the twounknowns,which can be solved using numerical tech-niques.It is easy to prove that this system has a unique solution.In fact,inverting(9),weobtaindefined by(7)is also continuous intherangecan bealternatively writtenasis trivially shown to be a monotone decreasing function thatstartsfrom and reduces uptoandbe the normalized system throughput,defined as thefraction of time the channel is used to successfully transmit pay-load bits.Tocomputebe the probability that thereis at least one transmission in the considered slot time.Since(10)Theprobability(11)We are now able toexpress,540IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS,VOL.18,NO.3,MARCH2000Fig.5.T for basic access and RTS/CTS mechanisms.the slot time is empty;withprobabilityit con-tains a collision.Hence,(12)becomesis the duration of an empty slot time.Ofcourse,thevalues must be expressed withthe same unit.Note that the throughput expression (13)has been obtained without the need to specify the access mechanism employed.To specifically compute the throughput for a given DCF ac-cess mechanism it is now necessary only to specify the corre-spondingvaluesbe the propagation delay.As shownin Fig.5,in the basic access case weobtainis the the average length of the longest packetpayload involved in a collision.In the case all packets have the same fixedsize,for the packet's payload size.Letof collidingpackets,writes asfollows:(15)When the probability of three or more packets simultaneously colliding is neglected,(15)simplifiesto(16)BIANCHI:PERFORMANCE ANALYSIS OF THE IEEE 802.11DCF 541only if they exceed a given predetermined threshold on the packet's payload size.More specifically,being,again,theprobability distribution function of the packetsize,(19)Tocompute;2)collision between two packets transmitted viabasic access,withprobabilitythe respective average collisiondurations,weobtain(20)The average collision durations adopted in (20)detail as fol-lows.Letbe the extra length of the packet header with respect of the RTS frame,andlethas been already computed in thecase(22)Finally,noting that in the case of collision between two basic ac-cess packets,the probability distribution function of the length of the longest packet payload involved in a collision is the square of the conditional probability distribution function of the packet sizedistribution(23)By substituting (21),(22),and (23)in (20),we finallyobtain(24)For simplicity,in the rest of this paper we restrict our numer-ical investigation to the case of fixed packet size,and therefore we will evaluate the performance of systems in which all sta-tions operate either according to the basic access mechanism or according to the RTS/CTS mechanism (i.e.,never operating in the hybrid mode.)3V .M ODEL V ALIDATIONTo validate the model,we have compared its results with that obtained with the 802.11DCF simulator used in [9].Ours is an event-driven custom simulation program,written in the C++programming language,that closely follows all the 802.11pro-tocol details for each independently transmitting station.In par-ticular,the simulation program attempts to emulate as closely as possible the real operation of each station,including propa-gation times,turnaround times,etc.The values of the parameters used to obtain numerical results,for both the analytical model and the simulation runs,are sum-marized in Table II.The system values are those specified for the frequency hopping spread spectrum (FHSS)PHY layer [3].The channel bit rate has been assumed equal to 1Mbit/s.The frame sizes are those defined by the 802.11MAC specifications,and the PHY header is that defined for the FHSS PHY .The values of the ACK_Timeout and CTS_Timeout reported in Table II,and used in the simulation runs only (our analysis neglects the effect of these timeouts)are not specified in the standard,and they have been set equal to 300µs.This numerical value has been chosen as it is sufficiently long to contain a SIFS,the ACK transmission and a round trip delay.Unless otherwise specified,we have used in the simulation runs a constant packet payload size of 8184bits,which is about one fourth of the maximum MPDU size specified for the FHSS PHY ,while it is the maximum MPDU size for the DSSS PHY .Fig.6shows that the analytical model is extremely accurate:analytical results (lines)practically coincide with the simulation3Adetailed performance analysis of the hybrid mode requires to assume one or more suitable probability distribution functions for the packet's payload size,and also to determine the sensitivity of the throughput on the assumed distri-butions.Such a straightforward,but lengthy,study is out of the scopes of the present work.542IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS,VOL.18,NO.3,MARCH2000 results(symbols),in both basic access and RTS/CTS cases.Allsimulation results in the plot are obtained with a95%confidenceinterval lower than0.002.Negligible differences,well below1%,are noted only for a small number of stations(results forthe extreme case of as low as2and3stations are tabulated inTable III).VI.M AXIMUM S ATURATION T HROUGHPUTThe analytical model given above is very convenient to de-termine the maximum achievable saturation throughput.Let usrearrange(13)toobtain(25)As,are constants,thethroughput.Taking the derivative of(26)with respecttoholds,and yields the following approximatesolution:that each sta-tion should adopt in order to achieve maximum throughput per-formance within a considered network scenario(i.e.,number ofstations)maximum performance can be,in principle,achieved for everynetwork scenario,through a suitable sizing of the transmissionprobabilitydepends only on the networksize and on the systemparameters.Asa n d.T h i s p r o b l e m h a s b e e n s p e c i f i c a l l y c o n s i d e r e d i n[9]f o r t h ec a s e o f f i x ed b a c k o f f w i n d o w s i z e(i.e.,).I n s u c h a c a s e,T A B L E I IF H S S SY S T E MP A R A M E T E R S A N DA D D I T I O N A LP A R A M E T E R SU S E D T OO B T A I N N U M E R I C A LR E S U L TSF i g.6.S a t u r a t i o n T h r o u g h p u t:a n a l y s i s v e r s u s s i m uT A B L E I I IA N A L Y S I SV E R S U S S I M U L A T I O N:C OMPARISON FOR A V ERY L OW N UMBER OFS TATIONS—W=32;m=32BIANCHI:PERFORMANCE ANALYSIS OF THE IEEE802.11DCF543Fig.7.Throughput versus the transmission probability for the basic access method.Table II.The figure reports also the different throughput values obtained in the case of exact and approximate solutionforleads to similar throughput values.The accuracy of the throughput obtained by the approxi-mate solution is better testified by the numerical values reported in Table IV.Note that the agreement is greater in the basic ac-cess case,assufficientlylarge(29).Using the numerical valuesof Table V,weobtain.An advantage of the RTS/CTS scheme is that the throughputis less sensitive on the transmissionprobabilityaxis scale)thata small variation in the optimal valueofANDTn n n nu n n ne nn544IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS,VOL.18,NO.3,MARCH2000Fig.9.Saturation Throughput versus initial contention window size for the basic access mechanism.network size,the lower is the throughput.The only partial ex-ception is thecaseincreases.In fact,the throughput is practically constantfor,and even increases with the number of mobile sta-tionswhenwe have reported in Figs.9and 10the saturation throughput versus thevalue.The figures report four different networksizes,i.e.,number ofstations,and the optimal valueof(e.g.,1024)gives excellent throughput performancein the case of50contending stations,while it drastically penal-izes the throughput in the case of small number(e.g.,5)of con-tending stations.This behavior is seen also in Fig.10,where theRTS/CTS mechanism is rge valuesofthat a transmission occurs onthe channel is equal to theprobabilityequal to0,,and particularly in correspondence of thevaluesgets closer to64.Moreover,the throughputsignificantly decreases as the number of stations increases.Onthe contrary,Fig.10shows that the throughput obtained withthe RTS/CTS mechanism is almost independent of thevalue,com-puted according to(14)and(17),in the assumption of systemand channel parameters of Table II.The differencebetweenis the average number of slot times spent onthe channel in order to have a successful transmission.Of thoseslot times,afraction is empty,and each empty slot timelasts,is plotted in Fig.11versus thenetwork size,for three different values of the initial contentionwindowBIANCHI:PERFORMANCE ANALYSIS OF THE IEEE 802.11DCF 545of idle slot times per packet transmission is very low,particu-larly when compared with thevaluesgets greater (thecaseis theaverage number of collided transmissions per each successful transmission,which is multipliedbyand the networksize.It shows that,for the basic access mechanism,the amount of channel time wasted in collisions is extremely large for a smallvalueandof backoff stages is marginal.The figurereports the cases of both Basic and RTS/CTS access schemes,with(similar behaviour is observed for other values of theparameter,i.e.,in correspondenceof the standardized engineering parameters of the DSSS PHY (Table I).We see that the choiceofis greater than four or five.The only case in which the throughput still grows,foris the probability,seen by thestation,that its transmitted packet collides.Owing to the model's key assumption of independence at each retransmission,the av-erage number of transmissions that each station must perform in order to successfully complete a packet transmission is givenby.This value is reported in Fig.14,obtained with the same system parameters of Figs.9and 10.Fig.14shows that the number of transmissions per packet significantly increases as the initial backoffwindow.In particular,the lowerisis related to the fact,com-mented above by means of (33)and Fig.11,that the number of idle slot times per packet transmission is extremely small.A changeofand。
QoS---QoS服务模型、影响⽹络质量的因素QoS服务模型传统的⽹络设备在处理报⽂转发时,会依据先到达的报⽂优先被转发的机制进⾏处理,所以这样就会导致当⽹络发⽣拥塞时,⼀些关键业务的通信质量就得不到保障(如语⾳延迟、视频卡顿、关键业务⽆法通信等),进⽽影响到客户体验QoS在带宽有限的情况,根据不同的流量,提供不同的优先服务⼀.影响通信质量的因素1.带宽⽹络的最⼤带宽是由传输路径的最⼩带宽决定的⽹络带宽不⼀致,出现拥塞点FIFO队列:先进先出2.⽹络时延:发送端到接收端的路径所有时延总和时延超过50MS,认为⽹络质量不好处理时延:⽹络设备内部处理等待时延传输时延:传输介质和传输距离决定队列时延:⽹络设备内部数据调度的等待时间串⾏化时延:链路上第⼀个bit⾄发完最后⼀bit所需时间3.抖动每个报⽂到达⽬的的时延不同所导致,每个报⽂到达的最⼤时间差⼀般抖动不超过30MS例如⼀个时延60ms,⼀个时延30s,那么抖动就是60ms-30ms=30ms4.丢包丢包由很多因素导致处理过程:CPU繁忙导致⽆法处理报⽂排队过程:在队列时,可能由于队列被装满⽽导致丢包传输过程:链路的种种原因丢包率⼀般不能⼤于2%⼆、服务模型1.尽⼒⽽为模型(默认)best-effort通过增⼤⽹络带宽,硬件性能提升⽹络通信质量优点:效果显著缺点:成本代价⼤,存在⼀定的中断业务风险(替换设备) 2.综合服务模型Integrated Services Model应⽤程序发送消息前需要先向⽹络设备申请带宽和服务,收到设备同意后,程序才会发出报⽂,通过RSVP协议保障业务带宽,延时实现复杂(运⾏RSVP协议),空闲时独占带宽,使⽤率低通过RSVP协议,申请预留带宽资源3.差分服务模型分类、标记不同流量,定义不同处理动作,进⼊队列中按照调度机制实现差分服务将⽹络的流量分成多个类,形成多个队列,每个类有不同的优先转发、丢包率,时延等差分时服务域针对报⽂进⾏区别服务的区域DS边界DS域的⽹络⼊⼝设备节点负责流量的分类,标记DS节点DS域中间设备,出⼝设备根据报⽂标记将外部优先级(报⽂)映射成本地优先级(设备内部)是否映射由设备决定,如果设备不信任该流量就不会映射转换其优先级根据本地优先级将报⽂放⼊不同的缓存队列,利⽤调度技术,使其优先转发每个DS节点独⽴,对报⽂处理⽅式可以不⼀致灵活性缺点需要在每台设备部署,对⼈员技术要求⾼实现差分服务的关键技术:报⽂分类和标记(报⽂优先级字段,DSCP)拥塞管理(队列技术)、拥塞避免(尾丢弃)流量整形和流量监管(令牌桶)优点缺点尽⼒⽽为服务模型实现机制简单对不同业务流不能进⾏区分对待综合服务模型可提供端到端QoS服务,并保证带宽、延迟需要跟踪和记录每个数据流的状态,实现较复杂,且扩展性较差,带宽利⽤率较低区分服务模型不需跟踪每个数据流状态,资源占⽤少,扩展性较强;且能实现对不同业务流提供不同的服务质量需要在端到端每个节点都进⾏⼿⼯部署,对⼈员能⼒要求较⾼。
Paper Title:QualProbes:Middleware QoS Profiling Servicesfor Configuring Adaptive ApplicationsAuthor Names:Baochun Li(Corresponding Author),Klara Nahrstedt Nature of Paper:Full PaperSurface Mail Address:Baochun Li or Klara Nahrstedt1304W.Springfield AvenueUniversity of Illinois at Urbana-ChampaignUrbana,IL61801Telephone:Baochun Li:(217)244-5841Klara Nahrstedt:(217)244-6624Fax:Both:(217)244-6869Email:Baochun Li:b-li@Klara Nahrstedt:klara@Corresponding Author:Baochun LiAbstractIt is widely accepted that in order to deliver the best Quality-of-Service(QoS),applications need to be adaptive to thefluctuating computing and communication environments.The middleware layer may assist such adaptation behavior in two ways.First,the middleware may adapt and reconfigure itself in order to transparently provide a stable and predictable environment to the application.Second, it may control the behavior of the applications so that they adapt and reconfigure themselves.The latter alternative enjoys the advantage of knowing exactly what are the application-specific adaptation priorities and requirements,but lacks an easy way to pinpoint the relationships between application-specific adaptation choices and the actual changes in resource demands,caused by reconfiguring an adaptive application.In this paper,we present QualProbes,a set of middleware QoS Probing and Profiling services to discover such relationships at run-time.Our approach focuses on meeting the requirements of the crit-ical performance criterion in the application.Frequently,such criterion may be affected by changes in more than one application-specific QoS parameters,and these parameters have diversely different re-source usage patterns.QualProbes services are able to precisely capture the effects made to the critical performance criterion when resource availability varies,and thus enable more effective control of the application to adapt to resource variations.Our case study with OmniTrack,an omni-directional visual tracking application,provides solid proof that QualProbes significantly enhance our capabilities to satis-fy the critical performance criterion,the tracking precision,while controlling the adaptation process of the application.QualProbes:Middleware QoS Profiling Servicesfor Configuring Adaptive ApplicationsBaochun Li,Klara NahrstedtDepartment of Computer ScienceUniversity of Illinois at Urbana-Champaignb-li,klara@AbstractIt is widely accepted that in order to deliver the best Quality-of-Service(QoS),applications need to be adaptive to thefluctuating computing and communication environments.The middleware layermay assist such adaptation behavior in two ways.First,the middleware may adapt and reconfigureitself in order to transparently provide a stable and predictable environment to the application.Second,it may control the behavior of the applications so that they adapt and reconfigure themselves.Thelatter alternative enjoys the advantage of knowing exactly what are the application-specific adaptationpriorities and requirements,but lacks an easy way to pinpoint the relationships between application-specific adaptation choices and the actual changes in resource demands,caused by reconfiguring anadaptive application.In this paper,we present QualProbes,a set of middleware QoS Probing and Profiling services to discover such relationships at run-time.Our approach focuses on meeting the requirements of the crit-ical performance criterion in the application.Frequently,such criterion may be affected by changes inmore than one application-specific QoS parameters,and these parameters have diversely different re-source usage patterns.QualProbes services are able to precisely capture the effects made to the criticalperformance criterion when resource availability varies,and thus enable more effective control of theapplication to adapt to resource variations.Our case study with OmniTrack,an omni-directional visualtracking application,provides solid proof that QualProbes significantly enhance our capabilities to satis-fy the critical performance criterion,the tracking precision,while controlling the adaptation process ofthe application.1IntroductionRecent research advances in Quality-of-Service(QoS)and resource management have brought forth numer-ous solutions to support QoS-aware applications,so that their demands for both end system and network resources are met.Two major categories of such solutions have evolved.First,reservation-based systems employ various resource reservation and admission control mechanisms to enforce the delivery of requested QoS to the applications.Such enforcement may be deterministic or statistical,depending on the policies involved for resource reservation.One drawback of this approach is that many reservation mechanisms de-mand major overhaul in the design of prevalent operating systems in use today,such as Windows NT,or networking protocols,such as TCP/IP.In contrast,adaptation-based systems operate based on best-effortenvironments,and attempt to adapt themselves or the applications for the purpose of providing the best possible QoS under available resource conditions,and of achieving the most graceful quality degradation in case of scarce resources.It is advantageous to implement such adaptation-based systems in the middleware level,since it does not require tight integration or modifications to the best-effort services in OS kernel and network protocol stack, which is the major advantage of adaptation-based systems over reservation-based systems.Indeed,notable examples of adaptation-based systems,such as the QuO[1]and Da CaPo++[2],implement adaptation-based services in the middleware.Naturally,since both middleware components and the actual QoS-aware applications may be reconfigured to adapt to the changing environment,two approaches exist with two distinctive focuses.One approach is to dynamically reconfigure the middleware itself so that it can transpar-ently provide a stable and predictable operating environment to the application.This approach is attractive since it does not require any modifications to the application,any legacy application can be deployed with little efforts and with a certain level of QoS assurance.However,since it can only provide a generic solution to all applications,a set of highly application-specific requirements cannot be addressed.Alternatively,the middleware may be active,and exert strict control of the adaptation behavior of QoS-aware applications, so that these applications adapt and reconfigure themselves under such control.This approach enjoys the advantage of knowing exactly what are the application-specific adaptation priorities and requirements,so that appropriate adaptation choices can be made to address these requirements.However,it lacks an easy way to manifest the relationship between application-specific adaptation choices and the actual changes in resource demands,caused by reconfiguring an adaptive application.We take the latter alternative in our approach.Since the primary objective common to all adaptation-based approaches is to provide the best possible QoS with the current resource availability in a swiftly changing environment,the problem comes to the proper choice of a certain criterion that can assist the judgment of”What is best?”.Most applications have more than one QoS parameters that are application-specific,and any changes in these parameters contribute to an increase or degradation of the delivered quality.In this paper,we focus on the critical performance criterion,which concentrates on the satisfaction of requirements related to the most critical application QoS parameter.The quality of other non-critical parameters can be traded off.For example,in our case study of OmniTrack,an omni-directional visual tracking application,the tracking precision1is the most critical QoS parameter in the tracking application.The critical performance criterion,therefore,is to keep the tracking precision accurate and stable.In this paper,we present QualProbes,a set of middleware QoS probing and profiling services,that are uniquely designed to address the following problems:(1)How do changes in non-critical application QoS parameters relate to the critical QoS parameter,and thus the critical performance criterion?(2)How do the changes in application QoS parameters relate to changes in resource demands or consumption?(3)How do the solutions to the previous problems translate to appropriate control actions activated by the middleware, so that the critical performance criterion,e.g.,a stable tracking precision,are satisfied and maintained?Once we have solved these problems,we are able to control the adaptation process within the application from the middleware,so that under any circumstances in a best-effort environment and withfluctuating resource availability,the application is able to maintain the best possible quality-of-service,in the sense that the critical performance criterion is always satisfied.The rest of the paper is organized as follows.Section2briefly introduces the design and architecture of Agilos(Agil e Q oS),a middleware control architecture that actively controls the application’s adaptation behavior.The QualProbes services are introduced and serve as critical core components in the Agilos architecture.Section3presents our theoretical and practical solutions to the above problems,forming thebasis of QualProbes .Section 4shows a detailed experimental analysis of the control effectiveness from the middleware,with and without the assistance of QualProbes .We use OmniTrack ,our omni-directional visual tracking application,as an example of complex applications.Section 5discusses related work and Section 6concludes the paper.2Agilos Middleware:A Background IntroductionThe ultimate objective of Agilos ,our middleware control architecture,is to control the adaptation process within the application so that it is steered towards the satisfaction of application-specific critical perfor-mance criterion .In order to accomplish the objective,the core middleware components of Agilos consist of application-neutral Adaptors and application-aware Configurators ,which reflect a two-level hierarchy of middleware control.In the application-neutral level,each Adaptor corresponds to a single type of resource,e.g.,CPU Adaptors or network bandwidth Adaptors.Though the Adaptors are specific to resources,they are not aware of the semantics of individual applications.In contrast,the Configurators in the application-specific level are fully aware of the application-specific semantics,and thus each Configurator only serves one application.This hierarchical design of the Agilos architecture is illustrated as in Figure 1. Engine Omni-Directional Visual TrackingAdaptor (Application-Neutral)Configurator (Application-specific)Negotiator end systemsQualProbes Other ObserverCPU Network Bandwidth Other ResourcesAgilos Middleware Control Architecture OS and Communication Protocol Stack Middleware LayerApplication Layer Application-specific LevelApplication-neutral LevelRuleBase Membership Functions Fuzzy Inference Figure 1:The Hierarchical Design of the Agilos ArchitectureThough the Adaptors and Configurators form the basis of the Agilos architecture,three additional com-ponents are necessary to complete the design and to achieve the desired functionality.First,the Negotiator is responsible for all communications among Agilos middleware components on different end systems.Sec-ond,the Observer is responsible for monitoring resource availability and inspecting any application-specific parameters.Third,QualProbes provide QoS probing and profiling services so that application-specific map-pings between the two adaptation levels can be derived.This paper focuses on the algorithm design of the QualProbes services.QualProbes are designed to assist controlling the applications so that control actions are generated with better awareness of application’s behavior and resource demands.To achieve this goal,the results of Qual-Probes are utilized in replacing the ”fuel”of the Configurator.As detailed in previous work [4],the Config-urator is designed as a rule-based fuzzy control system.As illustrated,the Configurator can be partitioned into three parts:the Fuzzy Inference Engine ,Membership Functions and Rule Base .While fuzzy inference engine is application-neutral,the ”fuel”,namely the rule base and membership functions of associated lin-guistic variables,are application-specific.Such model guarantees that discrete adaptation choices and a wide variety of resource/application QoS mappings can be addressed easily with a replacement of the rules and membership functions in the rule base.Rules in the rule base are written using linguistic variables and values.In OmniTrack,examples of variables are cpu demand,and examples of values are below low. These values are uniquely characterized by membership functions,so that the inference engine can have exact definitions of these values.The design of the rule base involves the generation of a set of conditional statements in the form of if-then rules,such as if cpu high and throughput average then configuration is compress.Apparently,the role of QualProbes is to capture the run-time relationships between application QoS and their resource demands,so that the above rules are activated with appropriate timing.3QualProbes:Investigating Application-Specific BehaviorSince the ultimate objective is to steer adaptations towards satisfaction of the critical performance criterion, the primary goal of QualProbes services is to devise mechanisms that best facilitate such optimal steering of adaptation decisions.To achieve this goal,QualProbes need to address the following issues.First, QualProbes need to accurately capture the relationships between the most critical application QoS parameter, such as the tracking precision,and other non-critical ones.This is crucial to perform tradeoffs of non-critical parameters.Second,QualProbes need to capture the resource demands of each non-critical QoS parameters. Both of the above are achieved via run-time probing and profiling mechanisms.Finally,such profiling results should be used to assist the generation of application-specific control rules,which are integrated in the Configurator.We address the above issues in the following sections.We illustrate our solutions with actual examples derived from OmniTrack.3.1Relations Among QoS Parameters and Resources:The Dependency Tree ModelAs previously noted,the application-specific QoS parameters can be classified as critical(usually one pa-rameter such as the tracking precision)and non-critical.In addition,the changes of each parameter in the non-critical collection may cause and be dependent on the changes of zero,one,or multiple types of resources.Assume that we study different resource types,and the current observation of consumed resources are,measured with their respective units.Typically in OmniTrack,,and is measured with the CPU load percentage,while is measured with bytes per second.In addition,assume that there are unique non-critical QoS parameters that may influence the critical parameter,,in the application.These parameters are,.For,there are of resource types related to,where.In the OmniTrack example,if is frame rate,its changes correspond to and.In contrast,if is the object velocity,it does not directly correspond to any resources, though,the tracking precision,depends on its variations.3.1.1The Application ModelIn all subsequent discussions about application QoS parameters and resource types,we assume a Task Flow Model for distributed applications.A complex distributed application can be modeled as several tasks,each task generates output for the subsequent task,which can be measured by one or more output QoS parameters. Such output forms the input of subsequent tasks.In order to process input and generate output,each task requires a specific amount of resources.An acyclic task graph,as shown in Figure2can be used to illustrate such a model.Application Task: A Closer Look ApplicationTask 1Application ApplicationApplication Task 2Task 3Task 4Task Output ResourcesInput A Generic Task Flow GraphFigure 2:Illustration of The Task Flow ModelWith such a conceptual model,we note that there may be various definitions of the concept application task ,distinguished among themselves by the granularity of functional partitions in the application.Since we attempt to optimize the adaptation behavior of the application to achieve a performance goal,we divide the applications with coarse granularity ,and demand that each task must present a one-to-one mapping to an individual executable component within the application.Static or dynamic linked library objects (such as codec or encryption modules)and individual working threads are not tasks themselves,though they may be partitioned as subtasks .As an example,the Task Flow Model of OmniTrack is shown in Figure 3.Subtask Frame GrabbingTracking ServerTransmissionNetwork Omni-directionalFacilitator Send Display Tracking Algorithms Receive Display Interactive selection of camera directionsand tracking serversTask Figure 3:The Task Flow Model of OmniTrack3.1.2A Dependency Tree for Application QoS ParametersAlthough each corresponds to resources ,,we observe that such dependencies are generally hard to capture directly.We take the parameter frame rate in OmniTrack as an example.Naturally,the frame rate of video streaming depends on network bandwidth availability.However,the nature of such dependence is non-deterministic:For the same available bandwidth,the frame rate varies diversely for compressed video versus uncompressed video;different CPU load may limit the capacity that trackers can consume the frames,thus limiting the frame rate.Similar situation applies to other parameters.Such observations illustrate that each ,in addition to being directly dependent on resource types,depends directly on a subset of ,,and via its dependence with this subset of parameters ,indirectly corresponds to resources.We define that if is dependent on ,then changes in can cause changes in .Ideally,a generic model for capturing the dependencies is by using an acyclic directed dependency graph,with the critical parameter as the source,and resources ,as the sink.For simplicity reasons,we only consider a special case that such a dependency graph is a directed binary tree ,with as the root of the tree,and resources as the leaves.Each depends on zero,one or two other parameters or resources.There are two key characteristics in such a dependency tree.First,the resource types ,are always leaf nodes of the tree.This is based on a simplified assumption that the changes of each resource typenever depend on any other resources,i.e.,that resource types are independent with each other.Second,we note that in addition to demanding resources of certain types,the changes of an application QoS parameter may change the resource availability of some other resource types,without demanding them.For example,while changing the compression ratio in OmniTrack demands CPU resources,its changes will have signif-icant effects on available network bandwidth also,since less data is necessary to be transmitted.This caseis presented by a directed arrow from the resource nodeto the QoS parameter node ,showing that the availability of relies on ,rather than the usual case that demands and relies on .An illustration of our directed dependency tree model and an real-world example with OmniTrack is given in Figure 4.Dependency Tree of OmniTrackc1R R 2nil p p p p 12345Tracking Precision Object Velocity Tracking Frequency Frame Rate Number of Trackers Property of One Tracker Size of Region Tracker Type Weighted Quantity of Trackers Image PropertiesCodec Type Compression Ratio Dependency Tree A Generic nil Resource Image SizeSize in pixels CodecParametersColordepth CPU Load Network Throughputp Figure 4:The Dependency Tree for Application QoS Parameters3.1.3Characterizing the Relationship Between Dependent NodesOnce we have established the dependency tree of QoS parameters for an application,the relationship be-tween dependent nodes needs to be characterized appropriately.We assume that for ,,any values beyond this range is either not possible or not meaningful.For example,the frame rate may vary in between fps.Assume the parent node depends on two descendant nodes and .The dependency can thus be characterized by a function,defined as:,with(1)Functiondefines the dependence relationship between the parent node and its descendant nodes and .If only depends on one node ,then is equivalent to ,where .If one or two of the descendant nodes are resource types and ,then we define so that.Note that for the special case that the availability of resource type depends onchanges in ,i.e.,there is a directed link from to ,we define such that .Figure 5visually shows the above characterization.∆CPU Load 00Weighted Quantity of TrackersFrame Ratef fOne parent - one descendant case:One parent - two descendants case:Two-Dimensional Characterization Three-Dimensional CharacterizationNumberof ActiveTrackers Frequency Tracking ∆∆∆∆Figure 5:Characterization of Dependencies among QoS ParametersIf we obtained allin the dependency tree via probing and profiling services,the relationship of any application QoS parameter and its related resources can be characterized by a series of substitutions.As an example,for the generic dependency tree in Figure 4,we have(2)and(3)which characterizes the relationship between and resources and .3.2QualProbes Services Kernel:The QoS Profiling AlgorithmQualProbes services are responsible for run-time capturing of the relationships and between dependent nodes in an application-specific dependency tree,and for properly storing the results in profiles.QualProbes services are middleware components,and implement a QoS Probing and Profiling algorithm as the kernel in each component.The QualProbes services kernel is designed to be application-neutral,thus we require that all related application QoS parameters should present the following properties:1.Observable .Their run-time values at any instant can be obtained in a timely manner.Implementation-wise,we utilize the CORBA Property Service.Applications report values of their QoS parameters as CORBA properties to the Property Service when initializing or when there are changes,while QualProbes services kernel retrieves these values from the Property Service when necessary.2.Tunable .They should be either directly or indirectly tunable from outside of the application.Sincethe application exports interfaces to the middleware Configurator for such tuning and reconfigura-tion,QualProbes services only need to reuse these interfaces to control the QoS parameters in the application.Having ready ”read/write”access to the application QoS parameters,QualProbes services execute a QoS Profiling algorithm in their kernel.The algorithm traverses the dependency tree from leaves up to the root,and attempts to discover the function andpreviously defined by tuning the values in descendant QoS parameters or resource types and measuring those of the parent QoS parameter.If is three-dimensional,a nested loop involving both descendant parameters is executed.Figure6demonstrates the QoS profiling algorithm in the pseudo-code form.In this algorithm,function tune executes recursively in order to tune an application QoS parameter indirectly.for each resource leaf node in the dependency tree:if link()or link()existsfor to steptune(,);log observedfor each non-leaf node in the dependency tree(nodes on descendant levelsfirst):if has one descendant parameter nodefor to steptune(,);log observedelse if has two descendant parameter node andfor to stepfor to steptune(,);tune(,);log observedtune(,value)if is directly tunable via exported interfacecall application exported interface to set=valueelseassume descendant nodes of are andfor to stepfor to steptune(,);tune(,);if((observed)==value)return;Figure6:QualProbes Services Kernel AlgorithmAs an concrete example,Figure7illustrates the results of tuning the QoS parameters object velocity and tracking frequency in order to measure the tracking precision.The output of the inner loop(by only tuning tracking frequency)is shown as bold dotted lines.3.3Towards Better Middleware ControlThe design of QualProbes services in previous sections addresses the problem of discovering relationships between the critical performance criterion and resource demands of an application.In order to complete the solutions provided by QualProbes,we need to address the issue of bridging the obtained profiles with actual membership functions and inference rules in the Configurator.3.3.1The Inference RulesBased on our extensive experiences with the real-world application OmniTrack,we believe that the inference rules inside the rule base cannot be generated automatically.Such rules need to be written by the application developer for a specific application.The reasons are two-fold:First,a rule base customized by the appli-cation developer is best in exploiting all available adaptation choices and best optimize the rich semanticsObject Velocity (pixels/sec)1max0Tracking Frequency (times/sec)1015Tracking2Precision (pixels, smaller values shows better precision)Figure 7:QualProbes Services:An Exampleof these choices,naturally integrating the relative priorities of different application QoS parameters.In oth-er words,the application developer should decide the set of QoS parameters to be traded off in the event of quality degradation.Second,the rules are not constant.It should be tuned towards the needs and user preferences in different occasions where the application is executed.3.3.2Thresholds :Towards Better Membership FunctionsEven though the rules can not be generated automatically,the profiles discovered by QualProbes services are of significant assistance in the process of determining the membership functions of linguistic values in the inference rules.In order to demonstrate such assistance,we take one inference rule in OmniTrack as an example:if cpuhigh and throughput low then configuration is compress This inference rule operates as follows.First,it takes the output of CPU adaptor and Network Bandwidth Adaptor in the application-neutral level as input.When the CPU is idle,the CPU adaptor will apply its application-neutral control algorithm and suggests that the application under its control to demand more CPU resources.This yields a high cpudemand .Second,the inference rule decidesthat if cpudemand is low,the application should reconfigure itself and add compression to its video streaming.Third,the actual definitions,made via the membership functions,of linguistic values very low decide the activation timing of such reconfiguration choice.The question is:How ”high”is veryhigh can be defined as higher than 60%,while veryfrequency is low and object trackerAs illustrated by Figure 7,QualProbes services have discovered an approximate threshold value for tracking frequency at respective object speed levels.If the tracking frequency drops below such threshold values,we could speculate that tracking precision may degrade.In order to keep the tracking precision,which is the critical performance criterion for OmniTrack,we define the membership function of linguistic value low to cover the values lower than the threshold value that we have discovered,e.g.,10iterations per second.When this definition is applied to the above inference rule,the configuration choice of re-move。
第13章QOS故障排除
本章介绍了分析QOS模块故障的基本思路,以及QOS故障的常用方法和使用命令等。
本章内容:
QOS故障排除基本思路
QOS常见故障处理
13.1QOS故障排除基本思路
1、分析数据报文或者测试仪设置
分析数据报文是否和期望控制的特定数据流匹配,如果是使用测试仪进行QOS测试,则检查测试仪设置,构造的报文类型和流量是否符合预期。
2、检查配置及参数设置
查看相关配置是否正确,应用的规则条目是否正常生效。
13.2QOS常见故障处理
故障一:QOS不生效
故障二:为报文映射DOT1P或者DSCP后不能入队进行队列调度
故障三:报文不能按照携带的DSCP值入队。
故障四:端口限速和流量整形配置值和实际值不一致。
故障五:基于流的ACL控制不生效。
故障六:EVC POLICY应用于端口不生效。
故障七:QOS限速流量不准确
故障八:端口应用EVC POLICY后不能通信
故障九:配置QOS后语音、IPTV等视频业务不能正常开展
故障十:通过对网管数据映射高优先级后使得升级版本变慢。
MLS QOS默认3550上关闭对于接口的信任状态只能是一种,要么是COS,要么是DSCP,要么是IP优先级,不可重复。
当携带有(cos dscp IP precedence)的数据进入交换机后,如果交换机信任这个数据包携带的标签(cos dscp ip precedence)的话,那么会映射到一个内部的DSCP表(也就是cos-内部dscp,ip precedence-内部dscp,数据包的dscp-内部dscp的映射),交换机的策略都是根据内部的dscp来制定的。
由于默认rewrite是开启的,所以交换机会把内部的DSCP映射到数据包本身的DSCP上发出。
如果是trust COS那么会根据入口数据帧的COS 通过COS-内部DSCP来执行策略。
如果数据帧本身不携带COS,那么使用定义的默认的COS。
如果接口是非信任状态,那么会洗掉所有的标记(二层和三层),并且默认的COS为0。
可以通过命令改变默认的COS值,数据帧进入交换机后依然会使用COS-内部DSCP的映射来处理。
如果在接口使用了mls qos cos override那么不会信任任何从这个接口进入的标记,并且所有的数据都会使用定义的default COS来执行策略(通过COS-内部DSCP的映射,然后通过内部DSCP做策略)。
如果已经在接口开启了trust,再使用这个cos override后会变成not trust状态。
此时不管进入的数据的任何标记,都会是使用默认的COS,COS-内部DSCP的映射做策略,在发出数据时会自动的使用内部DSCP-COS的映射,所以会给人一种修改了override就影响发出帧的COS的假象。
因为默认的COS-内部DSCP和内部DSCP-COS的映射出来的值是相等的。
对于trust的种类(COS TOS DSCP其中之一)都是会映射到一个内部的DSCP进行策略。
而内部的DSCP是否映射到出口的数据包的DSCP要看是否开启了rewrite。
QoS-Enabled Broadband Mobile Access to Wireline NetworksA BSTRACTThird-generation wireless systems, known as IMT-2000 within the ITU, offer opportunities to support a wide range of multimedia services.Packet data services will play a major role in these new multimedia services. A key compo-nent of packetized data services is to ensure end-to-end QoS requirements through efficient management of the network’s resources. In this article we present an overview of radio resource scheduling schemes including architecture, radio interface protocol, and interactions in a wide-band CDMA environment. We then present an example of QoS architecture followed by a dis-cussion on end-to-end provisioning and inter-working from wireless to fixed networks.I NTRODUCTIONDuring the last decade, there has been tremen-dous growth in two technological sectors: the Internet and wireless communications. The Inter-national Telecommunication Union (ITU) pro-jects that by the end of 2002 there will be approximately 600 million Internet users. Running almost concurrently with this growth in the Inter-net has been the equally extraordinary growth in the number of mobile cellular networks and sub-scribers. It is expected that the number of mobile phones will be more than a billion by 2003. The Internet and wireless communications have con-ventionally been regarded as separate technolo-gies. This is because originally the Internet was designed to carry mainly data traffic, while wire-less networks were designed to carry mainly voice traffic. This boundary has become increasingly blurry in recent years, especially with the introduc-tion of several proposals for International Mobile Telecommunications 2000 (IMT-2000) in the ITU [1, 2]. These third-generation (3G) mobile com-munications systems are poised to be multiservice platforms supporting voice, video, and data ser-vices with bit rates up to 2 Mb/s.An important part of this evolution is quality of service (QoS), which is simply a set of servicerequirements to be met by the network while transporting a traffic stream from source to des-tination. QoS attributes are usually specified in terms of bit error rate (BER), guaranteed bit rate (R G ), delay, and so on. Essential to the notion of QoS is resource management that resolves users’ competition to utilize network resources according to their QoS agreements.According to major 3G partnership projects,code-division multiple access (CDMA) is the dominant air interface in 3G systems. That is why we focus in this article on resource alloca-tion in a CDMA environment. We first present a brief overview of UMTS architecture. Next, we review CDMA radio resources scheduling fol-lowed by different QoS classes’ requirements and some thoughts on QoS mapping from radio access to fixed core networks. We then close with some concluding remarks.S YSTEM A RCHITECTUREA simplified architectural model of the Univer-sal Mobile Telecommunication System (UMTS)[1] is shown in Fig. 1. It mainly consists of three components: a wireless mobile station (MS),1a radio access network (RAN), and a core net-work (CN). RAN, which performs all radio-access-specific procedures, can be viewed as a network extension providing wireless access to the core network (CN).This wireless extension essentially consists of a set of interconnected radio network systems (RNSs). Each RNS is responsible for the resources of its set of cells and contains several base stations (BSs) connected through a radio network con-troller (RNC). Each BS serves a group of MSs currently residing in a cell and is responsible for intracell control, while RNC is in charge of the intercell operations like handover decisions. On the fixed network side, CN consists mainly of some edge nodes (ENs) and gateway nodes (GNs)that interconnect with external networks like Internet Protocol (IP), integrated services digital network (ISDN), public switched telephone net-work (PSTN), or old mobile networks. Thus, inMohamed N. Moustafa and Ibrahim Habib, The City University of New York Mahmoud Naghshineh, IBM Thomas J. Watson Research Center Mohsen Guizani, University of West FloridaW IDEBANDW IRELESS A CCESS T ECHNOLOGIESTO B ROADBAND I NTERNET。
QoS模型术语详解随着数据设备对QoS实现的越来越多,我们也应该更多地去关注QoS方面的知识。
但是在阅读QoS文献的时候,发现太多的QoS术语让我们对相关文档望而却步。
如果要对各种QoS模型做详尽的阐述,限于篇幅不太可能,而且也没有必要,因为关于QoS文献很多。
本文试图对QoS模型及其中的术语做深入浅出的解释,并给出出现该术语的RFC,以便大家做深入的了解,希望对大家的学习有所帮助。
QoSQoS,英文全称Quality of Service,即服务质量。
不同网络的服务质量指标不同,不同的组织对QoS也有不同的定义:电信网的QoS由ITU(国际电信联盟)定义;ATM网络的QoS由ATM论坛定义;IP网络的QoS由IETF定义。
IP QoSIP网络服务质量由RFC 2386定义,具体的服务质量指标可以量化为带宽、延迟、延迟抖动、丢失率和吞吐量等。
以下术语都是与IP QoS相关的术语。
QoS模型目前IETF定义了两种QoS模型:综合服务(IntServ)和区分服务(DiffServ),综合服务是一种基于资源预留的端到端服务质量模型;区分服务是基于每跳PHB的服务质量模型。
IntServ模型RFC1633定义的IntServ模型只是一个基本的体系架构,它规定了在网络上保证QoS 的一些基本要素。
IntServ的基本思想是网络中的每个网络单元,包括主机和路由器,在需要进行数据传输时首先在传输路径上进行资源预留,这种预留是基于流的,相比较DiffServ 来讲,属于精细粒度的预留。
IntServ模型可以用在视频流和音频流应用方面,它可以保证多媒体流在网络拥塞时不受干扰。
在IntServ中,Flow Specs作为资源预留的描述,RSVP 作为资源预留的信令。
Flow Specs中文翻译成流规范,流规范包括两个方面:1、流是什么样子的?在流描述(T-Specs,Traffic Specification)中定义。
专利名称:Quality of service enabled device andmethod of operation therefore for use withuniversal plug and play发明人:Dennis Bushmitch申请号:US11226736申请日:20050914公开号:US20070058612A1公开日:20070315专利内容由知识产权出版社提供专利附图:摘要:A method of operation for use with a QoS enabled device includes maintaining a record of a configuration of QoS objects present at a QoS enabled device and availablefor configuration. Further operation includes communicating the record to a user of the device, receiving a request from the user specifying one or more alterations to the configuration, and performing the alterations to the configuration as specified. In the case where information about the configuration is recorded in accordance with a publicly available catalog of standardized terminology developed to describe configuration elements, UPnP technology is supported. Accordingly, a UPnP enabled control point on a user device can automatically configure the QoS enabled device, greatly facilitating combination of QoS enabled and UPnP enabled user devices in a home network.申请人:Dennis Bushmitch地址:Somerset NJ US国籍:US更多信息请下载全文后查看。
QOS评审准则一.领导作用评审过程中应验证:1.工厂采用的相关过程及措施(如:“相关文件”中所提到的);2.这些过程的实施、文件化及跟踪追溯要求;3.提供评审问题表中问题的证据;高层管理者的活动:1.制定与组织宗旨相一致的发展前景、方针及战略目标;2.树立模范来领导整个组织,在群众中建立起相互信任;3.就质量及质量管理体系方面传达组织的方向及价值;4.参与改进项目,寻找新的模式、解决方法及产品;5.与实现质量管理体系的有效性及效率获得直接的反馈;6.识别产品实现过程,为组织提供增值;7.识别影响产品实现过程有效性及效率的支持性过程;8.创造环境,以鼓励员工的参与及发展;二.时间数据管理审核员将审阅几个不同层次及部门的常规时间及数据管理会议的安排,提供的证据有:●如QOS的时间数据管理程序中规定的时间数据管理会议安排;●进行年度质量操作系统的评审并存档;●满足会议参加者的权力和职责要求,以便实现所规定的战略意图;●组织的各层次及部门都应相应地制定合理的会议安排,包括频次、参加者、制度、战略意图,并存档。
三.APQP投产要求按APQP评分要素评审主要投产项目适当的节点根据所审投产项目的大小按APQP通用要求来审核:1、指定项目经理协察人;2、工作计划的准备包括大小、长度级数的重要性;3、成立核心小组总装及制造派代表参加;4、当前整车项目的APQP输出;5、使用正式汇报工具评估追溯所有APQP要素;6、小型项目也要出示使用APQP的证明。
传达制造风险积极参与精致工艺评审参加新车型设计评审工厂的设施及工艺要求参与并指导样车的生产参与的识别和选择与工厂的产品专家一起造成装配制造问题的设计并共同解决,在试生产样车上的问题协助解决与焊装工装及车身生产的VTTO评审供应商新工具机的潜在行为,指导自产模具工装的验证制定并利用试生产证明指导1PP生产以验证装配工具评审PFMEA为CC.SC指导FEUPVP验证零件及装配可行性制定正式生产的控制计划指导质量测试的确认生产过程提供质量数据进行签发,工厂用生产控制计划管理工艺不断适用PPCA于装配及制造过程中。
转载请保留版权信息:作者:红头发出处: 联合发布Pt.1 Congestion Management OverviewQueueing Overview常见的几种队列机制:1.先进先出(FIFO).2.加权公平队列(WFQ).3.基于分类的加权公平队列(CBWFQ).4.自定义队列(CQ).5.优先级队列(PQ).注意,一个接口只能使用一种队列机制.Pt.2 FIFO QueueingFIFO Queueing OverviewFIFO队列机制也叫先到先服务(FIFS)队列机制.这种队列机制不提供优先级和流量分类的特性.并且只有一个队列,所有的数据包被公平的对待.数据包按抵达接口的先后顺序被转发出去.当没有使用其他的队列机制时,除了传输速率大于2.048Mbps的串行接口以外的所有接口,默认都使用这种队列机制.Pt.3 Weighted Fair QueueingWFQ OverviewWFQ是一种对网络中所有流量提供公平的带宽分配的动态调度方式.WFQ根据权重,优先级来鉴别流量,把它加入到某个队列里,并决定队列之间的带宽分配.WFQ采用基于流是算法,该算法把交互式的流量同时调度到队列的最前端来减少响应时间,并公平的分配剩余的带宽.当拥塞信息产生的时候,高带宽的数据流的信息将被丢弃;而低带宽的数据流的信息仍然被加到队列中.WFQ是传输速率低于2.048Mbps的串行接口默认的队列机制.帧中继的FE位,FECN位,和BECN位将影响WFQ权重的分配.RestrictionsWFQ的一些限制:1.WFQ不支持隧道接口或采用了加密技术的接口,因为这些技术要修改数据包中WFQ用于分类的信息.2.WFQ提供的带宽控制的精确度不如CBWFQ和CQ等队列机制.Bandwidth Allocation对于IP优先级高的队列,WFQ会分配更多的带宽给它.WFQ还给每个数据流分配权重,来决定这些队列的转发顺序.权重低的优先被转发.Cisco IOS软件把IP优先级做为除数,得到的值就是权重.WFQ对每个队列的带宽分配的方式,由权重来决定,而权重又取决于IP优先级.公式为:队列的带宽分配百分比=(该队列IP优先级+1)/(所有队列的IP优先级+1)RSVP使用WFQ来分配缓冲区空间和对数据包进行调度,并保证了对数据流带宽的保留.RSVP允许对应用程序所使用的带宽的保留.RSVP是IP网络中唯一的提供端到端的信令标准的协议.Configuring WFQ接口下启用WFQ:Aiko(config-if)#fair-queueMonitoring Fair Queueing一些辅助性的命令:1.显示公平队列的配置状态:Aiko#show queueing fair2.显示接口的队列信息:Aiko#show queue [interface]Pt.4 Class-Based Weighted Fair QueueingCBWFQ OverviewCBWFQ是WFQ的扩展,根据用户自己定义的类别把数据包进行分类,再加到某个队列中去.当每个队列中数据包的数目达到上限的时候,队列将采用尾丢弃的方式将这些数据包丢弃;或者根据每个类的策略,进行数据包的丢弃.CBWFQ一般采用尾丢弃的机制,除非你定义了采用加权早期随机检测(WRED)来对超过队列上限的数据包进行丢弃.注意,如果你打算为一个或多个队列采用WRED而不是采用尾丢弃的方式来进行数据包的丢弃,必须保证应用了服务策略的接口没有配置WRED.如果在policy map中是通过bandwidth命令来定义默认的分类,没有划分类的流量将被划分到单独的采用FIFO机制的队列中;如果在policy map中是通过fair-queue命令来定义默认的分类,没有划分类的流量将以尽力而为的方式被处理;如果没有定义默认的分类,那么所有流量将被以尽力而为的方式被处理.并且数据流的方式是基于WFQ的.Bandwidth Allocation带宽分配的总和不能超过接口带宽的75%,剩余25%的带宽用于别的负载,比如路由协议的流量,尽力而为的流量.RSVP也可以和CBWFQ协同工作.当一个接口同时配置了CBWFQ和RSVP,它们之间的工作是独立的.并且当CBWFQ不存在的时候RSVP还是会继续工作. RestrictionsCBWFQ的一些限制:1.目前流量和整形不支持CBWFQ.2.CBWFQ不支持以太网子接口.Configuring CBWFQ配置CBWFQ的3个过程:1.定义分类的策略,即class map.2.关联策略,即定义policy map.3.把policy map应用在相关接口上.定义class map步骤如下:1.定义class map:Aiko(config)#class-map [match-all|match-any] {map-name}2.定义匹配语句:Aiko(config-cmap)#{condition}一些条件选项:命令含义match access-group {ACL}匹配IP ACLmatch protocol {protocol}匹配协议match input-interface {interface}匹配进站接口match qos-group {Group ID}匹配组IDmatch destination-address {mac MAC-address}匹配目标MAC地址match source-address {mac MAC-address}匹配源MAC地址match ip {dscp dscp}匹配IP DSCP值match ip {precedence precedence}匹配IP优先级match class-map {map-name}匹配class map定义分类的策略,即policy map的步骤如下:1.设置policy map:Aiko(config)#policy-map {policy-name}2.调用class map或默认的class-map(所有未分类的流量默认都属于该分类,否则未分类的流量将以尽力而为的方式被处理):Aiko(config-pmap)#class {class-map|class-default}3.设置策略:Aiko(config-pmap-c)#bandwidth {kbps|percent percent}4.定义尾丢弃机制允许的队列中数据包的上限,默认值为64:Aiko(config-pmap-c)#queue-limit {packets}在接口上应用policy map:Aiko(config-if)#service-policy output {policy-name}Example 1限制源自192.168.10.0的流量的带宽为1000kbps:class-map match-all aikomatch access-group 1policy-map asuqaclass aikobandwidth 1000queue-limit 30class class-defaultinterface Serial1ip address 172.16.10.1 255.255.255.252service-policy output asuqaaccess-list 1 permit 192.168.10.0Configuring the Bandwidth Limiting Factor更改用于RSVP和CBWFQ等队列机制保留的最大带宽值(默认为75%): Aiko(config-if)#max-reserved-bandwidth {percent}Verifying Configuration of Policy Maps and Their Classes一些辅助性的命令:1.查看policy map信息:Aiko#show policy-map [policy-name]2.查看接口的policy map信息:Aiko#show policy-map interface [interface]3.显示接口的队列信息:Aiko#show queue [interface]Pt.5 IP RTP PriorityIP RTP Priority OverviewIP实时传输协议(RTP)优先级对延迟比较敏感的数据流,比如语音数据,提供了优先级队列机制的特性.该特性表示如果某些数据包存在于优先级队列中,它们的出列顺序将优先于别的队列中的数据包.IP RTP优先级特性无需知道语音呼叫的端口号,它提供了鉴别放进优先级队列中的数据流的端口号范围的能力.并且你可以定义整个用于语音数据流的端口范围(UDP端口号16384到32767),来保证所有的与语音数据流都可以得到优先级服务.IP RTP优先级特性对带宽低于1.544Mbps的链路尤为有用.IP RTP优先级特性可以和使用了WFQ或CBWFQ的出站接口结合使用.匹配IP RTP 优先级某个端口范围的数据包将优先于别的CBWFQ分类.一般语音数据包的体积较小,如果有体积较大的数据包要从该接口被转发出去,该接口应配置链路分片和交叉(LFI)特性.体积较大的数据包被分片为体积较小的数据包.该特性防止语音数据包要等待到体积较大的数据包被转发完毕之后才能被转发,这样语音数据包可以和被分片的数据包交叉被转发出去,从而减少了语音数据包转发消耗的时间.Configuring IP RTP Priority配置IP RTP优先级:Aiko(config-if)#ip rtp priority {starting-rtp-port-number port-number-range} {bandwidth}Monitoring and Maintaining IP RTP Priority一些辅助性的命令:1.显示接口队列信息:Aiko#show queue [interface]2.调试优先级队列:Aiko#debug priorityPt.6 Low Latency QueueingLLQ Overview低延迟队列(LLQ)把优先级队列的特性加入到了CBWFQ中,这点和IP RTP优先级特性类似.如果没有LLQ,对于一些实时的数据流量,比如语音数据流量,CBWFQ对于每个定义好的分类的操作是基于WFQ的,采用了LLQ之后,该分类的操作将优先于别的分类.LLQ减少了语音会话的抖动.LLQ和IP RTP优先级特性的区别在于,它不受UDP端口号的限制.配置LLQ:Aiko(config-pmap-c)#priority {bandwidth}Monitoring and Maintaining LLQ一些辅助性的命令:1.显示接口队列信息:Aiko#show queue [interface]2.调试优先级队列:Aiko#debug priorityPt.7 Custom QueueingCQ Overview自定义队列(CQ)可以自定义16个队列,为每个队列定义一定字节数的数据包,这些被定义好的数据包采用轮循的方式被转发.当接口启用了CQ之后,队列0为系统队列,系统队列为优先级最高的队列,当系统队列中的数据包全部被转发完毕之后,接下来才会轮到自定义队列(队列1到16).自定义队列中的数据包也按照队列号顺序被转发,当队列为空或超出本次队列允许发送的数据包时,接下来会轮到下一个队列.Determining Byte Count Values for Queues为了能够为每个队列分配一定的带宽,必须为每个队列定义一定字节数的数据包.自定义队列中的数据包也按照队列号顺序被转发,当队列为空或超出本次队列允许发送的数据包时,接下来会轮到下一个队列.但是假如定义的字节数为100字节,而某个数据包的大小为1024字节,那么该队列每次将转发的数据包的大小即为1024字节,而不是100字节.假如有3个队列,每个队列中的数据包大小分别为500字节,300字节和200字节,如果想让这3个队列平均的占用带宽,为这3个队列定义的字节数分别为200字节,200字节和200字节,但是实际上生效的带宽占用比为5/3/2.因此如果把队列中数据包的字节数定义的过小的话,将导致带宽分配的不尽如人意.但是如果把队列中数据包的字节数定义的过大,那么将导致下一个队列中的数据包被转发的等待时间过长.RestrictionsCQ的一些限制:1.由于CQ是静态配置的,因此它不能适应网络结构的改变.2.由于数据包要经过处理器卡的分类,因此CQ对数据包转发的速度要比FIFO慢.配置CQ的步骤如下:1.定义CQ列表:Aiko(config-if)#custom-queue-list {list}2.定义队列中数据包的字节数或最大个数,默认为20个.可选:Aiko(config)#queue-list {list} queue {queue-number} {limit number|byte-count bytes}3.把数据包分配进特定的CQ中,可以基于协议或基于进站接口:Aiko(config)#queue-list {list} {protocol protocol|interface interface} {queue-number}4.定义默认的CQ队列,未分类的流量默认被分配进该队列:Aiko(config)#queue-list {list} default {queue-number}Monitoring CQ Lists一些辅助性的命令:1.显示接口队列信息:Aiko#show queue [interface]2.显示CQ列表信息:Aiko#show queueing customPt.8 Priority QueueingPQ Overview优先级队列(PQ)通过为队列指定优先级,根据优先级对队列中的数据包进行转发.当优先级最高的队列为空时,接下来对优先级次高的队列进行处理.4个优先级类别:1.high.2.medium.3.normal.4.low.未指定优先级的数据包默认属于normal队列.RestrictionsPQ的一些限制:1.由于PQ是静态配置的,因此它不能适应网络结构的改变.2.由于数据包要经过处理器卡的分类,因此PQ对数据包转发的速度要比FIFO慢.3.PQ不支持隧道接口.Configuring PQ配置PQ的步骤如下:1.定义优先级列表,可以基于协议或基于进站接口:Aiko(config)#priority-list {list} {protocol protocol|interface interface} {queue-number} {high|medium|normal|low}2.定义默认的优先级队列,未分类的流量默认被分配进该队列,优先级为normal: Aiko(config)#priority-list {list} default {high|medium|normal|low}3.定义每个队列中数据包的最大个数,由高到低,默认为20,40,60和80.可选:Aiko(config)# priority-list {list} queue-limit {high-limit medium-limit normal-limit low-limit}4.把优先级列表应用在接口上:Aiko(config)#priority-group {list}Monitoring PQ Lists一些辅助性的命令:1.显示接口队列信息:Aiko#show queue [interface]2.显示PQ列表信息:Aiko#show queueing priority。
MERL–A MITSUBISHI ELECTRIC RESEARCH LABORATORYEV ALUATION OF EDCF MECHANISMFOR QoS IN IEEE802.11WIRELESSNETWORKSDaqing Gu and Jinyun ZhangTR-2003-51May2003AbstractIn this paper,a medium access scheme called EDCF,which is adopted in an upcoming new stan-dard IEEE802.11e to allow prioritized medium access for applications with QoS requirements,is described and discussed.Its performance is also evaluated via simulations.This work may not be copied or reproduced in whole or in part for any commercial purpose.Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following:a notice that such copying is by permission of Mitsubishi Electric Information Technology Center America;an acknowledgment of the authors and individual contributions to the work;and all applicable portions of the copyright notice.Copying, reproduction,or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Information Technology Center America.All rights reserved.Copyright c Mitsubishi Electric Information Technology Center America,2003201Broadway,Cambridge,Massachusetts02139Publication History:1.First printing,TR-2003-51,May2003EVALUATION OF EDCF MECHANISM FOR QoS IN IEEE802.11 WIRELESSNETWORKSDaqing Gu, Jinyun ZhangMitsubishi Electric Research LaboratoriesCambridge, MA 02139E-mail: {dgu, jzhang}@ AbstractIn this paper, a medium access scheme called EDCF, which is adopted in an upcoming new standard IEEE802.11e to allow prioritized medium access for applications with QoS requirements, is described and discussed. Its performance is also evaluated via simulations.1. IntroductionIEEE802.11 wireless LAN (WLAN) is a shared-medium communication network that transmits information over wireless links for all IEEE802.11 stations within the transmission range to receive. It is one of the most deployed wireless networks in the world and is high likely to play a major role in multimedia home networking and next-generation wireless communications. The architecture of IEEE802.11 standard includes the definitions of Medium Access Control (MAC) sublayer and Physical Layer (PHY). Its MAC layer has two access mechanisms: DCF (Distributed Coordination Function) and PCF (Point Coordination Function). DCF uses CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) protocol, and it is best known for asynchronous data transmission (or best-effort service). PCF uses a central-controlled polling method to support synchronous data transmission. Unlike DCF, the implementation of PCF is optional as stated in the standard [1]. IEEE802.11 wireless networks can be configured into two different modes: ad hoc and infrastructure modes. In ad hoc mode, all wireless stations within the communication range can communicate directly with each other, whereas in infrastructure mode, an Access Point (AP) is needed to connect all stations to a Distribution System (DS), and each station can communicate with others through AP. DCF is the basic medium access mechanism for both ad hoc and infrastructure modes. In DCF mode, each station checks whether the medium is idle before attemptingto transmit. If the medium has been sensed idle for a DIFS (Distributed InterFrame Space) period, the transmission can begin immediately. If the mediumis determined to be busy, the station shall defer until the end of the current transmission. After deferral, the station will select a random backoff interval and shall decrement the backoff interval counter while the medium is idle. Once the backoff interval has expired, the station begins the transmission. More specifically, the station selects a random number called backoff time, in the range of 0 and CW (Contention Window). The backoff timer decrements the backoff time each time the mediumis detected to be idle for an interval of one slot time.As soon as the backoff timer expires, the station can begin to transmit. If the transmission is not successful, a collision is considered to have occurred. In this case, the contention window is doubled, and a new backoff procedure starts again. The process will continue until the transmission is successful or discarded.Video, audio, real-time voice over IP and other multimedia applications over WLAN with Qualityof Service (QoS) support is very important for IEEE802.11 WLAN to be successful in wireless home networking and future wireless communications. Some high layer applications suchas video, audio, email, and data transfer have different requirements in bandwidth, delay, jitter, and packet loss. However, in DCF mechanism of IEEE802.11, all the stations and data flows have the same priority to access medium. There is no differentiation mechanism to support thetransmission of data streams with different delay requirements. So, the support of QoS for IEEE802.11 becomes critical for its success in multimedia applications. With the motivation in mind, IEEE 802.11 working group is currently developing a standard called IEEE802.11e, which enhances the current 802.11 MAC to support applications with QoS requirements. The upcoming IEEE802.11e standard adds a new function called HCF (Hybrid Coordination Function), which concurrently exists with basic DCF/PCF for backward compatibility and has both controlled contention-free and contention-based channel access methods in a single channel access protocol. The HCF combines functions from the DCF and PCF with some enhanced, QoS-specific mechanisms and frame subtypes to allow a uniform set of frame exchange sequences to be used for QoS transfers during both the CP (Contention Period) and CFP (Contention Free Period). The HCF uses a contention-based channel access method, called the enhanced DCF (EDCF) that operates concurrently with a controlled channel access mechanism based on a polling mechanism. In this paper, we only describe EDCF mechanism.2. EDCF MechanismAs stated in the previous section, DCF works as a listen-before-transmission scheme. In this mode, if the medium is determined to be idle for DIFS interval, the station transmits a packet immediately. Otherwise, a backoff procedure is started. The backoff time is a random number that lies between 0 and CW. The backoff time is computed as follows [1]:Backoff Time = Random() * SlotTime (1)Where Random() is a pseudorandom integer drawn from a uniform distribution over the interval [0,CW]. CW is an integer within the range of values of the PHY characteristics CWmin and CWmax, that is CWmin ≤ CW ≤ CWmax. SlotTime equals the value of the corresponding PHY characteristics. CW parameter shall take an initial value of CWmin. The CW shall take the next value in the series after each unsuccessful transmission until the CW reaches the value of CWmax. Once it reaches CWmax, the CW shall remain at the value of CWmax until it is reset. This improves the stability of the access protocol under high-load conditions. The CW shall be reset to CWmin after each successful attempt to transmit a packet. The set of CW values shall be sequentially ascending integer powers of 2, minus 1, beginning with a PHY specific CWmin value, and continuingup to CWmax value. The purpose of use of backoff procedure is to reduce the chances of collision by letting stations to select a random number as backoff time. It is seen that CWmin and CWmax are fixedfor a given PHY. So, DCF doesn’t differentiate the data traffic and stations. All stations and traffic classes have the same priority to access the wireless medium (WM). The QoS is not supported with the use of DCF.In order to support applications with QoS requirements, some priority schemes have been proposed [2, 3]. The upcoming 802.11e standard is being defined to support QoS [4, 5]. Among the new features of 802.11e, the EDCF provides differentiated, distributed access to the wireless medium for 8 priorities for stations. EDCF channel access defines the access category (AC) mechanism that provides support for the priorities at the stations. Each station may have up to 4 ACs to support 8 user priorities (UPs). One or more UPs are assigned to one AC.A station accesses the medium based on the access category of the frame that is to be transmitted. The mapping from priorities to access categories is defined in Table 1 [5].Priority (Sameas 802.1DPriority)AccessCategory (AC)Designation1 0 BestEffort 2 0 BestEffort 0 0 BestEffort 3 1 VideoProbe4 2 Video5 2 Video6 3 Voice7 3 VoiceTable 1 – Priority to Access Category MappingsEach AC is an enhanced variant of the DCF. It contends for TXOPs (Transmission Opportunities) using a set of EDCF channel access parameters. An AC with higher priority is assigned a shorter CW in order to ensure that in most cases, higher-priority AC will be able to transmit before the lower-priority ones. This is done by setting the contention window limits CWmin[AC] and CWmax[AC], from which the CW[AC] is computed, to different values for different ACs. For further differentiation, different IFS (Inter Frame Space) is introduced according to ACs. Instead of DIFS, an arbitration IFS [AIFS] is used. The AIFS is at least DIFS, and can be enlarged individually for each AC. Similar to DCF, if the medium is sensed to be idle in EDCF mechanism, a transmission can begin immediately. Otherwise, the station defer until the end of current transmission on the WM. After deferral, the station waits for a period of AIFS(AC) to start a backoff procedure. The backoff interval is now a random number drawn from the interval [1, CW(AC)+1]. Each AC within a single station behaves like a virtual station. It contends for access to the wireless medium and independently starts its backoff time after sensing the medium is idle for at least AIFS. Collision between ACs within a single station are resolved within the station such that the data frames from higher-valued AC receive the TXOP and the data frames from lower-valued colliding ACs behave as if there were an external collision on the WM. The timing relationship for EDCF is shown in Figure 1 [4].Figure 1: Timing Relationship for EDCFThe QoS support in 802.11e is realized with theintroduction of Traffic Categories (TCs). MSDUs (MAC Service Data Units) are now delivered through multiple backoff instances within one station. Each backoff instance is parameterized with TC-specific parameters. The typical values for the parameters in QoS parameters set are defined in Table 2. A model of the reference implementation is shown in Figure 2 [5]. It illustrates a mapping from frame type or priority to access categories, the four queues and four independent channel access functions, one for each queue.AC CWminCWmaxAIFS0 CWmin CWmax 2 1 CWmin CWmax 1 2 (CWmin+1)/2 - 1 CWmin 1 3 (CWmin+1)/4 - 1 (CWmin+1)/2 - 11Table 2: Typical QoS ParametersMapping toAccess CategoryTransmit QueuesPer-queuechannel access functions with internal collision resolutionFigure 2: Reference Implementation Model3. Simulation ResultsA Simulation model was constructed using OPNET. In the simulation, four IEEE802.11 wireless stations with EDCF mechanism were configured into ad-hoc mode shown in Figure 3. Four stations remain stationary during the simulations. The simulation uses standard OPNET 802.11b PHY module with maximum data rate up to 11 Mbps to simulate the wireless medium [6]. While, the original 802.11MAC was modified to support EDCF mechanism. In the simulation, we just simulated the EDCF access function and didn’t consider other traffic parameters such as TXOPs. Any AC get an access to the medium, it transmits one packet and then release the channel for the next access contention. All PHY characteristics was according to 802.11b DSSS PHY parameters, in which CWmin= 31, CWmax=1023 and SlotTime=20 µs.Figure 3. Simulation ScenarioAll four traffic classes were fed into the MAC layer from higher layer, which are corresponding to AC(0), AC(1), AC(2) and AC(3) respectively. In the simulation, we assumed that each traffic class has the equal portion of the total data traffic in terms of the average number packets generated per unit time. The packets of AC(0), AC(1) and AC(2) were generated according to Poisson Process with a mean interarrival time equal to 0.0001 second, while AC(3) packets were generated at a constant rate to simulate a voice source.Figure 4 shows the average medium access delays for different access categories in EDCF mechanism. As shown, access category (3) has the smallest average medium access delay, and access category (0) has the largest medium access delay. The horizontal coordinate represents the simulation time (minutes). In Figure 5, the throughputs for different ACs over the WLAN are shown. We can see that AC(3) has the highest value of throughput, while the throughput of AC(0) is lowest. These results are as expected since EDCF differentiates the traffic classes and supports priority access.Figure 4. Medium Access Delay for Different ACsFigure 5. Throughputs for Different ACsIn order to compare with DCF function, an ad hoc IEEE802.11 wireless network with DCF mechanism was configured and simulated too. All the simulation parameters for the DCF scenario are exactly the same as the ones for the EDCF scenario except DCF is used instead of EDCF. In DCF scenario, the total data traffic is equal to sum of the 4 traffic categories of EDCF scenario. Figure 6 andFigure 7 show the network average medium access delays and throughputs on the air for these two scenarios, respectively. The simulation results conclude that at this particular simulation condition, EDCF scenario has a little bit larger average medium access delay and a little bit smaller throughputs for all kinds of traffic. This can be explained. Since in EDCF mechanism, each AC functions like a virtual station for medium access, more collision will be expected for EDCF scenario.Figure 6. Network Medium Access DelaysFigure 7. Network Throughputs for Different Access Schemes 4. ConclusionsIn the paper, the EDCF mechanism for the upcoming standard IEEE802.11e is presented. A simulation was conducted. The simulation results show that EDCF works well for differentiated data services.References[1] IEEE std 8802.11-1999, Wireless LAN MediumAccess Control (MAC) and Physical Layer (PHY) specifications. 1999.[2] Aad and C. Castelluccia, Differentiationmechanisms for IEEE 802.11, In Proc. of IEEE Infocom 2001, April 2001.[3] J. Deng and R. S. Chang, A priority Scheme forIEEE 802.11 DCF Access Method, IEICE Trans.in Comm., vol. 82-B, no. 1, January 1999.[4] IEEE 802.11e draft/D3.1, Part 11:WirelessMedium Access Control (MAC) and physical layer (PHY) specifications: Medium Access Control (MAC) Enhancements for Quality of Service (QoS), July 2002.[5] IEEE 802.11e draft/D4.1, Part 11:WirelessMedium Access Control (MAC) and physical layer (PHY) specifications: Medium Access Control (MAC) Enhancements for Quality of Service (QoS), Feb. 2003.[6] IEEE std 802.11b-1999, Higher-Speed PhysicalLayer Extension in the 2.4 GHz Band. 1999.。