基于移动自组网加强集群高速缓存管理性能(IJCNIS-V5-N10-4)
- 格式:pdf
- 大小:433.54 KB
- 文档页数:6
华为认证HCIA-5G题库(考试代码:H35-660)1. 以下哪个电网业务对终端数量要求是最高的?(单选题,2分)A. 分布式配电自动化B. 精准负荷控制C. 低压用电信息采集D. 无人机电网巡检答案:C试题解析:低压用电信息采集为mMTC业务,对终端数量要求高。
2. 以下哪个措施可以很好的满足未来智能电网的业务多样化需求?(单选题,2分)A. 业务按需部署B. 资源隔离C. 端到端SLA保障D. 运维自动化答案:A试题解析:业务按需部署可以很好的满足未来智能电网的业务多样化需求。
3. 关于智能电网特点,以下描述错误的是?(单选题,2分)A. 清洁友好发电B. 安全高效输电C. 集中式配电D. 多样互动用电答案:C试题解析:配电自动化而不是集中式配电。
4. 以下电网业务中,属于eMBB类型的业务是哪个?(单选题,2分)A. 分布式配电自动化B. 精准负荷控制C. 电网视频监控D. 智能抄表答案:C试题解析:电网视频监控属于eMBB业务;分布式配电自动化、精准符合控制为uRLLC业务;智能抄表为mMTC业务。
5. 基于蜂窝网络的车联网指的是什么?(单选题,2分)A. C-V2XB. DSRCC. ADASD. D2D答案:A试题解析:C-V2X蜂窝车联,是基于3GPP全球统一标准的通信技术,包含LTE-V2X和5G-V2X。
LTE-V2X支持向5G-V2X平滑演进。
6. 车辆远程控制技术需要达到体验优良的水平,时延必须低于多少?(单选题,2分)A. 1msB. 4msC. 10msD. 20ms答案:B试题解析:车辆远程控制技术需要达到体验优良的水平,时延必须低于4ms。
7. 智能电网中,以下哪个业务属于uRLLC业务?(单选题,2分)A. 远程抄表B. 自动配电C. 线路视频监控D. 全网用电信息采集答案:B试题解析:自动配电属于uRLLC业务;线路视频监控属于eMBB业务;远程抄表、全网用电信息采集属于mMTC业务8. C-V2X的网络技术包括什么?(单选题,2分)A. 4G或者5GB. 只有5GC. 只有4GD. 3G答案:A试题解析:C-V2X蜂窝车联,是基于3GPP全球统一标准的通信技术,包含LTE-V2X和5G-V2X。
4G/5G无线网络协同组网应用发布时间:2021-07-20T03:46:43.708Z 来源:《中国科技人才》2021年第10期作者:安晓晴[导读] 目前4G的无线网络发展趋于成熟,边远地区的建设仍需加强。
5G要想实现超高速发展,必须加大资金投入和技术推广攻关,着在方方面面都对4G/5G无线网络协同组网应用提出了较高要求。
其中涉及到频谱和技术上的要求很多,因此要妥善解决4G/5G无线网络协同组网应用中所遇到的问题,使两种网络模式形成合力,为通信事业更快更好的发展起到推动作用。
中国移动通信集团河北有限公司邢台分公司河北邢台 054000摘要:5G技术目前属于初步发展阶段,在发展过程中,5G网络为用户提供了非常便捷快速的上网体验,让人们对通信的日常需求得到满足,通过5G技术的测试不断取得突破性成果,我国的5G技术和产业也在大力加快发展。
4G的网络建设也在继续进行中,在其覆盖范围和容量上都有提升。
5G尚处于探索阶段,在5G技术未成熟之前,4G/5G无线网络的协同组网应用是加快发展的必经之路。
基于此,本文将主要论述4G/5G无线网络协同组网应用。
关键词:4G/5G无线网络,协同组网,应用引言:目前4G的无线网络发展趋于成熟,边远地区的建设仍需加强。
5G要想实现超高速发展,必须加大资金投入和技术推广攻关,着在方方面面都对4G/5G无线网络协同组网应用提出了较高要求。
其中涉及到频谱和技术上的要求很多,因此要妥善解决4G/5G无线网络协同组网应用中所遇到的问题,使两种网络模式形成合力,为通信事业更快更好的发展起到推动作用。
一、4G/5G无线网络协同组网应用的重要性分析4G/5G无线网络协同组网应用中虽然问题颇多,但是在协同组网应用中,开展覆盖协同和干扰协同、性能协同、容量协同等方面,进行专题优化,能够保证网络的质量,做到用户体验最佳的目的,有助于5G站点的优化和发展。
协同组网应用能够共享丰富的站址资源,还能够在频率、设备、业务等方面实现共享。
一种改进的移动无线自组织网络路由算法
滕艳平;周浩令;王海珍;李丽丽
【期刊名称】《计算机仿真》
【年(卷),期】2022(39)9
【摘要】在节点高速移动或节点密度较大的移动无线自组织网络中,传统AODV算法在路由请求使用洪泛广播RREQ,选择路由跳数最少的链路,并没有考虑到网络拓扑的频繁变化导致的链路中断,在节点数量较多时其洪泛所导致的广播风暴将对网络性能产生影响。
针对上述情形,提出了一种基于GPS信息和Q学习相结合的AODV改进算法,GQ-AODV算法同时考虑了节点位置和节点速度,通过节点位置计算偏差角度和前程值,节点与下一跳节点的相对速度来确定链路稳定度,采取下一跳节点与其邻居节点的平均相对速度、Q学习训练的下一跳节点与其邻居节点的历史平均相对速度,来避免下一跳选取陷入局部最优。
NS3仿真表明,GQ-AODV算法能够选择较好的下一跳,降低了路由开销、时延和抖动,提高了分组投递率和吞吐量,在节点数目较多的场景下更具优势。
【总页数】6页(P221-225)
【作者】滕艳平;周浩令;王海珍;李丽丽
【作者单位】齐齐哈尔大学计算机与控制工程学院
【正文语种】中文
【中图分类】TN92-4;G434
【相关文献】
1.一种区分路由频次的移动无线自组织网络混合路由协议
2.DART:一种利用有向天线并适用于高速移动的自组织无线网络路由算法
3.一种无线移动自组织网络路由协议设计方法
4.无线自组织网络中CBRP路由算法改进
5.基于动态路由与蚁群优化的移动无线自组织网络算法
因版权原因,仅展示原文概要,查看原文内容请购买。
I. J. Computer Network and Information Security, 2013, 10, 1-10Published Online August 2013 in MECS (/)DOI: 10.5815/ijcnis.2013.10.01Security Algorithms for Mitigating Selfish and Shared Root Node Attacks in MANETsJ.SengathirResearch Scholar, Department of Computer Science and Engineering,Pondicherry Engineering College, Pondicherry, India,j.sengathir@R.ManoharanAssociate Professor, Department of Computer Science and Engineering,Pondicherry Engineering College, Pondicherry, India,rmanoharan@Abstract—Mobile ad hoc network is a type of self configurable, dynamic wireless network in which all the mobile devices are connected to one another without any centralised infrastructure. Since, the network topology of MANETs changes rapidly. It is vulnerable to routing attacks than any other infrastructure based wireless and wired networks. Hence, providing security to this infrastructure-less network is a major issue. This paper investigates on the security mechanis ms that are proposed for Selfish node attack, Shared root node attack and the Control packet attack in MANETs with the aid of a well known multicast routing protocol namely Multicast Ad hoc On Demand Distance Vector (MAODV). The security solutions proposed for each of the above mentioned attacks are evaluated with the help of three evaluation parameters namely packet delivery ratio, control overhead and total overhead. The algorithmic solutions thus obtained are analysed in the simulation environment by using ns-2 simulator.Index Terms— MANETs, Selfish nodes, Shared root node, MADOV, Control packet, Sequence numbersI.I NTRODUC TIONA mobile ad hoc network is a self-organizing distributed network in which each and every mobile node performs routing autonomously without any centralized authority [1]. In this Wireless network, the packets are relayed in a multi-hop fashion from the source node to the multicast group members based on the reliability of the nodes present in the routing path. Thus, routing in MANETs necessitates the cooperation of each and every node for successful packet delivery [2]. But the presence of non co-operating nodes i.e., selfish nodes reduces the throughput of the entire network, so an algorithm has to be devised for handling selfish nodes [3]. In this paper, we propose a reactive mechanis m called Secure Destined Packet algorithm which can detect and prevent the selfish behaviour based on the calculation of both the cut off ratio and the packet delivery ratio computed on each of the mobile nodes in the network.In MANETs, the transmissions of data between the groups of hosts are identified through a unique group destination address. But still, the security issues of MANETs in group communications are more challenging because of the commitment of multiple senders and multiple receivers [4]. Although several types of security attacks in MANETs have been studied in the literature, the focus of earlier research was only on unicast (point-to-point) applications [5-7]. The impact of security attacks on multicast scenario of MANETs has not yet been explored. Especially in case of MAODV protocol, the reliability of the data transfer depends on the shared root node or the rendezvous point of each multicast group. Hence securing shared root node becomes a necessary task. In order to make the shared root node more secure, the group leader election algorithm becomes essential.The protocol used for our study is the MAODV Protocol. Some of the striking features of the protocol are enumerated below. Multicast Ad hoc On-demand distance vector protocol (MAODV) is an enhanced multicast version of A ODV Protocol, where all the members of the multicast group are formed into a multicast shared tree [8]. The tree formation includes the non-members and the root of the tree is called the group leader. Multicast data packets are relayed among the tree nodes [9]. The salient feature of the MAODV protocol is about how they form the tree, repair the tree when link break occurs and to join the existing is disconnected tree into a new tree. The four types of packets supported by MAODV are RREQ, RREP, MACT and GRPH [10]. A node broadcasts a RREQ only when it is a member node and if it wants to join the tree or when it is a non-member node but has a data2Security Algorithms for Mitigating Selfish and Shared Root Node Attacks in MANETspacket to be delivered to the group [11]. When the node receives the RREQ, it sends the reply by sending the RREP using unicast routing. GRPH is the group hello packet, which is send periodically by group leader to know whether the group members are within the range of communication [12]. MA CT packet originates only when there is a need for group communication.The rest of the paper is organized as follows: In section 2 we discuss on the literature a survey and the list of possible attacks in MANET. In section 3, 4 and 5, we discuss elaborately on the proposed detection and mitigation algorithm of selfish node, shared root node attack and control packet attack respectively. The detailed performance analyses for the proposed algorithms relative to the existing traditional MAODV are discussed in section 6. Finally, we conclude in section 7 with future scope.II.L ITER ATUR E R EVIEWFrom the recent past, many secure mitigation algorithms were proposed varying from trust basedsolution to energy based algorithms for selfish nodebehavior and shared root node attack. These algorithms were implemented based on the confidence level that aeach and every node possesses about their neighbournodes and they are mainly employed to tackle energy, congestion or bandwidth allocation. Some of the works present in the existing literature are enumerated below.Ching-Chuan Chiang et al [13] proposed a multicast routing protocol that needs only minimal infrastructure. This protocol makes its profit by exploiting the broadcast facilities of the wireless channel which is present implicitly in the network. The design protocol is a hybrid protocol that shows the property of both flooding and shortest multicast tree.S. Kumar Das et al [14] proposed a reactive multicastrouting protocol which performs its routing by buildingand maintaining a shared meshes. This shared mesh isformed by the group of core based trees.H. Yang et al [15] determined the genuineness of themobile nodes with the help of one way hash function.This one way hash function was manipulated based on the initial input iteratively. The obtained output can be used for authentication. This mechanism also enables to find any fault in the network using explicit acknowledgementS.Roy, V.G.Addada, S.Setia and S.Jajodia [16]proposed detection and prevention solutions to various attacks on multicast tree maintenance. The various attacks against route discovery and establishment are RREP-INV, MACT (J) –MTF and RREP-INV, MACT (P) –PA RT. They elaborated on the shared root node attack, how they occur and how they can be mitigated. They have explained about the clear scenario of how the shared multicast tree are formed and how a node or a group of nodes join a source multicast tree.C.Demir and aniciu et al [17] proposed an auction based routing methodology for MANETs. The auction based methodology was implemented with the following properties in mind, the first one is that the route can be selected depending upon minimum cost calculated from individual node bids. The second one i s that the payment allocated to the winning route should be the one requested by the second smallest biding route. The mechanism is implemented during route discovery following the route discovery process the payment is carried out the specific amount of currency is paid to the intermediate routes.Chi-Yuan Chang et al [18] proposed an efficient bootstrap router which was designed based on the rendezvous point mechanis m. This proposed mechanis m can provide a solution to the RP recovery in case of shared tree network. This work also emphasizes the need of PIM multicast network, which provides one-to-many services like videoconferencing and chat applications.D. Patel et al [19] addressed various security issues against Worm Hole attack. They used the parameter called time of flight which calculates the RTT for each and every node. This work also determines whether mobile nodes are within the communication range or not by using directional antennas.Bing Wu et al [20] suggested mechanism like Watch Dog, Pathrater and IDS for monitoring, so that the attacks could be prevented with an aid of a reactive solution. This solution mainly concentrates on the key manipulations performed on the mobile node. These computations help to determine whether a node is genuine or not.A. Similar Types of AttacksThere are a number of routing attacks which may reduce the performance of the MAODV protocol. Short descriptions of some of such attacks are given below: Neighbouring Attack:In a normal scenario, each and every participating node first records the id of the packet received. In case of neighbor attack the compromising node simply forwards the packet without recording the packet id assuming that they are neighbours even though they are not in the same radius of communication.Blackmail Attack:In case of black mail attack, the attacker node advertises a genuine node as a compromised node which may carry out some malicious behaviour during routing mechanis m. Such attacks could prevent the source to choose the best path to the destination there by reducing the overall efficiency and throughput in the network. Jelly fish Attack:In this kind of attack, an attacker delays the data packet unnecessarily for some quantum of time before forwarding them. Thus disturbing the performance of the multicast group results in high end to end delay in the networksSybil Attack:In case of Sybil attack, the malicious nodes present in the network topology generates a large number of fake identities to disturb the normal functioning of MANET applications.Resource consumption Attack:In this attack, the malicious node tries to consume the resources like battery power and bandwidth of the other nodes available in the network. This could be established by triggering unnecessary route request control messages, beacon messages packets or stale information to nodes.Sinkhole Attack:In this specific kind of attack, the compromised attacker node tries to get the attraction of the data packet to itself from all other neighboring nodes. This could make all the data flow to flow to particular node and hence the packet may be altered or eavesdropped. Byzantine Attack:In this byzantine attack, an attacked intermediate node or a set of compromised attacker nodes works in collusion and carries out the attacks such as creating routing loops, forwarding packets on non-optimal paths and selectively dropping packets which results in disruption or degradation of the routing services. It is hard to detect byzantine failures. The network would seem to be operating normally in the viewpoint of the nodes, though it may actually be showing Byzantine behavior.Gray hole Attack:The gray hole attack has two phases. In the first phase, a malicious node exploits the protocol to advertise itself having an optimal route to a destination node, with the intention of intercepting packets, even though the route is not optimal. In the second phase, the node drops the intercepted packets with a certain probability. This kind of attack is more difficult to detect than the black hole attack where the malicious node drops the received data packets with certainly. A gray hole may exhibit its malicious behavior different ways. It may drop packets coming from (or destined to) certain specific node(s) in the network while forwarding all the packets for other nodes. Another type of gray hole node may behave maliciously for some time duration by dropping packets but may switch to normal behavior later. A gray hole may also exhibit a behavior which is a combination of the above two, thereby making its detection even more difficult.B. Extract of the Literature ReviewThe review of the literature on the security mechanis ms available for MAODV protocol are lacking in the following issues:i.The methodology by which a shared root nodeattack can be detected and if detected how to isolatethe shared root node is not yet proposed. ii.The algorithm for choosing a node as the group leader, when the existing shared root node is compromised is not yet explored.iii.Non-reputation mechanisms for identifying a selfish node in a group are not available.iv.The use of the sequence number for identifying the control packet attack are not yet established for inthis protocolv.The preventive mechanism that has to be carried out, when the control packets like RREQ, RREP,MACT or GRPH are attacked has not been implemented.Thus it is motivated to detect and provide solution to these attacks on MAODV, in order to enhance the security and the performance of the network.III.S ELFISH N ODE A TTACKSelfish nodes are defined as the mobile nodes that deny forwarding other nodes’ packets but relays the packets originated from them. This behaviour is intentionally for maximizing their resources at the expense of all other neighbor nodes. Hence, Selfish nodes are found to be the most vulnerable in MANET environment. Due to the presence of selfish nodes the packet delivery ratio of the network drastically drops and leads to poor performance of the network. Suitable trust solution is required to mitigate the above said attack. Here we propose an algorithm called Secure Destined Packet Algorithm to secure the MAODV protocol against non cooperating nodes and make the protocol more robust. In Secure Destined Packet Algorithm, the detection of selfish behaviour present in the network topology is identified at two different levels. In the primary level, the selfish nodes are identified based on information obtained from neighbours using two hop acknowledgement mechanisms. In the secondary level, the mobile nodes which are already identified as selfish nodes are screened based on inbound and outbound data Counter.The optimal Packet transmission ratio obtained for each and every node is compared with the cut-off Packet Delivery Ratio. If the optimal Packet transmission ratio is lower, a selfish node is found.Algorithm 1 Pseudo code for Primary Level Secure DestinedPacket AlgorithmAlgorithm 2 Pseudo code of Secondary Level Secure DestinedPacketA. Illustration for Selfish Node Behaviour.Consider a multicast scenario in MAODV as illustrated in the Fig.1. Here ‘S’ is the source node, ‘M’ is the selfish node and ‘R1’, ‘R2’, ‘R3’ are the receiver nodes in the group. When the source ‘S’ present in the first multicast group transmits the data packets to the receiver nodes present in the next multicast group. Since the node ‘M’ is selfish, the packets routed through the S-RV1-RV2-M-R2 path are dropped by that node. Since the receiver ‘R2’ node has not received any packets, it sends RREQs to its neighbors. Further if any RREPs are not received, it sends THACK i.e., two hop acknowledgement for detecting the reliable route and marks the node ‘M’ as the selfish node in the routing table.Figure 1: The Presence of Selfish Node in the MulticastScenario of MAODVThe second level of selfish node detection is achieved by comparing the Packet Transmission Ratio of each and every node with the cut-off ratio computed distributive in each and every node. The mitigation of selfish node is achieved through the help of Rehabilitate ( ).IV. S HARED R OOT N ODE A TTACKIn case of shared root node attack, the attacker node disguises a tree node and sends a MACT (P) packets i.e., a tree prune control packet to all the nodes’ present in the multicast tree. If a downstream node has one and only downstream link and if it is a non member, it prunes itself and sends a prune message to its entire downstream node. This may cause multicast tree to be pruned. So the multicast pruning may disturb the group communication by not relaying the packets to the multicast members as well as the non members. Here, we propose a Detecting Shared Root Node algorithm to identify the mobile nodes which tries to exhibit shared root node behaviour. The identified attacker nodes are removed and new zone leader is elected by means of the secure zone leader election algorithm. The newly elected zone leader will update its entries in the multicast table. The zone is reconstructed with the help of newly elected zone leaderAlgorithm 3: Pseudo code for mitigating Shared root nodeattack.Algorithm 4 Secure Zone leader Election algorithmA. Illustration for Shared Root Node Behaviour.The source node D present in the first multicast tree wants to send data to receivers R1, R2 and R3, the rendezvous point RV of group 1 or group 2 may be compromised as shown in Fig. 2.Figure 2: The Presence of Shared Root Node Attack in theMulticast Scenario of MAODVThis can be counteracted by electing a zone leader which has maximum reputation Probability in the multicast group. In this scenario, the newly elected zone leader is E in group 1 and G in group 2 according to their Reputation Probability factor.V. C ONTROL P ACKET A TTAC KIn case of MAODV, the route establishment between the source node and the receiver nodes in group communication is achieved through control packets namely RREQ, RREP, GRPH and MACT. In our proposed solution, we have devised an algorithm mainly for detecting RREQ and RREP control packet attack. This algorithm makes use of the sequence number for detecting the control packet attack, where sequence number is the monotonically increasing number when the packet relays from one hop to the other hop. The following algorithm detects the control packet attack using sequence number.Algorithm 5: Pseudo code for detecting control packet attack A. Illustration for Control Packet Attack .Initially the source node S sends RREQs to all possible routes and waits for time period called Simclock for RREPs through reverse route as shown in Fig. 3.Figure 3: The Presence of Control Packet Attack in theMulticast Scenario of MAODVComputeEnd forIf M is not malicious RREPs are acknowledged, it stores the predecessor node id, current sequence number, destination sequence number and Multicast identifier in the Multicast Table. In case, the node M behaves as malicious node then the detection of control packet attack is performed by comparing the current node’s sequence number with destination node’s sequence number present in the control packet. If the current node sequence number is less than the destination node sequence number, the source node initiates the forwarding of data packets. If not, identify the predecessor node M is identified as malicious node and isolated. Finally call for retransmission of RREQs by the source node S.VI.S IMULATION AND R ESULTSThe simulation environment used for our study is ns-2.26.This simulation environment is chosen because of possessing the feature of high scalability especially for large scale wireless communication networks. We have used the above mentioned simulation platform for analyzing the influence of the selfish, root node and control packet attack based on the evaluation parameters like packet delivery ratio, control overhead and total overhead. In our simulation environment, 50 mobile nodes are placed in a terrain size of 1000X1000. Each source transmits packets of size 512 bytes each at various time intervals. The refresh interval time is set as20 seconds while the channel capacity is 2 Mbps.A.Performance MetricsThe performance analysis of multicast ad-hoc on demand distance vector protocol was carried out with the help of the following evaluation parameters.Packet Delivery Ratio:Packet delivery ratio may be defined as the ratio of the total number of data packets received to the total number of the data packets sent towards the multicast group in a multicast session.Control Overhead:Control overhead may be defined as the ratio of the sum of control data bytes needed by the source to explore the optimal route between the source and the receiver group to the total number of application data bytes transmitted.Total Overhead:Total overhead may be defined as the ratio of total number of packets comprising of both the control packets and data packets required for establishing a multicast session to the number of data packets sent towards the group.B.Simulation ParametersThe following table 1 illustrates the parameters for simulation study.Table 1 Simulation ParametersC.Performance Evaluation of Secured PacketDestination Algorithm for Selfish NodesPacket Delivery Ratio:In ideal conditions, the maximum packet delivery ratio of MAODV protocol is 97%. The performance of the protocol crumbles based on the number of selfish nodes present in the multicast scenario and reaches a minimum of 58%. The Fig. 4 shows the performance analysis of Secured Destination Packet Algorithm based on Packet Delivery RatioFigure 4: Performance Analysis of Secured Destination Packet Algorithm based on Packet Delivery Ratio.But When Secure Destination Packet Algorithm is deployed, the packet delivery ratio increases to an extent of 21%.Control OverheadThe presence of the selfish node behavior increases the control overhead in the MAODV protocol, whichreduces the effective group communication. Hence the control overhead has to be reduced for ideal performance of the protocol. The Fig. 5 shows the Performance Analysis of Secured Destination Packet Algorithm based on Control Overhead.Figure 5: Performance Analysis of Secured Destination Packet Algorithm based on Control Overhead.Control overhead considerably increases in the presence of the selfish node attack to an extent of 27% but when mitigation algorithm is deployed it shows a decrease of 25%.Total OverheadThe presence of the selfish node increases the total overhead in the MAODV protocol, thus by affecting the effective group communication. Hence the total overhead has a greater impact on the performance of the protocol. The Fig. 6 shows the performance Analysis of Secured Destination Packet Algorithm based on Total Overhead.Figure 6: Performance Analysis of Secured Destination Packet Algorithm based on Total Overhead.The total overhead considerably increases in the presence of the Selfish node attack to an extent of 32% but when mitigation algorithm is deployed it shows a decrease of 30%. D.Performance Evaluation of Secured Zone LeaderElection Algorithm.Packet Delivery RatioIn ideal conditions, the maximum packet delivery ratio of MAODV protocol is 97%. The performance of the protocol crumbles based on the number of selfish nodes present in the multicast scenario and reaches a minimum of 57%. The Fig. 7 shows the Performance Analysis of Secured Zone Leader Election Algorithm based on Packet Delivery Ratio.Figure 7: Performance Analysis of Secured Zone LeaderElection Algorithm based on Packet Delivery Ratio.But When Secure Zone leader election Algorithm is deployed, the packet delivery ratio increases to an extent of 33%.Control OverheadThe presence of the shared root node attack increases the control overhead in the MAODV protocol, which reduces the effective group communication. Hence the control overhead plays a vital impact on the performance of the protocol. The Fig. 8 shows the Performance Analysis of Secured Zone Leader Election Algorithm based on Control Overhead.Figure 8: Performance Analysis of Secured Zone Leader Election Algorithm based on Control Overhead.The control overhead considerably increases in the presence of the selfish node attack to an extent of 27% but when mitigation algorithm is deployed it shows a decrease of 25%.Total OverheadThe presence of the selfish node attack increases the total overhead in the MAODV protocol, which reduces the effective group communication. Hence the total overhead has a greater impact on the performance of the protocol. The Fig. 9 shows the Performance Analysis of Secured Zone Leader Election Algorithm based on Total Overhead.Figure 9: Performance Analysis of Secured Zone Leader Election Algorithm based on Total Overhead.The total overhead considerably increases in the presence of the Selfish node attack to an extent of 32% but when mitigation algorithm is deployed it shows a decrease of 30%.E.Performance Evaluation of Sequence Numberbased Detection Algorithm for Control PacketAttack.Packet Delivery RatioIn ideal conditions, the maximum packet delivery ratio of MAODV protocol is 97%. The Fig. 10 shows the performance Analysis of Control Packet Attack Detection Algorithm using Sequence Number based on Packet Delivery Ratio.Figure 10: Performance Analysis of Control Packet Attack Detection Algorithm using Sequence Number based on PacketDelivery Ratio.The performance of the protocol crumbles based on the number of selfish nodes present in the multicast scenario and reaches a minimum of 57%. But when is sequence number based detection algorithm was deployed, the packet delivery ratio increases to an extent of 21%.Control OverheadThe presence of the control packet attack increases the control overhead in the MAODV protocol, which reduces the effective group communication. Hence the control overhead plays a vital impact on the performance of the protocol. The Fig. 11 shows the performance analysis of Control Packet Attack Detection Algorithm using sequence number based on Control Overhead.Figure 11: Performance Analysis of Control Packet AttackDetection Algorithm using Sequence Number based onControl Overhead.The control overhead considerably increases in the presence of the control packet attack to an extent of 29% but when mitigation algorithm is deployed it shows a decrease of 26%.Total OverheadThe presence of the control packet attack increases the total overhead in the MAODV protocol, which reduces the effective group communication. Hence the total overhead has a greater impact on the performance of the protocol. The Fig. 12 shows the Performance Analysis of Control Packet Attack Detection Algorithm using Sequence Number based on Total Overhead.Figure 12: Performance Analysis of Control Packet Attack Detection Algorithm using Sequence Number based on TotalOverhead.The total overhead considerably increases in the presence of the control packet attack to an extent of 30% but when mitigation algorithm is deployed it showsa decrease of 27%.VII.C ONCLUSION AND F UTUR E E NHANC EMENTSThis paper provides an elaborate description about the existing reactive and tree based multicast protocol MAODV and how the security can be provided for the same by detecting and mitigating the attacks like shared root node attack, selfish node attack and control packet attack. The Performance of the algorithm has been analyzed by varying the number of mobile nodes with respect to the metrics like packet delivery ratio, Control overhead and total overhead.This work can be further proceeded to establish multiple level security, so as to propose security as oneof the QOS in group communication. New Security metrics can be framed and the above proposed algorithms can be analyzed with those metrics.R EFERENCES[1] E.M.Royer and C.E.Perkins. Multicast Operation ofthe Ad hoc On-Demand Distance Vector RoutingProtocol. Proceedings of ACM MOBICOM 2002,pp.207-218, Aug 2002.[2] Seungjoon Lee and Chongkwon Kim. NeighborSupporting Ad Hoc Multicast Routing Protocol. InProceedings of ACM the First ACM InternationalSymposium on Mobile Ad Hoc Networking andComputing, pages 37–44, August 2000.[3] M.Phani, Vijaya Krishna, Dr M.P.Sebastain.HMAODV: History aware on Multicast Ad HocOn Demand Distance Vector Routing. IEEE 2006. [4] S.Buchegger and J-Y L.Boudec. Nodes bearingGrudges: Towards routing security, Fairness, andRobustness in Mobile Ad-Hoc Networks preseentedat tenth Eurominicro workshop on Partallel.Distributed and Networkbased Processing , CanaryIslands, spain, 2002.[5] L.Buttyan and J-P, Hubaux, StimulatingCooperation in Self –organizing Mobile Ad hocNetworks, Mobile Computing and Networking. Pp255-265, 2003.[6] S.Marti, T.J Giuli, i, and M.Baker. Mitigatingrouting misbehavior in mobile ad hoc networks.Mobile Computing and Networking, pp 255-2656,2000.[7] P.Michiardi and R.Molva, CORE: A collaborativereputation mechanism to enforce node cooperationmobile ad hoc networks. presented atCommunication and Multimedia security, Protoroz,Solvenia, 2002.[8] S.Buchegger and J-Y L.Boudec. PerformanceAnalysis of the CONFIDANT protocol: Cooperation of Nodes – Fairness in Distributed Ad-hoc Networks. Presented at tenth Eurominicroworkshop on Partallel, Distributed and Networkbased Processing , Canary Islands, spain, 2002.[9] F.Kargl A.Klenk , S.Schlott and M.Weber.Advanced Detection of selfish or Malicious Nodesin Ad hoc Networks. Presented at 1st EuropeanWorkshop on Security in Ad-Hoc and SensorNetwork. Heidelberg Germany,2004.[10] Md. Amir Khusru Akhtar & G. Sahoo.Mathematical Model for the Detection of SelfishNodes in MANETs. International Journal ofComputer Science and Informatics (IJCSI) ISSN(PRINT): 2231 –5292, Volume-1, Issue-3.[11] A.H Azni, Rabiah Ahmad,Zul Azri Muhamad Noh,Abd Samad Hasan Basari, Burairah Hussin.Correlated Node Behavior Model based on SemiMarkov Process for MANETS. IJCSI InternationalJournal of Computer Science Issues, Vol. 9, Issue 1,No 1, January 2012.[12] C.A.V.Campos and L.F.M.de Moraes. AMorkovian Model Representation of IndividualMobility Scenarios in Ad Hoc Networks and ItsEvaluation. EURASIP Journal on WirelessCommunicationsand Networking.,Pp231 –5292,Volume-1, Issue-3.[13] Ching-Chuan Chiang, Mario Gerla, and LixiaZhang. Forwarding Group Multicast Protocol(FGMP) for Multihop, Mobile Wireless Networks.Cluster Computing, Springer Special Issue onMobile Computing, 1(2):187–196, 1998.[14] Subir Kumar Das, B.S. Manoj, and C. Siva RamMurthy. Dynamic Core-Based Multicast RoutingProtocol for Ad Hoc Wireless Networks. InProceedings of the Third A CM InternationalSymposium on Mobile Ad Hoc Networking andComputing, pages 24–35, June 2002.[15]H. Yang, H. Y. Luo, F. Ye, S. W. Lu, and L. Zhang,Security in mobile ad hoc networks: Challenges andsolutions. IEEE W ireless Communications, vol. 11,pp. 38-47, 2004.[16] S. Roy, V.G. Addada, S. Setia and S.Jajodia,Securing MAODV: Attacks and countermeasures,in Proceedings of. SECON’05, IEEE,2005.[17] Demir, C and Comaniciu C. An Auction basedAODV Protocol for Mobile Ad Hoc Networks withSelfish Node. Communications ICC'07. IEEEInternational Conference in June 2007.[18] Chi-Yuan Chang, Yun-Sheng Yen. Chang-WeiHsiesh Han-Chieh Chao An Efficient RendezvousPoint Recovery Mechanism in MulticastingNetwork. International Conference on Communications and Mobile Computing, 2007. [19] Ashish D. Patel, Jatin D. Parmar and Bhavin I.Shah. MANET Routing Protocols and WormholeAttack against AODV. IJCSNS InternationalJournal of Computer Science and Network Security,Vol.10 No.4, pp. 12-18, April 2010.[20] Bing Wu, Jianmin Chen, Jie Wu, Mihaela Cardei.A Survey on Attacks and Counter measures inMobile Ad Hoc Networks. W IRELESS/MOBILE。
I.J. Information Technology and Computer Science, 2013, 11, 42-53Published Online October 2013 in MECS (/)DOI: 10.5815/ijitcs.2013.11.05A Proposed Model for Web Proxy CachingTechniques to Improve Computer NetworksPerformanceNashaat el-KhameesyProf. and Head of Computers & Information systems Dept- SAMS, Maady Cairo, EgyptE-mail: Wessasalsol@Hossam Abdel Rahman MohamedComputer & Information System Dept- SAMS, Maady Cairo, EgyptE-mail: Hrahman@.eg, Habdel@.egAbstract—one of the most important techniques for improving the performance of web based applications is web proxy caching Technique. It is used for several purposes such as reduce network traffic, server load, and user perceived retrieval delays by replicating popular content on proxy caches that are strategically placed within the network. The use of web proxy caching is important and useful in the government organizations that provides services to citizens and has many branches all over the country where it is beneficial to improve the efficiency of the computer networks performance, especially remote areas which suffer from the problem of poor network communication. Using of proxy caches provides all users in the government computer networks by reducing the amount of redundant traffic that circulates through the network. It also provides them by getting quicker access to documents that are cached. In addition, there are a lot of benefits we can obtain from the using of proxy caches but we will address them later. In this research, we will use web proxy caching to provide all of the above benefits that we are mentioned above and to overcome on the problem of poor network communication in ENR (Egyptian National Railways). We are going to use a scheme to achieve the integration of forward web proxy caching and reverse proxy caching.Index Terms—Web Proxy Caching Technique, Forward proxy, Reverse ProxyI.IntroductionOne of the most well-known strategies for improving the performance of Web-based system is Web proxy caching by keeping Web objects that are likely to be used again in the future in location closer to user. The mechanisms The Web proxy caching are implemented at the following levels: client level, proxy level and original server level. [1], [2]it is known that proxy servers are located between users and web sites for lessening of the response time of user requests and saving of network bandwidth. We also should build an efficient caching approach in order to achieve better response time.Generally, we use web Proxy servers to provide internet access to users within a firewall. For security reasons, companies run a special type of HTTP servers called "proxy" on their firewall machines. When a Proxy server receives any requests from the clients, it forwards them to the remote servers intercepts the responses, and sends the replies back to the clients. Due to we use the same proxy servers for all clients inside of the firewall in the same organization, these clients share common interests and they would probably access the same set of documents and each client tends to browse back and forth within a short period of time, So this provides the effectiveness of using these proxies to cache documents. Therefore, this will increase the hit ratio for a previously requested and cached document on the proxy server in the future. In addition to web caching at proxy server saves network bandwidth, it also provides lower access latency for the clients.Most Web proxy servers are still based on traditional caching policies. These traditional caching policies only consider one factor in caching decisions and ignore the other factors that have impact on the efficiency of the Web proxy caching. Due to this reason these conventional policies are suitable in traditional caching like CPU caches and virtual memory systems, but they are not efficient in Web caching area. [3], [4]We use the proxy cache of the proxy server and it is located between the client machines and origin server. The work of the proxy cache is similar to the work of the browser cache in storing previously used web objects. The difference between them is the browser cache which deals with only a single user, the proxy server services hundreds or thousands of users. The work of the proxy cache is as follow, when the proxy server receives a request it checks its cache at first if therequest is found the proxy server sends the request directly to the client but if the request is not found the proxy server forwards the request to the origin server and after the origin server replies to the proxy server it forwards the request to the client and also save a copy from the request in local cache for future use. The proxy caching is used to reduce user delays and to reduce Internet congestions it is widely utilized by computer network administrators, technology providers, and businesses. [5], [6], [7]The proxy server uses its filtering rules to evaluate the request, so it may use IP address or protocol to filter the traffic. If the request is valid by the filter, the proxy provides the content by connecting to the origin server and requesting the service on behalf of the client in case the required content is not cached on the proxy server. The proxy server will return the content directly to the client if it was cached before by the proxy serverWe must consider the following problems before applying web proxy caching:Size of Cache: In traditional architectures each proxy server keeps records for data of all other proxy servers. This will lead in increasing in cache size and if cache size becomes large this will be a problem because as cache size is larger, Meta data become difficult to be managed. [8]Cache Consistency:We should ensure that Cache Consistency is verified to avoid Cache Coherence problem. Cache Consistency means when a client send requests for data to proxy server that data should be up-to-date. [9]Load balancing: There is must be a limit for number of connections to certain proxy server to avoid the problem of overloaded only one server than the other in case we use load balancing. [10]Extra Overhead:When all the proxy servers keep the records of all the other proxy servers, this will lead to extra overload in the system which already produces congestion on all proxy servers. This extra head due to each proxy server in the system must check the validity of its data with respect to all other proxy servers. [11]In addition to the proxy cache provide some advantages such as a reduction in latency, network traffic and server load, it also provides some more advantages∙Web proxy caching decreases network traffic and network congestion and this automatically reduces the consumption of bandwidth∙Web proxy caching reduces the latency because of the followings:A.When a client sends to the proxy server arequest already cached in the proxy server so inthis case the proxy server will reply directly tothe client instead of send the request to theorigin server.B.The reduction in network traffic will makeretrieving not cached contents faster because ofless congestion along the path and less workloadat the server.∙Web proxy caching reduces the workload of the origin Web server by caching data locally on the proxy servers over the wide area network.∙The robustness and reliability of the Web service is enhanced because in case the origin server in unavailable due to any failure in the server itself or any failure in the network connection between the proxy server and the origin server, the proxy server will retrieve the required data from its local cache.∙Web caching has a side effect that allows us a chance to analyze an organization's usage patterns.In addition to proxy caches provide a significant reduction in latency, network traffic and server load, they also produce set of issues that must be considered. ∙ A major disadvantage is the resend of old documents to the client due to the lack of proper proxy updating. This issue is the focus of this research.∙ A single proxy is a single point of failure.∙ A single proxy may become a bottleneck. A limit has to be set for the number of clients a proxy can serve. Therefore, in all government institution those provide services to citizen, we must be searched about methods and solutions to enhance the efficient of services delivery , and as we know that most places away from Cairo state is facing failure in the network because the lack of infrastructure and possibilities of the services provider (ISP).There has been a lot of research and enhancement in computer technology and the Internet has emerged for the sharing and exchange of information. There has been a growing demand for internet-based applications that generates high network traffic and puts much demand on the limited network infrastructure. We can use addition of new resources to the network infrastructure and distribution of the traffic across more resources as a possible solution to the problems of growing network traffic.Using of proxy caches in the government computer networks is useful to the server administrator, network administrator, and end user because it reduces the amount of redundant traffic that circulates through the network. And also end users get quicker access to documents that are locally cached in the caches. However, there are additional issues that need to be considered by using of proxies. In this study, we will focus on Web proxy caching since it is the most common caching method.1.1 Problem StatementThe governmental organizations which provide services to citizen must target efficient and more reliable services while keeping cost-effective criteria. Throughout this paper we consider the case of the Egyptian National Railway (ENR) datacenter which serve many applications supported to many branches spreaded allover Egypt which are quite faraway from Cairo state. Current infrastructure faces so many challenging problems leading to poor reliability as well as ineffective services and even more discontinuity of such services even at the headquarter datacenter. The main attributes of the problems facing the ENR network are summarized as following:∙ A major problem of the remote site is unstructured and their heavy network traffic.∙The network overloading might result in the loss of data packets∙The origin servers loaded most of the time∙Transmission delay –normal traffic data but low speed of the line.∙Queuing delay due to huge network traffic∙Slow the services that provided to citizens.∙Processing delay –due to any defection of the network device∙Network faults can cause loss of data∙Broadcast delay – due to presence of broadcasting on networkII.Proxy Caching OverviewCaches are often deployed as forward proxy caches, reverse proxy caches or as transparent proxies.2.1 Forward Proxy CachingThe most common form of a proxy server is a forward proxy; it is used to forward requests from an intranet network to the internet.[12]Fig. 2.1: Forward proxy cachingWhen the forward proxy receives a request from the client, the request can be rejected by the forward server or allowed to pass to the internet or [13] retrieved from the cache to the client. The last one reduces the network traffic and improves the performance.On the other hand, the forward proxy treats the requests by two different ways according to the requests are blocked or allowed. In case the request is blocked, the forward proxy returns an error to the client. In case the request is allowed, the forward proxy checks either the request is cached or not; if it is cached, the forward proxy returns cached content to the client. If it is not cached, the forward proxy forwards the request to the internet then returns the retrieved content from the intent to the client.The above figure explains the work of the forward proxy in case the request is allowed but not cached on the forward proxy A. the forward proxy will send the request to the server on the internet then the server on the internet return the required content to the forward proxy and finally the forward proxy return the received content to the client and cached it on its cache for future and same request. The cached content on the forward proxy will reduce the network traffic in the future and actually improves the performance of the whole system.2.2 Reverse Proxy CachingThe other common form of a proxy server is a reverse proxy; it performs the reverse function of the forward proxy, it is used to forward requests from an internet network to the intranet network. [14]This provides more security by preventing any hacking or an illegal access from the clients on the internet to important data stored on the content servers on the intranet network. By the same way, if the required content is cached on the reverse proxy, this will reduce the network traffic and improves the performance.[15]Fig. 2.2: Reverse proxy cachingThe advantages of reverse proxy are∙Solving single point of failure problem by using load balancing for content servers.∙Reducing the traffic on the content servers in case the request is blocked by the reverse proxy. In this case the request is rejected directly by the reverse proxy without interrupt the content servers.Reducing the bandwidth consumes by blocked requests as it is blocked directly by reverse proxy before reaching to the content servers.The function of the reverse proxy is the same as the function of the forward proxy except the request is initiated from the client on the internet to the content servers in the internal network. At first, the client on the internet sends a request to the reverse proxy. If the request is blocked, the reverse proxy returns an error to the client. If the request is allowed, the reverse proxy checks if the request is cached or not. In case the request is cached, the reverse proxy returns the content information directly to the client on the internet. In case the request is not cached, the reverse proxy sends the request to the content server in the internal network then resends the retrieved content from the content server to the client and also cached the content information from the content server for future requests to same content information [16]2.3 Transparent CachingTransparent proxy caching eliminates one of the big drawbacks of the proxy server approach: the requirement to configure Web browsers. Transparent caches work by intercepting HTTP requests and redirecting them to Web cache servers or cache clusters.[17]This style of caching establishes a point at which different kinds of administrative control are possible; for example, deciding how to load balance requests across multiple caches. There are two ways to deploy transparent proxy caching: at the switch level and at the router level. [18]Router-based transparent proxy caching uses policy-based routing to direct requests to the appropriate cache(s). For example, requests from certain clients can be associated with a particular cache.[19]In switch-based transparent proxy caching, the switch acts as a dedicated load balancer. This approach is attractive because it reduces the overhead normally incurred by policy-based routing. Although it adds extra cost to the deployment, switches are generally less expensive than routers.[20]III.Proxy Caching ArchitectureThe following architectures are popular: hierarchical, distributed and hybrid.3.1 Hierarchical Caching ArchitectureCaching hierarchy consists of multiple levels of caches. In our system we can assume that caching hierarchy consists of four levels of caches. These levels are bottom, institutional, regional, and national levels [21]The main object of using caching hierarchy is to reduce the network traffic and minimize the times that a proxy server needs to contact to the content server in the internet or in the internal network to provide the client with needed content information .These multiple caches works in that manner in case of forward proxy, at first the client initiate a request to the bottom level cache. If the needed content information is found and cached on it, it returns this information to the client directly. If this information is not cached on it, it will forward the client request to the next level cache that is institutional. If this cache found the needed information cached on it, it will return it to bottom level cache then the bottom level cache returns them to the client. If the needed information is not cached on it, it will forward the request to regional level. If the needed information is cached on it, it will return the needed information to the institutional level cache then the institutional level cache returns them to the bottom level cache and finally bottom level cache returns them to the client. If the needed information is not found not found on it, it will forward the request to the last level of cache that is national, if the needed information is found on that cache, it works the same way as above till the information reach to the client. If the needed information is not cached on that cache, it will forward the request to the content server on the internet and also repeat the same steps as above till the information reached to the client.In case of the reverse proxy, the same steps above are repeated except the request will forward by reverse way as in the forward proxy. Here, the request will forward from national level cache then to then to regional then to institutional bottom and finally to the content server in the internal network. The important note in caching hierarchy either in case of the forward proxy or the reverse proxy is each cache receives information from another level cache will cache a copy from thatinformation for future need to the same request.Fig. 3.1: Hierarichal caching architecture3.2 Distributed Caching ArchitectureIn distributed Web caching systems, there are no other intermediate cache levels than the institutional caches, which serve each others' misses. In order to decide from which institutional cache to retrieve a miss document, all institutional caches keep meta-data information about the content of every other institutional cache. With distributed caching, most of the traffic flows through low network levels, which are less congested. In addition, distributed caching allows better load sharing and are more fault tolerant. Nevertheless, a large-scale deployment of distributed caching may encounter several problems such as high connection times, higher bandwidth usage, administrative issues, etc.[22]There are several approaches to the distributed caching. Internet Cache Protocol (ICP), which supports discovery and retrieval of documents from neighboring caches as well as parent caches. Another approach to distributed caching is the Cache Array Routing protocol (CARP), which divides the URL-space among an array of loosely coupled caches and lets each cache store only the documents whose URL are hashed to it.[23]3.3 Hybrid CachingA hybrid cache scheme is any scheme that combines the benefits of both hierarchical and distributed caching architectures. Caches at the same level can cooperate together as well as with higher-level caches using the concept of distributed caching.[24]A hybrid caching architecture may include cooperation between the architecture's components at some level. Some researchers explored the area of cooperative web caches (proxies). Others studied the possibility of exploiting client caches and allowing them to share their cached data.One study addressed the neglect of a certain class of clients in researches done to improve Peer-to-Peer storage infrastructure for clients with high-bandwidth and low latency connectivity. It also examines a client-side technique to reduce the required bandwidth needed to retrieve files by users with low-bandwidth. Simulations done by this research group has proved that this approach can reduce the read and write latency of files up to 80% compared to other techniques used by other systems. This technique has been implemented in the OceanStore prototype (Eaton et al., 2004).[25]IV.Design Goals & Proposed ArchitectureTo improve the computer network performance, decrease the workload for data center and ensure continual service improvement, we aim to design efficient mechanisms for reducing the workload of a data center and business Continuity verification and achieve the following goals:∙Reduces network bandwidth usage consumption which leads to reduce network traffic and network congestion∙Decrease the number of messages that enter the network by satisfying requests before they reach the server.∙Reduces loads on the origin servers.∙Decreases user perceived latency∙Reduced page construction times during both normal loading and peak loading∙If the remote server is not available due to a server \crash" or network partitioning, the client can obtaina cached copy at the proxy.4.1 Proposed ArchitectureWe define before two types of proxies, the forward proxy and the reverse proxy. The forward proxy is used to forward clients from the clients on the internal network to the content server in the internet. The reverse proxy is used to forward requests from the clients in the internet to the content server in the internal network. Fig 5-1 shows that the forward proxy serves as a servant for internal clients and as a cache because it cached the content received from the content server on the internet. So for any the same repeated request, the forward server can return the cached content on it to the client directly without backing again to the content server. On the same time the forward proxy does an important rule as it hides the internal clients from outside world as the request is initiated from the forward proxy.Fig. 4.1: Proposed ArchitectureFig 4-1 also shows the reverse proxy that used to forward the requests from external clients to content servers in internal network. In this case the reverse proxy makes encrypting content, compressing content, reducing the load on content servers. It also hides the responses from internal networks and as them come from the reverse proxy which increases the security. It also caches the content and forwards it directly to clients if they repeated again without backing again to the content server. Finally, we can use load balancing to balance between content servers and in this case the reverse proxy and forward the request from the client to any of this content serves which increase the availability of the system.4.2 Proposed Architecture Workflow1The Remote Site client sends a request for Web Application content to the Forward proxy cache. If Forward proxy caching contains a valid version of the Web Application content in its cache, it will return the content to the requesting user. 2If the content requested by Remote Site user is not contained in the Forward proxy cache, the request is forwarded to an upstream Reverse proxy caching.3If the upstream Reverse Proxy Cache has a valid copy of the requested content in cache, the content is returned to Forward proxy cache (Remote Site). Forward proxy cache places the content in its own cache and then returns the content to the Remote Site user who requested the content.4If the upstream Reverse proxy caching does not contain the requested content in its cache, it will forward the request to the Web Application server. 5The Web Application server returns the requested content to reverse proxy caching. Reverse proxy caching places the content in cache.6Web Application server returns the content to reverse proxy caching. Reverse proxy caching server places the content in its cache. Reverse proxy caching server returns the content from its cache to Forward proxy. Forward proxy cache places the content in its own cache and then returns the content to the Remote Site user who requested the content.Fig. 4.2: Proposed workflowV.Performance AnalysisIn this chapter, we evaluate cache performance of web proxy caching for web applications and compare it to the case of not using web proxy caching at all. We will monitor and evaluate the performance of web proxy caching in three cases:∙Without using web proxy caching.∙At the beginning of using web proxy caching.∙After certain period (one month) from using web proxy caching.We will take in our consideration the following parameters in evaluation process ∙Requests returned from the application server.∙Requests returned from cache without verification. ∙Requests returned from the application server, updating a file in cache.∙Requests returned from cache after verifying that they have not changed.5.1 Performance MetricsThe seven main categories of performance metrics are:1.Cache Performance:how requested Web objects were returned from the Web Server cache or from the network. It will be measured the according to2.Traffic: the amount of network traffic, by date, sent through Web Proxy including both Web and non-Web traffic.3.Daily traffic:average network traffic through Web Proxy at various times during the day. This report includes both Web and non-Web traffic.4.Web Application Responses:how ISA Server responded to HTTP requests during the report period.5.Failures communicating:Web proxy Cache encountered the following failures communicating with other computers during the report period.6.Dropped Packets:shows the users who had the highest number of dropped network packets during the report period Users that had the most dropped packets are listed first7.Queue Length:The System\Processor Queue Length counter shows how many threads are ready in the processor queue, but not currently able to use the processor.VI.ResultsIn this section we will investigate the performance analysis of cache, Network traffic, Failure communication, Dropped packets and queue length 6.1 Cache PerformanceThe cache performance results for each of the log files are shown below. The percentage of requests returned from cache without verification is high. It shows that between 38% of all requests result in a request returned from cache without verification, which is consistent with previously published results. Wills and Mikhailov[26] reported that only 15% to 32% of their proxy logs result in requests returned from cache without verification. Yin, et al.[27]revealed that 20% of requests to the server are due to revalidation of cached documents that have not changed. These results are consistent with the results found in our logs as discussed before. However, with the current logs, the number of requests returned from cache without verification has increased a little. This may be due to the duration of the analysis being longer for this study or to the use of different logs. It is assumed that a large fraction of these frivolous requests are due to embedded objects that do not change often.Table 0-1: Cache Performance ResultsStatus Requests% of TotalRequestsTotalBytesObjects returned from the applicationserver20873 59.30 % 617.73 MBObjects returned from cache withoutverification13450 38.20 % 26.93 MBObjects returned from cache afterverifying that they have not changed489 1.40 % 0.99 MB Information not available 354 1.00 % 49.74 KB Objects returned from the applicationserver, updating a file in cache59 0.20 % 14.62 KBTotal 35225 100.00 % 645.72 MB6.2 TrafficThe results for average network traffic through web proxy caching Server at various times during the day at the beginning of using web proxy caching and after certain time from using web proxy caching are in Table below. The results indicate that the average processing time for handling the request is reduced by 43% after certain time of using web proxy caching because proxy caches the previous visited pages and return them directly to client without waste time to ask application server each time.To reflect the physical environment of the network, we have to consider factors influencing traffic. Of various factors influencing traffic, object size is a factor of the objects themselves. Hence, we can reflect the size factor of web object. An average object size hit ratio reflects the factor of object size to object-hit ratio. Average-object Hit Ratio: The cache-hit ratio can indicate an object-hit ratio in web caching. The average object-hit ratio can calculate an average value of an object-hit ratio on a requested page the performance is evaluated by comparing an average object-hit ratio and response time [28]Response Time gain factor (RTGF): This factor give you amount of advantage in web cache response time. [29]。
关于高速公路备份链路中基于5G的VPDN技术的应用分析发表时间:2019-07-19T15:30:12.540Z 来源:《基层建设》2019年第12期作者:黄丽佳[导读] 摘要:高速公路备份链路中基于5G的VPDN技术的应用,可以有效的解决高速公路环境复杂而导致网络不稳定的问题,通过对5G技术与VPDN技术的业务等进行分析,探究了基于5G的VPDN技术备份链路未来的具体应用方法与措施。
广东新粤交通投资有限公司 510000摘要:高速公路备份链路中基于5G的VPDN技术的应用,可以有效的解决高速公路环境复杂而导致网络不稳定的问题,通过对5G技术与VPDN技术的业务等进行分析,探究了基于5G的VPDN技术备份链路未来的具体应用方法与措施。
关键词:高速公路;备份链路;5G;VPDN随着现代移动通信技术的发展,5G通信技术已经成熟,并开始在各个领域中进行应用,在未来高速公路的备份链路中,采用基于5G的VPDN(虚拟拨号专业网络)技术,将可有效解决高速公路收费复杂而疑难的问题,利用5G的VPDN高速网络技术,不仅可有效解决高速公路的收费专用网络问题,而且无疑会大幅提高高速公路信息技术的应用,优化高速公路的备份链路,成为人们关注的焦点之一。
一、基于5G的VPDN技术分析1、5G技术概述5G(fifth-generation)是现代移动通信技术的第五代产品,这也是4G技术的发展,华为目前已经研究了5G核心技术,并已经开始在国家一些地区进行部署,与前几代移动通信技术相比,5G技术能够在28GHz的超高频段上运行,并能够以1Gbps的速度进行传输,而且5G技术的运用,对数据与信息技术的传输最高可以达到两公里的传输距离,采用5G技术下载一部高品质的HD电影,只需要1秒钟就可以完成。
在高速公路备份链路中的应用,可以有效的解决高速公路中的专业网络问题,在未来高速公路中具有十分广泛的前景。
2、VPDN技术(1)VPDN的内涵VPDN技术(Virtual Private Dial -up Networks )是虚拟网络拨号专用网络,通过采用拨号接入的方法,来实现网络链接,在具体的应用中,可以实现多种网络交互的功能,也可以通过认证与授权的方式,与安全的虚拟网络建立链接,实现数据的传递,随着通信技术的发展,将虚拟专用拨号网络与5G技术结合在一起,可在多种环境中进行应用,采用VPDN网络的隧道技术,将用户的信息与项目数据封装在隧道内容中,并能实现较高的传输速度。
基于改进动态扩展环搜索算法的移动自组织网络能耗及时延优化方案曾劼;刘小强【摘要】针对移动自组织网络中传统的阻塞扩展环形搜索方案时延及耗能过多的问题,提出了一种基于路由节点泛洪终止的MANET阻塞扩展环形搜索方案.首先,源节点广播RREQ,当接收到RREP或生命周期结束时转发数据包;然后,为数据包设定一个单位时间的延迟以避免END指令和数据包之间的冲突;最后,中间节点根据接收到的不同消息类型执行不同的行动,如果识别出路由节点,则以当前跳数发送RREP(即Hr)给源节点;否则开启泛洪、重播RREQ.仿真实验验证了所提方案的有效性及可靠性.仿真结果表明,相比传统的BERS方案,所提方案不仅降低了MANET的时延和能耗,同时大大地节省了总成本.【期刊名称】《科学技术与工程》【年(卷),期】2014(014)022【总页数】7页(P250-256)【关键词】扩展环形搜索;移动自组织网络;路由节点泛洪终止;能量消耗;时延优化【作者】曾劼;刘小强【作者单位】贵州理工学院信息网络中心,贵阳550003;重庆邮电大学计算机科学与技术学院,重庆400065【正文语种】中文【中图分类】TP393.02移动自组织网络(mobile Ad hoc network,MANET)[1,2]依靠移动节点的合作来动态建立通信路由,因为节点一般是由电池供电的[3],能源和时效对MANET 尤为重要[4,5]。
为了节省MANET的能量消耗及降低时延,学者们提出了许多路由方案,典型的分层架构是通过聚类结构实现的,例如,文献[6]提出了一种高度聚类启发式算法,基于节点到其他节点的距离计算节点度,该方法存在一个很大的缺点,它没有在任何聚类中限制节点数目的上限,严重影响了聚类的吞吐量和稳定性。
文献[7]提出一个最低ID聚类启发式算法,它为每个移动节点安排一个唯一ID,选择最低ID的节点作为聚类头节点,该方法的吞吐量比最高度方法好,然而,有较小ID的节点往往反复选为聚类头节点,这可能会很快就耗尽电池。
华为5G中级考试(试卷编号141)1.[单选题]在4G到5G的重选过程中,UE通过哪条消息获取5G频率的重选优先级?A)SIB8B)SIB24C)SIB19D)SIB14答案:B解析:2.[单选题]在NR组网下,为了用户能获得接近上行最高速率,其MCS值最低要求应该是多少?A)16B)32C)25D)20答案:C解析:3.[单选题]MU-MIM0特性通过多个UE配对提升小区容量,以下描述错误的是哪一项?A)配对原则:UE间SINR接近B)当前下行最大的MU-MIMO流数为8C)配对原则:UE间信道相关性低D)Massive MIMO支持的流数越大,支持越多UE配对答案:B解析:4.[单选题]以下关于5G 小区存在的寻呼方式的描述,错误的是哪一项?A)RAN 使用 I-RNTI 在 RNA 寻呼B)使用 PDSCH 信道发送寻呼消息C)核心网使用 I-RNT I 在 RNA 寻呼D)核心网使用 5G-S-TMSI 在 TAL寻呼答案:D解析:5.[单选题]以下哪条消息用于指示完成站内RNA更新?A)RRCSetupB)RRCReleaseC)RRCResumeRequest1D)RRCReconfiguration6.[单选题]以下关于单注册下系统间移动性的描述,错误的是哪一项?A)从4G移动到5GUE使用4G-GUTI映射的5G-GUTI发起RegistrationB)R15协议支持5G直接与2G/3G进行互操作C)从5G移动到4GUE使用5G-GUTI映射的4G-GUTI发起TAUD)如果没有N26接口,系统间互操作可能导致IP地址变更答案:B解析:7.[单选题]G频点栅格定义了NR小区的中心频点,Sub3G的频点号计算公式中△FG1oba1是多少?A)60kHzB)5kHzC)15kHzD)30kHz答案:B解析:8.[单选题]5GSA小区进行异系统移动性测量时,可下发最大多少个LTE测量频点?A)32B)8C)16D)64答案:C解析:9.[单选题]假设SSB物理层周期配置为10ms, 那么同一个MIB块要进行几个周期的SSB扫描?A)8次B)2次C)6次D)4次答案:A解析:10.[单选题]如果需要支持5G到4G的PS切换,那么5GC和EPC之间必须部署什么接口?A)N2B)N26C)N10D)N20答案:B解析:D)UE不活动定时器答案:D解析:12.[单选题]SA组网架构下,5GRAN使用哪层完成Qos映射?A)SDAPB)PDCPC)NGAPD)RLC答案:A解析:13.[单选题]5G系统支持ExtendCP的子载波间隔是多少?A)15KHzB)30KHzC)60KHzD)24KHz答案:C解析:14.[单选题]华为RAN3.1中RNA的更新周期是多少?(不确定)A)1500sB)1200sC)54minD)20s答案:B解析:15.[单选题]gNOdeB通过PDCCH的DCI格式Uo/U1调整TPC取值,DCI长度是多少?A)1bitB)4bitC)2bitD)3bit答案:C解析:16.[单选题]在做NR网络的下行峰值调测时,PDSCHDMRS类型应该配置为以下哪一项?解析:17.[单选题]在PUSCH功率算法中,路损是通过以下哪一个信道或信号的测量计算出来的A)CSI-RSB)SSBC)PDCCHD)PDSCH答案:B解析:18.[单选题]3GPP标准定义的5G传播模型中,对杆站和宏站天线的典型高度定义分别是多少?A)20米,40米B)10米,25米C)15米,25米D)10米,25米答案:B解析:19.[单选题]以下属于PUCCH短格式的是哪一项?A)Format1B)Format4C)Format3D)Format2答案:D解析:20.[单选题]在5G异频重选流程中,终端通过哪个消息获取异频的重选优先级?A)SIB2B)SIB3C)SIB4D)SIB5答案:C解析:21.[单选题]NSA终端在eNodeB侧接入过程中,如果eNdoeB收到ME的消息,里面携带了nRestriciontion的指示,以下 哪个是问题的原因?( )A)终端不支持NSA功能D)EPC相关网元不支持NSA答案:B解析:22.[单选题]NR上下行采用相同的HARQ机制,具体是哪一项?A)同步非自适应B)异步非自适应C)异步自适应D)同步自适应答案:C解析:23.[单选题]测试过程中,测试软件Probe中的哪个参数可以用来评估下行频偏?A)Time DifferenceB)UL TAC)Frequency offsetD)center Frequepey答案:B解析:24.[单选题]5G的CSI-RS最大支持多少个端口?A)64B)16C)32D)48答案:C解析:25.[单选题]以下关于QFI描述哪一项是错误的?A)用于在5G系统中标识一个QoS FlowB)在PDU会话中具有相同QFI的业务流接收相同的业务转发处理C)QFI在每个UE中都是唯一的D)QFI被封装在N3和N9的协议报头中答案:C解析:26.[单选题]以下关于基于覆盖的NG-RAN至E-UTRAN切换的参数配置的描述,错误的是哪一项?A)需打开系统间业务移动性算法开关B)切换方式包括切换和重定向C)4G频段越低的频点,建议连接态频率优先级越小解析:27.[单选题]在Option3x的架构下,承担数据分流的协议层是哪一项A)锚点PDCPB)锚点RLCC)NR PDCPD)NR RLC答案:C解析:28.[单选题]NSA场景下,低频NR小区异频测量GAP参数由哪个网元计算?A)gNodeBB)MAEC)eNodeBD)UE答案:C解析:29.[单选题]以下哪项不是对CPE做NR下行峰值调测时的建议操作?A)调制阶数设置支持256QAMB)MIMO层数设置为4流C)把CPE终端放置在离AAU两米处D)时隙配比设置为4:1答案:C解析:30.[单选题]NR系统中1个REG包含了多少个RE?A)6B)14C)12D)9答案:C解析:31.[单选题]在gNodeB中.以下哪个指标同时用于NSA和SA组网?A)NG接口寻呼次数B)Qos流建立成功次数C)小区调度用户数D)RRC连接用户数答案:C32.[单选题]用NR覆盖高层楼宇时,NR广播波束场景化建议配置成以下哪项?A)SCENARIO_6B)SCENARIO_13C)SCENARIO_1D)SCENARIO_0答案:B解析:33.[单选题]以下那条信令消息可以判断PScell变更为站间辅站小区变更准备成功A)SgNB Change RequiredB)SgNB Modification RequiredC)SgNB Addition Requst AckonwledgeD)SgNB Modification Request Acknowledge答案:C解析:34.[单选题]以下关于gNodeB的QoS管理的描述,错误的是哪一项?A)仅包含无线资源映射,无传输资源映射B)控制面功能往要负责业务的建立、修改和维护C)维护由特定QoS属性定义的数据流量D)提供不同QoS属性值的业务承载答案:A解析:35.[单选题]5G 控制信道采用预定义的权值会生成以下哪种波束?A)半静态波束B)静态波束C)宽波束D)动态波束答案:B解析:36.[单选题]在Optio3x架构下,双连接建立过程中,eNB会向MME发送ERAB Modity IND消息,该消息中携带的IP地址是属于以下哪个网元的?A)gNodeBB)eNodeBC)SGWD)UE答案:A解析:A)切换环节B)判决环节C)触发环节D)测量环节答案:C解析:38.[单选题]按照一般的配置原则,以下关于终端空闲态异系统测量的描述,正确的是哪一项?A)终端在4G网络时默认一直测量5GB)终端在4G网络时需要通过门限来判断是否启动5G测量C)空闲态测量参数通过测量配置消息获取D)终端在5G网络时默认一直测量4G答案:A解析:39.[单选题]以下哪条消息不能通过SRB3承载?A)RRCReleaeB)RRCReconfigurationC)MeasurementReportD)RRCReconfiguurationComplete答案:A解析:40.[单选题]PBCH的DMRs的频域位置和以下哪个参数相关?A)BandwidthB)ce11IDC)PCID)sI-RNII答案:C解析:41.[单选题]以下哪一个信道支持动态权值波束赋形?A)SSBB)PDSCHC)CSI-RSD)PDCCH答案:B解析:42.[单选题]NSA组网下NR的QoS配置信息可能包含在哪些消息中?C)SgNB Addition RequestD)SgNB Modification Request答案:B解析:43.[单选题]NR小区峰值测试中,UE支持256QAM时MCS最高能达到多少阶?A)26阶B)27阶C)29阶D)28阶答案:B解析:44.[单选题]华为基站载波聚合特性中不会用到哪个事件?A)A4B)A5C)A2D)A6答案:A解析:45.[单选题]在NSA组网中,如果只有5G发送了掉话,那么终端收到的空口消息是以下哪条?A)RRC ReleaseB)RRC ReconfigurationC)SCG Failue InfoD)RRC Reestablishment答案:C解析:46.[单选题]在PUSCH功率算法中,路损是通过以下哪个信道的测量计算出来的?A)SSBB)PDSCHC)PDCCHD)CSI-RS答案:A解析:47.[单选题]在下行调度计算用户的RB教量时以下哪个因素是不涉及的?A)RI上报B)MCS答案:A解析:48.[单选题]以下关于PDSCH DMRS的配置方案中,哪种方案的开销是最大的?A)Type2 前置单符号B)Type1 前置单符号C)Type2 前置双符号D)Type1 前置双符号答案:C解析:49.[单选题]SS-RSRP的测量是关于以下哪种信道或信号?A)PBCH DMRSB)SSB DMRSC)SSSD)PSS答案:C解析:50.[单选题]以下哪种格式PUCCH不存在DMRS?A)Format0B)Format1C)Format2D)Format3答案:A解析:51.[单选题]NSA场景中LTE小区切换过程称为以下哪项?A)主站(MeNB)切换B)辅站(PSCe11)切换C)辅站(PSCe11)变重D)主站(MeNB)变更答案:A解析:52.[单选题]MU-MIMO是提升小区容量非常有效的手段,不同信道支持的流数是不一样的,以下描述错误的是哪一项?A)PDSCH最大流数为16B)PUSCH最大流数为8C)PDCCH最大流数为8解析:53.[单选题]N.SA网络中,在以下哪条消息之后标示着UE完全接入5G网络?A)SgNodeB:Addi tion Comlete之后B)SgNodeBAdditionRequest之后C)UE完成在SgNodeB上的RA之后D)RRCConnection Reconfi guration之后答案:C解析:54.[单选题]5G 的CSI-RS最大支持多少个端口?A)16B)48C)64D)32答案:D解析:55.[单选题]在上行功率控制过程中,SRS的功率控制和哪个信道的功率控制是关联的?A)PRACHB)PUSCHC)DMRSD)PUCCH答案:B解析:56.[单选题]以下关于载波聚合和双连接的描述,正确是哪些项?()A)CA在PDCP层进行分流,而DC在MAC层进行分流B)CA和DC都是同时与多个小区连接,以获取更多的带宽资源传输数据C)DC可以在不同的制式间实现,但CA只能在同制式间实现D)当承载网络的时延较大时,推荐使用CA答案:B解析:57.[单选题]NSA网络中,以下哪一层的统计最接近用户的体验速率?A)RRC层B)RLC层C)物理层D)PDCP层答案:D58.[单选题]以下哪-项原因不会导致IBLER升高?A)SINR陡降B)天线不平衡C)强干扰D)上行TA异常答案:B解析:59.[单选题]5G小区中,UE的激活BWP带宽为100RB,则相对应关联的CSI-RS资源的最小RB个数是A)24B)100C)32D)64答案:C解析:60.[单选题]5G 网络开启Inactive态功能后,以下描述错误的是哪一项?A)RRC 建立次数减少B)X2 口消息可能增加C)Uu 口寻呼消息可能增如D)RRC 重配次数增加答案:D解析:61.[单选题]为了解决NR网络深度覆盖的问题,以下哪项措施是不可取的?A)采用低频段组网B)使用 Lampsi te提供室分覆盖C)增加NR系统带宽D)增加AAU发射功率答案:C解析:62.[单选题]在5G Option3X组网下终端用户面在哪层可能存在多个实体?( )A)RRC层B)RLC层C)SDAP层D)PDCP层答案:D解析:B)RRCCONNRECFGC)SgNB Add REQD)SgNB MOD REQ答案:C解析:64.[单选题]RRC连接建立流程中, RRC Setup Complete消息对应的逻辑信道是哪一项?A)CCCHB)DCCHC)DTCHD)BCCH答案:B解析:65.[单选题]5G系统中一共定义了多少种CQI的映射关系表?A)1种B)2种C)3种D)4种答案:C解析:66.[单选题]影响RANK的因素很多,以下描述错误的是哪一项?A)RF调整构造折射/反射提升RANKB)一般离基站越近RANK更高C)减少机械 下倾角来提升RANKD)SU MIMO多流开关需打开答案:C解析:67.[单选题]3G PP标准定义的5G 传播模型中,对杆站和宏站天线的典型高度定义分别是多少?A)20米,40米B)10米,40米C)15米,25米D)10米,25米答案:D解析:68.[单选题]NSA网络中,以下哪条消息之后标示着UE完全接入5G网络?A)SgNodeB:Addi tion Comlete之后D)RRC Connection Reconfiguration之后答案:C解析:69.[单选题]以下哪种Rate Matching生效后,支持不与PDSCH数据共符号?A)PDCCH Rate MatchingB)CSI-RE Rate MatchingC)SSB Rate MatchingD)TRS Rate Matching答案:D解析:70.[单选题]N41频段的GSCN频点号步长是多少A)2B)16C)3D)1答案:C解析:71.[单选题]载波聚合的SCell去激活管理中不会参考哪一项参数指标?A)SCell CQI对应的频谱效率B)UE上行调度的BSR数据量C)SCell SRS对应的频谱效率D)UE下行RLC缓存中的数据量答案:C解析:72.[单选题]对比QoS和切片SLA,以下描述错误的是哪一项?A)Qos是切片SA的基础B)切片SLA在较长时间范围内调整资源满足业务C)切片支持数最大可到254个D)切片SL A对准用户体验、系统性能、部署运维等全方位需求答案:C解析:73.[单选题]以下哪种组网架构属于NE-DC方案?A)Option4B)Option3C)Option574.[单选题]RAN3.0版本,异频切换使用哪个事件触发A)A3B)A4C)A5D)A6答案:C解析:75.[单选题]FR1频段下要实现100M带宽配置,最低的子载波间隔是多少?A)15kHzB)30kHzC)120kHzD)60kHz答案:B解析:76.[单选题]N310计数器主要是对下行链路质量进行检测,当满足以下哪一个条件是将触发失步指示A)PDCCH的BLERB)PDCCH的BLER>Qin限C)PDCCH的BLERD)PDCCH的BLER>Qout门限答案:D解析:77.[单选题]针对公共的PDCCH搜索空间,其最小的CCE格式是哪种?A)8CCEB)16CCEC)4CCED)2CCE答案:C解析:78.[单选题]Massive MIMO的哪一项增益不能改善系统覆盖性能?A)阵列增益B)空间复用增益C)干扰抑制增益D)空间分集增益79.[单选题]以下哪项不是对CPE做NR下行峰值调测时的建议操作?A)时隙配比设置为4:1B)调制阶数设置支持256QAMC)MIMO层数设置为4流D)把CPE终端放置在离AAU两米处答案:D解析:80.[单选题]针对60KHZ的SCS配置,一个无线帧中包含了多少个时隙?A)40B)80C)160D)20答案:A解析:81.[单选题]SA组网中,UE做小区搜索的第一步是以下哪项?A)帧同步,获取PCI组编号B)获取小区信号质量C)半帧同步,获取PCI组内IDD)获取小区其它信息答案:C解析:82.[单选题]在SA组网中,如果只有5G侧发送了掉话,那么终端收到的空口消息是以下哪条?A)RRC ReleaseB)RRC ReconfigurationC)SCG Failure InfoD)RRC Reestablishment答案:A解析:83.[单选题]以下关于NSA场景下UE在5C侧接入的描述,错误的是哪一项?A)UE和基站之间没有RRC信令B)优先采用基于非竞争的随机接入流程C)UE无需进行小区选择判决D)UE通过sync raster进行扫描找到SSB的位置答案:D解析:100MHz)的载波中心频段间隔最大为多大?A)89.8MHzB)99.8MHzC)69.8MHzD)79.8MHz答案:D解析:85.[单选题]控制RRC重建的定时器,是以下哪一项?A)T300B)T301C)T304D)UE不活动定时器答案:A解析:86.[单选题]以下关于SSB广播波束场景化的描述,错误的是哪一项?A)高楼场景,使用垂直覆盖比较宽的波束B)既有高楼又有广场,采用水平和垂直角度都大的波束C)场景化波束不用考 虑覆盖环境D)广场场景,近点使用宽波束答案:C解析:87.[单选题]在5G 异频重选流程中,终端通过哪个消息获取异频的重选优先级?A)SIB2B)SIB3C)SIB4D)SIB5答案:C解析:88.[单选题]64T64R AAU支持的NR广播波束的水平3dB波宽,最大可以支持多少?A)65°B)90°C)45°D)110°答案:D解析:B)15KhzC)60KhzD)30Khz答案:D解析:90.[单选题]如果需要开启干扰随机化调度,那么站内三个小区的PCI需要满足什么原则A)PCI mod 3错开B)PCI mod 8错开C)PCI mod 6错开D)PCI mod 4错开答案:A解析:91.[单选题]以下关于预调度的描述,错误的是哪一项A)预调度只用于上行调度B)该算法用于降低初始调度时延C)采用基本预调度时,无论UE是否有业务请求,只要调度资源有剩余,基站就会持续进行调度D)预调度功能也需要终端侧的支持答案:C解析:92.[单选题]N.R小区载波聚合的数据分流在哪层?A)MACB)SDAPC)RLCD)PDCP答案:A解析:93.[单选题]以下哪条信令消息可以判断PSce11变更为站间辅站小区变更准备成功?A)SgNB Modi fication Request AcknowledgeB)SgNB Modification RequiredC)SgNB Addition Request Acknow ledgeD)SgNB Change Required答案:C解析:94.[单选题]以下哪一个测量对象可以反映特定两小区的切换性能,统计系统内特定两两小区之间的切换指标?C)NRDUCELLTRPD)NRCellRelation答案:D解析:95.[单选题]在5G C中,以下哪个模块用于用户的鉴权管理?A)ANFB)AUSFC)PCFD)SMF答案:B解析:96.[单选题]空闲态UE,服务小区信号质量处于什么情况下,会执行异频高优先级重选邻区测量?A)服务小区信号质量小于一定门限时B)服务小区信号质量大于一定门限时C)服务小区信号质量等于一定门限时D)对服务小区信号质量没有要求答案:D解析:97.[单选题]以下哪个参数不用于PRACH功率控制?A)preamblereceivedt argetpowerB)preambletransmaxC)PRACH-CONFI guration indexD)SSB Tx Power答案:C解析:98.[单选题]以下描述错误的是A)PUCCH短格式适用于高频场景B)PUCCH长格式适用于高频场景C)上行同步定时器对于高速场景设置越小越好D)上行同步定时器对于高速场景设置越大越好答案:B解析:99.[单选题]以下关于5GQos流的描述,错误的是哪一项?A)1个QoS流在空口可以映射到多个DRBB)1个QoS流在空口只能映射到1个DFRB答案:A解析:100.[单选题]当前华为5G AAU可调电下倾角的调整粒度为以下哪项?A)1B)2C)0.1D)1.5答案:A解析:101.[单选题]5G 空口灌包时测出来的速率是哪个协议层的速率?A)PDCPB)RLCC)MACD)PHY答案:C解析:102.[单选题]在配置5G外部小区命令中,以下哪一个参数无需配置?A)gNodeB ID长度B)TACC)PCID)gNodeB ID答案:A解析:103.[单选题]对于NSA组网,gNodeB必须广播的系统消息是哪一项?A)MIBB)SIB1C)SIB5D)SIB5答案:A解析:104.[单选题]5G RAN2.1 64T64R的AAU可以最多支持多少种广播波束场景配置?A)17B)3C)8D)5105.[单选题]当子载波带宽为60kHz时,每帧包含多少个时隙?A)10B)80C)20D)40答案:D解析:106.[单选题]在RAN3.0版本中,针对SA且网的移动性算法。
第52卷第2期电力系统保护与控制Vol.52 No.2 2024年1月16日Power System Protection and Control Jan. 16, 2024 DOI: 10.19783/ki.pspc.230685光储式5G通信基站集群灵活性聚合与协同调度策略张禄澳1,范一芳1,2,王宇轩1,周 斌1,窦 青1(1.湖南大学电气与信息工程学院,湖南 长沙 410082;2.山西省交通科技研发有限公司,山西 太原 030032)摘要:5G通信基站通常配备光储,数量庞大、功耗可调,是一种优质的电力灵活性调节资源。
提出了多类型光储式5G基站集群灵活性资源聚合方法以及参与电网调峰的协同调度策略。
首先,分析休眠机制下多类型基站功耗可调特性与计及基站备用电量的储能调节能力。
基于极限场景思想,构建了光储式5G基站的灵活性空间量化模型。
在此基础上,利用闵可夫斯基和法刻画异构基站柔性资源的时空耦合能量轨迹,得到海量基站集群的灵活性资源聚合可调域。
其次,建立了基站集群聚合资源参与电能量市场和辅助服务市场的协同调度优化模型,提出了基于交替方向乘子法(alternating direction method of multipliers, ADMM)的分层分布式基站集群协同优化调度策略,将大规模基站集群调度问题降维分解为统一协同调峰功率响应、聚合功率自治调度和基站集群功率分配3个子问题进行求解。
通过算例对比分析可知,所提策略可降低通信基站69.86%的用能成本,为提升通信资源利用率和电力系统灵活调节能力提供了有效手段。
关键词:5G基站;灵活性聚合;闵可夫斯基和;电网调峰;分布式优化Flexible aggregation and coordinated scheduling strategy for renewable powered 5Gcommunication base station clustersZHANG Luao1, FAN Yifang1, 2, WANG Yuxuan1, ZHOU Bin1, DOU Qing1(1. College of Electrical and Information Engineering, Hunan University, Changsha 410082, China;2. Shanxi Transportation Technology Research & Development Co., Ltd., Taiyuan 030032, China)Abstract: 5G communication base stations are numerous and usually equipped with photovoltaic and energy storage and their power consumption is adjustable. It is a high-quality resource for flexible power regulation. This paper proposes a flexible resource aggregation method for multi-type renewable powered 5G communication base station clusters and a coordinated scheduling strategy for participating in power system peak shaving. First, the adjustable power consumption characteristics of multiple types of base stations under the sleep mechanism and the regulating capacity of energy storage considering the reserve power of the base station are analyzed. A flexibility space quantification model considering the impact of the uncertainty of solar power generation and base station communication load on the power feasible region of flexible resources for renewable powered 5G base stations is formulated based on the idea of limit scenarios. The flexible resource aggregation adjustable domain of a massive base station cluster is obtained using the Minkowski sum method to depict the spatio-temporal coupled energy trajectory of heterogeneous base station flexibility resources. Second, an optimal scheduling optimization model for base station flexibility resources participating in the electric energy market and auxiliary service market is established. Also a hierarchical distributed day-ahead peak shaving scheduling strategy based on the ADMM algorithm is proposed. The problem of large-scale base station cluster scheduling is decomposed into three sub-problems: unified collaborative peak shaving power response, aggregated power autonomous scheduling, and base station cluster power allocation for an iterative solution. Through the simulation test and comparative analysis, the proposed strategy can reduce the energy cost of communication base stations by 69.86%. This provides an effective means to improve the utilization of communication resources and the flexible regulation capability of the power system.This work is supported by the National Natural Science Foundation of China (No. 52277091 and No. 51877072).Key words: 5G communication base station; flexibility aggregation; Minkowski sum; power system peak shaving;distributed optimization基金项目:国家自然科学基金项目资助(52277091,51877072);湖南省杰出青年基金项目资助(2021JJ10019)- 102 - 电力系统保护与控制0 引言2021年国家工业和信息化部印发的《“十四五”信息通信行业发展规划》明确指出要全面部署通信网络基础设施,标志着我国5G通信基站进入了高速增长期[1]。
I. J. Computer Network and Information Security, 2013, 10, 24-29Published Online August 2013 in MECS (/)DOI: 10.5815/ijcnis.2013.10.04Performance Improvement of Cache Management In Cluster Based MANETAbdulaziz Zam, N. MovahediniaComputer Department, Faculty of Engineering University of Isfahan, Isfahan, Iranabduzam2000@, naserm@eng.ui.ac.irAbstract— Caching is one of the most effectivetechniques used to improve the data accessperformance in wireless networks. Accessing data froma remote server imposes high latency and powerconsumption through forwarding nodes that guide therequests to the server and send data back to the clients.In addition, accessing data may be unreliable or evenimpossible due to erroneous wireless links andfrequently disconnections. Due to the nature ofMANET and its high frequent topology changes, andalso small cache size and constrained power supply inmobile nodes, the management of the cache would be achallenge. To maintain the MANET’s stability andscalability, clustering is considered as an effective approach. In this paper an efficient cache management method is proposed for the Cluster Based Mobile Ad-hoc NETwork (C-B-MA NET). The performance of the method is evaluated in terms of packet delivery ratio, latency and overhead metrics.Index Terms — MANET, Cache Management, Cluster Based Caching SystemI.I NTRODUC TIONOne of the key issues in mobile ad-hoc networks (MANETs) is cache management, which improves the transmission capacity of the network. Moreover, perfect placement and control of caching system declines the power consumption.Two basic concerns in MANET’s cache management are how to maintain stability, and the scalability of the cache system. One solution to these problems is cluster based MANET as shown in Fig.1 . Cache management approaches consist of three phases [1 and 15] as shown in Fig.2 . 1- Replacement: this algorithm is responsible for evicting less important or expired data, with time to live (TTL) equal to zero, when the node cache is full and a new data is to be fetched on a request from client. 2- Consistency: this algorithm guarantees that all the copies of a data are identical to the original one on the server.3- Prefetching: this mechanism fetches the most important data items (that have high probability to be requested in the near future) into the caching node and stores them in the cache memory for responding to the future requests and queries.Figure 1. Cluster based MANETFigure 2. Cache management & consistency phases Cache management components are all implemented in the middleware layer that takes place under the application layer and above the transport and network layers in which Cluster Based Routing Protocol (CBRP) can be run [1 and 15]. The main goal of this paper is increasing the Cache hit ratio (the percentage of accesses resulting in cache hits) and increasing cluster hit ratio (the percentage of accesses causing cluster hits). We will attain this target by estimating adaptive TTL for the most requested data items and by using hybrid-based consistency maintenance algorithm that provides weak consistency level. Adaptive TTL calculation forms are trying to estimate the next Inter Update Interval (IUI) for a data on the server. The ideal TTL value is very close to IUI and its counter must reach zero on clients when the data is updated on the server. Both Cluster Head (CH) and the server will collaborate to implement our proposed method. In MANET networks, clustering will provide more stability by implementing hierarchical search in which all the cluster members have to send their requests to their CH first. Clustering also can overcome topologychanges and mitigate the cache size challenges. All MANET nodes have random mobility, so the distribution of the members in the clusters can be assumed uniform. All cluster members are assumed able to reach each other by one hop jump and thus, every member node can use other member’s caches as virtual memory and only one existing copy of any data in a cluster is needed.The rest of this paper is organized as follows. Section 2 presents the related works. The motivations of our proposed mechanism are described in section 3, and the contributions are elaborated in section 4. Section 5 presents the simulation results and the discussion on the choice of our method’s threshold value and finally the concluding remarks are given in section 6.II.R ELATED WOR KSA. ConsistencyConsistency maintenance techniques are categorized into three approaches: 1- Pull or client-based, where a client requests data updates from the source [2]. 2- Push or server-based, where the server sends updates to the clients every time the data items are to be updated [2 and 3]. 3- Hybrid-based, where the data source and the clients collaborate to maintain the data up to date [4, 5 and 12].Cache management and consistency maintenance mechanis ms in usual distributed environments seem to be simple and not proper to be instantly applied to MANET due to its limited cache size and bandwidth, dynamic topology and energy constraint. The common consistency level algorithms are Strong Consistency (SC) and Weak Consistency (WC) [17 and 18]. In SC, all cached copies in the nodes would be up to date at all time but this approach will cause high bandwidth and energy consumption due to the flooding of update messages. In WC, consistency of cached copies is kept between the servers and caching nodes with some latency or deviation, it does not provide warranty that all copies are identical to the original data at the same time, but it conserves energy and avoids unnecessary validation reports. In [3], one new consistency maintenance algorithm was presented, in which both Time To Refresh (TTR) value and query rate of any data are used to help data source to make decision and to push updates selectively to the most requested data in clients. Yu Huang in [16] has presented a new algorithm that provides weak level of consistency maintenance by utilizing hybrid-based approach, in which server will push updates if the most caching nodes need them whereas; caching node will pull updates for a cached data item if it has high probability to be updated on server. In this approach, server uses self-learning method based on a history of the clients-initiated queries.B. ReplacementWhen the cache memory of a caching node is full and client needs to fetch a new data, client node must run replacement algorithm that choose invalid or least accessed data to delete and fetch the new data. Time To Live (TTL) is a counter that indicates how long a cached copy is valid for. When counter value reaches zero, the data will be invalid. In MANET networks, TTL of cached copy of a data may reach zero whereas the original data is still up-to-date and should not be changed on data source. This matter often occurs when TTL value is fixed. Proposed TTL calculation algorithms for MANETs can be categorized into fixed TTL approaches in [6], and adaptive TTL approaches in [7, 8 and 19]. Adaptive TTL provides higher consistency, and can enhance replacement algorithm to be effective [7]. Adaptive TTL is calculated using different calculation forms[7 and 19]. The first mechanis m in [10], calculates adaptive TTL by multiplying the difference between the request time of the data item and its last update time, by a factor.In [19], the server sends back the previous IUI to the requesting node that saves the system defined fudged factor (CF). Next, the adaptive TTL is calculated by multiplying the previous IUI by CF. The mechanisms in [11] have taken advantages from the source IUI history to estimate the next IUI and to use it in calculating adaptive TTL. In the latest approach [12], Caching Node (CN) calculates its own estimation for the inter-update interval on the server, and utilizes it to calculate the adaptive TTL value of the data. In this approach, estimating of the inter-update interval requires saving the Last inter-Update Interval (LUI) and the previous estimated inter-update interval, mobile caching node is responsible for calculating TTL by saving IUIs and LUIs for all data items and it can lead to some power and cache consumption. Adaptive TTL plays a vital role in determining validity of the data and thus, replacement algorithm can accurately choose invalid data items to evict. In [1], one new replacement algorithm is proposed in which the cached copy of a data item that has lowest local cache hit number will be replaced, this algorithm will evict the least frequently used data that has been accessed minimum times during its presence in the cache.C. Pre-fetchingPrefetching is the less attended component of cache management scheme, due to its need to an accurate prediction and because of mobile nodes cannot tolerate high miss prediction penalty. Many prefetching algorithms such as in [12 and 20], have been presented to reduce data access latency in MANET. These algorithms rely on the importance of the data to prefetch the important data items proactively into the cache. The importance of a data item is determined based on some criteria such as access rate and update-to-query rate ratio (Dur/Dqr). Data items that have low Dur/Dqr ratio will be selected as pre-fetched data, if the caching node fetch these pre-fetched data items into the cache before they are actually needed, it will reply to a large number of requests with minimum latencyand it will reduce the number of update requests sent to the data source. In [12], the authors have presented a prefetching algorithm in which CN sets pre-fetched bits for the subset of data items that have high access rate. Then CN sends update request to server every short time namely polling interval (Tpoll), whereas it sends update request for the rest of non-pre-fetched data items every relatively long time cycle interval (N×Tpoll), where N, is a configurable number of polling intervals.III.M OTIVATIONA. The relationship between access rate and update rateAll data items in network can be categorized to: long TTL data like video, PDF and image files and short TTL data such as purse, weather and news files. It is rational that the long TTL data has less update rate whereas; the short TTL data has high update rate, fortunately; there is an extreme proportionality between the access rate from client nodes and update rate on server. In [13], c lients have high probability (75 percent) to access the data in short TTL set and low probability (20 percent) to access the data in the Long TTL set. In addition, it is shown that the data source updates are randomly distributed and compatible with the accessing distribution, in which (65.5 percent) of the updates are applied to the short TTL data subset. In this paper, we will use this Proportionality to give more attention to the set of data that have high access rate and high update rate.IV.E FFICIENT C ACHE M ANAGEMENT M ETHODA. system model and assumptionsIn this method, we assume that the network topology is divided into non-overlapping clusters as shown in Fig.1, The cluster head (CH) is elected by the cluster members based on power level, cache size and mobility. CH is assumed to have less or no mobility, high power level and large cache size as presented in [14 and 18]. CH of any cluster has two catalogue tables namely: the Local Cache Table (LCT) and the Global Cache Table (GCT). The LCT contains details of all cached data items presented in the respective cluster and the GCT contains details of the cached data items presented in the adjacent clusters.B. CH phaseIn our proposed method, the local cache table of any cluster head is divided into two lists as shown in Fig.3, namely: 1). Pre-fetched List Table (CH-PLT), that contains details of the most accessed data items presented in the respective cluster and 2). Non pre-fetched List Table (NLT), that contains details of the non pre-fetched data items presented in the cluster. In cluster based routing protocol, every client node in the cluster must send its request to CH if the requested data is not cached in its local cache. Then, CH can calculate the access rate for every data in its cluster and compare it with a threshold. For any data, if access rate is greater than threshold, then CH sets its pre-fetched bit, adds this data to the CH-PLT list, and sends report to server to add it to the Server Pre-fetched List (SPL).Figure 3. Pre-fetched & Non-pre-fetched lists in CHAll the pre-fetched data items should have adaptive TTL and push-based updating mechanism. The CH-PLT is a catalogue and is a part of the Local Cache Table (LCT) and thus, CH will save a copy of pre-fetched data only in two conditions:1) When the file is requested from CH itself.2) If the requesting node in cluster cannot save it, could not be a caching node.Otherwise, CH saves only details of the data. All of the CH-PLTs have limited and equal size indicates to the number of pre-fetched data items. The size of any CH-PLT is equal to the relative SPL size. If the CH-PLT is full and one new data item exceeds the threshold, it will be replaced with the least accessed data item. Server must be reported by this replacement and evicted data would take place in CH-NLT. Other data items that their access rate is lower than the threshold are added to the second part of the LCT namely CH-non pre-fetched list table CH-NLT. This list should have fixed TTL and pull based updating mechanis m, and its data items have to be sorted according to their access rate for replacement mechanis m purpose. By the way; CH is only responsible for dividing its cluster data items into two lists and its response to out of cluster requests i s normal. As assumed in the later works, CH will reply to out of cluster requests if it has the details of the requested data in its LCT.C. Server phaseServer has to save last ten inter update intervals (IUIs) for every data to be used for estimating of the current IUI.For all data in server pre-fetched list (SPL), server calculates the estimated inter update interval by using theFollowing EQ.1:II II II(tt)=aa∗II II II(tt−10)+2aa∗II II II(tt−9)+ … +10aa∗II II II(tt−1)(1+2+⋯+10)aa (1) This form means that the current IUI has high probability to be closer to the later inter update interval than to the older one. Server calculates the adaptive TTL of any pre-fetched data every time it would be updated using this form {AD-TTL = CF×IUI (t)}, where CF is system fudged factor. By receiving reports from CHs, Server forms many equal SPLs, every list is related to one cluster and indicates to the most accessed data items presented in the cluster. Server must send validation messages frequently for the SPLs. To avoid an unnecessary messages propagation in network, validation reports propagation must be controlled and we suggest the (average time-validation-sending algorithm) to control it. Once, a TTL of any data in any SPL list would be changed, server calculates the TTL average for this list. For example, if SPL(i) has N data items, then the average TTL of SPL(i) is defined as:TTTTTT−AAAAAA(ii)=TTTTTT1+TTTTTT2+⋯+TTTTTTTTNN(2) Server should send validation report to cluster i every TTL-A VG(i) time cycle. There is high probability that the most of pre-fetched data items in list i are requested by the members of cluster i orupdated by server through this {TTL-A VG(i)} time cycle. If a pre-fetched data would be updated on server between two successive time cycles, server sends updated data only to its related cluster head. In non pre-fetched data list, when a data is updated, server sends an invalidation report to all CHs that have thi s data in their non pre-fetched list table (NLT). when this data is requested in a cluster, then the head of this cluster CH reply with A CK message to the server in which, sends updated data with fixed TTL to all related CHs, to keep all cached copies of this data consistent.D. Server algorithmEvery Server Pre-fetched List is related to one cluster and has its own respective timer; all timers are set to zero at the beginning. The server will estimate the adaptive TTL of any pre-fetched data every time it is updated. To update the pre-fetched data lists in all clusters, the server should send validation reports periodically by implementing the flowchart shown in Fig.4. Validation report contains updated data items with adaptive TTL.Figure 4. Flowchart of the pre-fetched data lists updateE. ScenarioFig.5 shows two adjacent clusters, where cluster (2) has seven node members and one CH. Node (1) sends data (d) access request to its CH. The CH has not the details of this data in its pre-fetched and non pre-fetched lists, but it finds data (d) details in its Global Cache Table (GCT). Therefore, it forwards the request to the server according to its adjacent clusters.The server responses to the request by sending the data (d) with fixed TTL (20s) to CH (2). CH (2) adds this data to its non pre-fetched list table, and forwards it to the node (1). If other nodes in cluster (2) request the data (d), CH Responses that this data is cached in node (1) and it is valid for 20s. CH also calculates the access rate of the data (d). When access rate (d) exceeds the threshold, CH sets its pre-fetched bit and adds it to the (CH -PLT). CH also must send the report to the server to add data (d) to its SPL, and then the server has to calculate its adaptive TTL and also recalculate the pre-fetched list TTL-A VG of cluster (2).Figure 5. Forwarding request and forming (CH non pre-fetched list Table)V.S IMULATIONWe have used the network simulator (NS2) to simulate our efficient cache management method by using wireless Standard IEEE 802.11, Network size = 50 nodes in a 1000×1000 m zone, time of simulation = 100 seconds, The number of connections is 30, and we have used the Constant Bit Rate (CBR) traffic model. In these simulations, we have compared the proposed method (ECM) with the existing cluster based method (CBM). As the results shown in fig.6, 7 and 8 infer, the proposed method demonstrates superior performance in terms of the imposed overhead, the packet delivery ratio and the data access latency.Figure 6. The overhead Comparison between the proposedcache management method (ECM) and the cluster basedmethod (CBM)Determining the caching threshold value is very important factor in the performance of the proposed method. This value depends on the average number of cluster members. Any important data is to be requested from the most of cluster members in a short time period. In addition, the threshold value should be adaptive and adjusted to the network traffic characteristics. If the network is going to be congested, the threshold value must be set to a lower value to reduce the number of validation messages propagated in the network. Contrarily,if the network has low traffic, the threshold value must be higher to increase the number of pre-fetched data items in the lists, and then to improve the network performance.Figure 7. Packet delivery ratio comparison between theproposed cache management method (ECM) and clusterbased method (CBM)Figure 8. Latency comparison between the proposed cache management method (ECM) and cluster based method (CBM)VI.C ONCLUSIONSIn this paper, we have focused on the prefetching scheme according to pushing the validation reports to all the pre-fetched data every TTL-A VG time cycle. In addition, we have presented an efficient consistency method that uses hybrid-based consistency maintenance approach to switch between pulling and pushing updates, based on the importance of a data. Replacement scheme can be easily implemented herein which the CH evicts and replaces only data items presented in NLT by giving priority to the less accessed data. Other nodes in cluster can utilize the existing least frequently used (LFU-MIN) replacement algorithm, and can get help from (CH-LCT)-details to determine the less accessed data to delete. Our main goal (increasing cluster and local cache hit ratio) can be obtained by maintaining the most accessed data up to date in cluster. Adaptive TTL can also increase local cache hit ratio. If adaptive TTL would be estimated well, data will be up-to-date for the most possible timein the local cache, and through this time, all requests for this data hit. By increasing cluster-hit ratio, access delay and power consumption will be reduced, and accessing data will be more reliable in case of non-reliable links.R EFERENCES[1]Madhavarao Boddu and K. Suresh Joseph.improving Data Accessibility and Query Delay inCluster based Cooperative Caching (CBCC) inMANET using LFU-MIN. in International Journalof Computer Applications (0975 – 8887) Volume21– No.9, May 2011.[2] D. Barbara and T. Imielinski. Sleepers andWorkaholics: Caching Strategies in Mobile Environments. In MOBIDATA: An Intractive journal of mobile computing. Vol. 1 No.1. 1994. [3]H. Jin, J. Cao and S. Feng. A Selective PushAlgorithm for Cooperative Cache ConsistencyMaintenance over MANETs. In Proc.Third IFIPInt’l Conf, Embedded and Ubiquitous Computing,Dec. 2007.[4]P. Kuppusamy, K. Thirunavukkarasu and B.Kalaavathi. Review of cooperative cachingstrategies in mobile ad hoc networks. inInternational Journal of Computer Applications,2011.[5]L. Yin and Cao. G. Supporting CooperativeCaching in Ad Hoc Networks. In IEEETransactions on Mobile Computing, vol. 5, no.1,pp. 77-89, Jan. 2006.[6]J. Jung, A.W. Berger and H. Balakrishnan.Modeling TTL-based internet caches. in IEEEINFOCOM 2003, San Francisco, CA, March 2003.[7]P. Cao and C. Liu. Maintaining strong cacheconsistency in the World-Wide Web. In IEEETrans. Computers, v. 47, pp. 445–457, 1998.[8]Chankhunthod, P. Danzig, C. Neerdaels, M.Schwartz and K. Worrell. A hierarchical internetobject cache. in USENIX, pp. 13, 1996.[9]Y. Sit, F. Lau and C-L. Wang. On the cooperationof Web clients and proxy caches. in 11th Int’lConf. Parallel and Distributed Systems, pp. 264-270, July 2005.[10]V. Cate. Alex. A Global File system. In USENIX,pp. 1-12, May 1992.[11]D. Li, P. Cao and M. Dahlin. W CIP: Web CacheInvalidation Protocol. in IETF Internet Draft, March 2001.[12]Kassem Fawaz and h.Artail. DCIM: DistributedCache Invalidation Method for maintaining cacheconsistency in wireless mobile networks. in IEEE,2012.[13]N. Sabiyath Fatima and P. Sheik Abdul Khader.Efficient Data Accessibility using TTL BasedCaching Strategy in Manet. in European Journal ofScientific Research ISSN 1450-216X Vol.69 No.1pp.21-32 © Euro Journals Publishing, Inc, 2012. [14]P. Kuppusamy, K. Thirunavukkarasu and B.Kalaavathi. Cluster Based Cooperative Caching Technique in Mobile Ad Hoc Networks. inEuropean Journal of Scientific Research ISSN 1450-216X Vol.69 No.3 (2012), pp. 337-349 ©EuroJournals Publishing, Inc, 2012.[15]Anand Nayyar. Cross-Layer System for ClusterBased Data Access in MANET’S. in Special Issueof International Journal of Computer Science &Informatics (IJCSI), ISSN (PRINT), 2001.[16]Yu Huang . Cooperative cache consistencymaintenance for pervasive internet access. In State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China,2009.[17]Wenzhong Li, Edward Chan, Daoxu Chen andSanglu Lu. Performance analysis of cache consistency strategies for multi-hop wireless networks. Published in Springer Science BusinessMedia, LLC, 8 June 2012 .[18]Kuppusamy and B. Kalaavathi. Cluster BasedData Consistency for Cooperative Caching over Partitionable Mobile Adhoc Network. in A mericanJournal of Applied Sciences 9 (8): 1307-1315,2012.[19]Yuguang Fang, Zygmunt Haas, Ben Liang and Yi-Bing Lin. TTL Prediction Schemes and the Effectsof Inter-Update Time Distribution on Wireless Data Access. in Department of Electrical and Computer Engineering University of Florida, 2004.[20]Mieso K. Denko, University of Guelph, Canada.Cooperative Data Caching and Prefetching in Wireless Ad Hoc Networks. In Idea Group Inc.Volume 3, Issue 1 edited by Jairo Gutierrez © 2007.AbdulAziz Zam, received his B.Sc.from Isfahan university, Isfahan, Iranin 2009. He started Master’s degree in2010 in the same university. Currentlyhe is completing his thesis at computerdepartment, faculty of engineering. Hisresearch interests are wireless networks and telecommunications.N.Movahedinia, received his B.Sc. from Tehran University, Tehran, Iran in 1987, and his M.Sc. from Isfahan University of Technology, Isfahan, Iran in 1990 in Electrical and Communication Engineering. He got his PhD. degree from Carleton University, Ottawa, Canada in 1997, where he was a research associate at System and Computer Engineering Department, Carleton University for a short period after graduation. Currently he is an associate professor at the Computer Department, University of Isfahan. His research interests are wireless networks, signal processing in communications and Internet Technology.。