An Effective Technique to Reduce Cloud Platform Energy Consumption (IJMECS-V10-N2-7)
- Format: PDF
- Size: 796.75 KB
- Pages: 9
Technology highlight: AI-driven Data Center Efficiency Optimization is a set of artificial-intelligence techniques for improving the energy efficiency of data centers.
It collects physical parameters from the data center, such as heat output and noise levels, analyses the data with AI algorithms and big-data techniques, and then, supported by computer vision and machine learning, uses intelligent control to drive mechanical components and adjust the operating state of cooling equipment, thereby optimizing the data center's energy efficiency.
The advantage of this technology is that it helps enterprises cut data center energy use, reduce environmental pollution, improve the working environment, raise equipment reliability and quality, and lower operating costs.
First, applying the technology improves the cooling systems of mechanical equipment and the efficiency of thermal management, reducing the heat emitted by machinery, lowering machine-room temperature, and cutting dust and noise.
It can also improve energy-saving devices, achieving electricity savings and reducing overall energy consumption.
In addition, it enables environmental sensing, cabinet management and status monitoring, conserving machine-room resources to the greatest extent, reducing energy consumption and improving the working environment.
Finally, the technology raises equipment reliability and quality: intelligent data analysis can effectively identify fault states in mechanical equipment.
Energy-saving strategies for green computing in the cloud mean adopting a range of energy-saving techniques and measures in a cloud environment to lower the energy consumption of computing resources, serving the goals of environmental protection and resource conservation.
Some concrete strategies:
1. Optimize resource allocation: allocate and schedule computing resources according to actual demand, avoiding waste and idle capacity.
This can be achieved through dynamic resource adjustment, load balancing and similar techniques.
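The dynamic allocation and load balancing just described can be sketched as a tiny scheduler that places each task on the least-loaded server and powers down hosts that go idle. This is an illustrative toy model, not a real cloud API; the `Server` class, the wake/sleep rules and the thresholds are all assumptions.

```python
# Toy model: dispatch tasks to the least-loaded server; power down
# servers that have gone idle. All names and rules are illustrative.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: int                       # max concurrent tasks
    tasks: list = field(default_factory=list)
    powered_on: bool = True

    @property
    def load(self) -> float:
        return len(self.tasks) / self.capacity

def dispatch(servers, task):
    """Place the task on the powered-on server with the lowest load,
    waking a sleeping server only when everything else is saturated."""
    active = [s for s in servers if s.powered_on]
    target = min(active, key=lambda s: s.load)
    if target.load >= 1.0:              # all active servers are full
        sleeping = [s for s in servers if not s.powered_on]
        if sleeping:
            target = sleeping[0]
            target.powered_on = True    # wake one standby server
    target.tasks.append(task)
    return target

def consolidate(servers, idle_threshold=0.0):
    """Power off servers whose load has fallen to the threshold."""
    active = [s for s in servers if s.powered_on]
    for s in active[1:]:                # always keep one server on
        if s.load <= idle_threshold:
            s.powered_on = False

servers = [Server("s1", 4), Server("s2", 4), Server("s3", 4)]
for i in range(3):
    dispatch(servers, f"task-{i}")      # spreads one task per server
servers[2].tasks.clear()                # s3's task has finished
consolidate(servers)                    # idle s3 is powered down
print([s.powered_on for s in servers])  # -> [True, True, False]
```

The same skeleton extends naturally to CPU/memory-weighted load instead of a simple task count.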
2. Choose energy-efficient equipment: prefer devices with energy-saving features, such as low-power processors and high-efficiency power supplies.
Maintain and upgrade the equipment regularly to ensure normal operation and sustained savings.
3. Virtualization: virtualization is one of the core cloud technologies; by consolidating physical resources it achieves efficient utilization and lower energy use and emissions.
Dynamic resource allocation, memory optimization and similar measures can improve the savings further.
4. Green data centers: the data center is the core facility of cloud computing and should use green building and energy-saving design, such as high-efficiency cooling systems and intelligent lighting, to lower its energy consumption.
Regular maintenance and upgrades keep it running normally and preserve the savings.
5. Green software: software developed with energy efficiency in mind.
Optimized algorithms and reduced resource footprints lower the software's consumption of computing resources.
Users should also be encouraged to adopt green software to reduce overall energy use.
6. Energy management platform: build a platform that monitors and evaluates the energy consumption of the cloud environment in real time, detecting and resolving problems promptly.
Data analysis and related technical means make effective energy management and saving possible.
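The monitoring idea in point 6 can be illustrated with the standard PUE metric (Power Usage Effectiveness = total facility energy / IT equipment energy). The reading format and the 1.6 alert threshold below are assumptions for the sketch, not a real monitoring API.

```python
# Minimal energy-monitoring check: compute PUE from metered readings
# and flag hours whose PUE exceeds an alert threshold.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (>= 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

def efficiency_report(readings, alert_threshold=1.6):
    """readings: list of (hour, total_kwh, it_kwh) tuples.
    Returns (hour, pue) pairs that exceed the threshold."""
    alerts = []
    for hour, total, it in readings:
        value = pue(total, it)
        if value > alert_threshold:
            alerts.append((hour, round(value, 2)))
    return alerts

readings = [(0, 150.0, 100.0), (1, 180.0, 100.0), (2, 140.0, 100.0)]
print(efficiency_report(readings))   # -> [(1, 1.8)]
```

A real platform would feed such checks from power meters and trigger cooling or consolidation actions; the calculation itself stays this simple.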
7. Green supply chain: in a cloud environment, pay attention to the environmental and energy-saving aspects of the supply chain.
Choose suppliers with an environmental and energy-saving track record, and ensure purchased equipment and services meet the requirements.
Work with suppliers on joint environmental and energy-saving measures to advance green computing together.
8. Training and outreach: strengthen staff training to raise energy-saving awareness and skills and foster a green-computing culture.
At the same time, use publicity and education to raise public recognition and support for green computing and promote its adoption.
In summary, the energy-saving strategies for green computing in the cloud cover resource-allocation optimization, energy-efficient equipment, virtualization, green data centers, green software, energy management platforms, green supply chains, and training and outreach.
Cloud Data Center Fundamentals Exam

I. Single choice (5 points each, 25 points)
1. Which of the following is NOT a characteristic of a cloud data center? ( )
   A. Virtualization  B. High reliability  C. Low scalability  D. Automated management
2. In a cloud data center, the device typically used to store large volumes of data is ( )
   A. Server  B. Storage array  C. Router  D. Firewall
3. The network architecture of a cloud data center typically uses ( )
   A. Two-tier architecture  B. Three-tier architecture  C. Four-tier architecture  D. Five-tier architecture
4. Which technology enables dynamic resource allocation in a cloud data center? ( )
   A. Virtualization  B. Big data  C. Artificial intelligence  D. Blockchain
5. The energy consumption of a cloud data center mainly comes from ( )
   A. Servers  B. Network equipment  C. Cooling equipment  D. All of the above

II. Multiple choice (7 points each, 35 points)
1. Key technologies of cloud data centers include ( )
   A. Virtualization  B. Distributed storage  C. Automated deployment  D. Green energy saving
2. Which of the following are cloud data center service models? ( )
   A. IaaS  B. PaaS  C. SaaS  D. DaaS
3. Security measures for cloud data centers include ( )
   A. Firewalls  B. Intrusion detection  C. Data encryption  D. Access control
4. Factors affecting cloud data center performance include ( )
   A. Network bandwidth  B. Storage performance  C. Server configuration  D. Application software optimization
5. Operations and maintenance of a cloud data center include ( )
   A. Monitoring  B. Fault handling  C. Capacity planning  D. Performance optimization

III. Short answer (10 points each, 20 points)
1. Briefly describe the differences between cloud data centers and traditional data centers.
Compared with traditional data centers, cloud data centers differ in several significant respects.
First, in architecture: traditional data centers are usually built on physical servers and dedicated network equipment, whereas cloud data centers make heavy use of virtualization, pooling compute, storage and network resources for flexible allocation and efficient utilization.
In scalability: expanding a traditional data center is often difficult, requiring advance planning and the purchase of large amounts of hardware, while a cloud data center can scale resources quickly and elastically with business demand, with no large-scale hardware procurement or deployment.
In management: cloud data centers use automated management tools for automatic resource provisioning, monitoring and optimization, greatly reducing operating cost and management complexity; traditional data centers rely more on manual operation, which is slower and more error-prone.
Data Center Energy-Saving Solution

Contents
I. Preface
  1.1 Background
  1.2 Purpose of the solution
II. Analysis of current data center energy consumption
  2.1 Composition of data center energy consumption
  2.2 Current energy efficiency
  2.3 Energy-saving potential
III. Data center energy-saving solution
  3.1 Air-conditioning system retrofits
    3.1.1 High-efficiency air-conditioning equipment
    3.1.2 Intelligent air-conditioning control systems
  3.2 Power-supply system retrofits
    3.2.1 High-efficiency UPS
    3.2.2 Low-voltage DC power supply
  3.3 Server and storage optimization
    3.3.1 Low-power servers
    3.3.2 Storage energy-saving techniques
  3.4 Network and communications strategies
    3.4.1 Low-power network equipment
    3.4.2 Optimizing in-center optical-fibre communication
  3.5 Video surveillance and lighting retrofits
    3.5.1 Intelligent video surveillance
    3.5.2 Energy-saving lighting
IV. Implementation steps and suggestions
  4.1 Implementation steps
  4.2 Project suggestions
V. Expected results and evaluation methods
  5.1 Expected results
  5.2 Evaluation methods
VI. Conclusion
  6.1 The importance of energy saving
  6.2 Continuous improvement and innovation

I. Preface

With the rapid development of information technology, data centers, as critical infrastructure supporting all kinds of business, face ever-rising energy consumption and operating costs, which has become a major challenge for society.
Against this background, achieving energy savings in data centers bears not only on corporate profitability but also on social responsibility and sustainable development.
This document sets out a practical data center energy-saving solution to push the industry in a green, low-carbon, efficient direction.
Through a series of strategies and technical measures, we aim to lower data center energy consumption and improve energy efficiency, contributing to the response to the global energy crisis.

1.1 Background

With the rapid development of cloud computing, big data and the Internet of Things, data centers, as the core infrastructure of information systems, face continually rising construction and operating costs.
Patent title: Energy consumption management method, apparatus, device and medium for a cloud platform
Patent type: Invention patent
Inventor: Su Zhengwei
Application no.: CN201911121557.X
Filing date: 2019-11-15
Publication no.: CN110989824A
Publication date: 2020-04-10
Patent content provided by the Intellectual Property Publishing House
Abstract: This application discloses an energy consumption management method for a cloud platform: allocate the physical hosts of the target cloud platform between a running resource pool and a standby resource pool; judge whether the business load on the target platform is below a preset threshold; if so, use DRS to consolidate the business load onto the hosts of the running pool and move the hosts of the running pool that carry no load into the standby pool; update the standby pool to obtain a first updated pool; count the physical hosts in the first updated pool to obtain a first updated value and judge whether it exceeds a preset maximum; if so, select a first number of hosts from the first updated pool and switch them to power-saving mode.
The method reduces the running energy consumption of the target cloud platform without affecting the service life of its physical hosts.
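The pool-management logic in the abstract above can be sketched in a few lines. This is a simplified illustration, not the patented implementation: the host dictionary, the 0.3 load threshold, the standby cap of 2, and the consolidation rule are all made up, and migration itself (e.g. via DRS) is left out.

```python
# Sketch of the pool-management idea: when load is low, consolidate
# workloads onto fewer running hosts, move idle hosts to the standby
# pool, and put surplus standby hosts into power-saving mode.

def manage_pools(hosts, load_threshold=0.3, standby_max=2):
    """hosts: dict name -> {'pool': 'running'|'standby',
    'load': float, 'power_save': bool}. Returns the updated dict."""
    running = {n: h for n, h in hosts.items() if h["pool"] == "running"}
    total_load = sum(h["load"] for h in running.values())
    avg_load = total_load / len(running) if running else 0.0

    if running and avg_load < load_threshold:
        # Consolidate: keep the most loaded host, migrate the rest away
        # (the migration mechanism itself is outside this sketch).
        keep = max(running, key=lambda n: running[n]["load"])
        for name, h in running.items():
            if name != keep:
                h["pool"], h["load"] = "standby", 0.0
        hosts[keep]["load"] = total_load   # consolidated load lands here

    # If the standby pool exceeds its cap, sleep the surplus hosts.
    standby = [n for n, h in hosts.items() if h["pool"] == "standby"]
    for name in standby[standby_max:]:
        hosts[name]["power_save"] = True
    return hosts

hosts = {
    "h1": {"pool": "running", "load": 0.2, "power_save": False},
    "h2": {"pool": "running", "load": 0.1, "power_save": False},
    "h3": {"pool": "running", "load": 0.1, "power_save": False},
    "h4": {"pool": "standby", "load": 0.0, "power_save": False},
}
manage_pools(hosts)
# h1 keeps the consolidated load; h2/h3 go to standby; the surplus
# standby host h4 enters power-saving mode.
```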
Applicant: Beijing Inspur Data Technology Co., Ltd.
Address: Floor 5, Building C, No. 2 Shangdi Information Road, Haidian District, Beijing 100085
Country: CN
Agency: 北京集佳知识产权代理有限公司
Agent: Chen Li
Patent title: Power-saving NB-IoT chip, device, and NB-IoT power-saving method
Patent type: Invention patent
Inventors: Liu Yu, Fan Xiaojun, Wang Hongyong, Yi Haike, Peng Xiaosong, Li Maogang
Application no.: CN202010112365.9
Filing date: 2020-02-24
Publication no.: CN111372304A
Publication date: 2020-07-03
Abstract: The invention relates to NB-IoT network technology, specifically a power-saving NB-IoT chip, device and NB-IoT power-saving method. The device comprises a multi-core processor together with a protocol stack and a physical layer running on the processor, characterized in that the protocol stack and the physical layer run on different cores.
The power-saving method, based on this device, works as follows: for data-processing tasks that would involve both the physical layer and the protocol stack, the physical layer first processes and analyses the received data on its own and judges whether it needs to be handed to the protocol stack; if so, it wakes the core running the protocol stack and submits the data; if not, the protocol stack is left asleep.
Such data-processing tasks include one or more of paging processing, measurement/reselection processing, and hybrid-retransmission processing.
The power-saving NB-IoT chip, device and method of this application reduce the power consumption of NB-IoT devices and extend battery life.
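The wake-decision described in the abstract above can be sketched as a filter that the physical layer runs before waking the protocol-stack core. The message fields, the UE identity and the decision rules below are invented for illustration; they are not the patented signalling format.

```python
# Illustrative sketch: the physical layer inspects incoming data on
# its own core and wakes the protocol-stack core only when the data
# actually needs stack processing. All fields are made-up examples.

DEVICE_ID = "ue-1234"       # hypothetical identity of this device

def phy_needs_stack(message: dict) -> bool:
    """Return True when the protocol-stack core must be woken."""
    kind = message.get("kind")
    if kind == "paging":
        # Wake only if the paging record addresses this device.
        return DEVICE_ID in message.get("paged_ids", [])
    if kind == "measurement":
        # Reselection matters only if a neighbour beats the serving cell.
        return message["neighbour_rsrp"] > message["serving_rsrp"]
    if kind == "harq":
        # Retransmissions stay in PHY unless decoding succeeded.
        return message.get("decoded", False)
    return True             # unknown traffic: be safe, wake the stack

msgs = [
    {"kind": "paging", "paged_ids": ["ue-9"]},               # not for us
    {"kind": "measurement", "neighbour_rsrp": -95, "serving_rsrp": -90},
    {"kind": "paging", "paged_ids": ["ue-1234"]},            # for us
]
wakeups = [phy_needs_stack(m) for m in msgs]
print(wakeups)   # -> [False, False, True]
```

Only the last message wakes the stack core; the first two are filtered entirely within the physical layer, which is where the power saving comes from.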
Applicants: 重庆物奇科技有限公司, 上海物麒科技有限公司
Address: Floor 14, No. 107 Shujugu Middle Road, Xiantao Subdistrict, Yubei District, Chongqing 400000
Country: CN
Agency: 重庆强大凯创专利代理事务所(普通合伙)
Agent: Ran Jianxia
Patent title: Method and apparatus for reducing energy consumption of a distributed storage system
Patent type: Invention patent
Inventors: Bao Qingping, Jin Zhencheng
Application no.: CN201811570314.X
Filing date: 2018-12-21
Publication no.: CN109814804A
Publication date: 2019-05-28
Abstract: The invention provides a method and apparatus for reducing the energy consumption of a distributed storage system. A group of SSDs is allocated to the system, and a distributed SSD storage pool is created with these SSDs as its storage resource; this SSD pool is configured as the cache of the distributed hard-disk storage pool built from all hard disks in the system. When read/write traffic to the hard-disk pool satisfies the low-energy condition, all hard disks are put to sleep, so that reads and writes directed at the hard-disk pool become reads and writes against the SSD pool. Because SSDs consume little power, this effectively reduces energy consumption without affecting running services.
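The tiering idea in the abstract above can be sketched as a two-level store. The cache policy, the "no misses since the last check" sleep condition, and all class/method names are illustrative assumptions, not the patented design.

```python
# Sketch: serve reads/writes from an SSD cache pool; once the HDD
# pool sees no traffic for an interval, spin the hard disks down.

class TieredStore:
    def __init__(self):
        self.ssd = {}                  # SSD pool acting as cache
        self.hdd = {}                  # backing HDD pool
        self.hdd_asleep = False
        self.misses_since_check = 0

    def write(self, key, value):
        self.ssd[key] = value          # absorb writes on SSD; flush later

    def read(self, key):
        if key in self.ssd:
            return self.ssd[key]       # served entirely from SSD
        # Cache miss: the HDDs must be spinning to serve it.
        self.hdd_asleep = False
        self.misses_since_check += 1
        value = self.hdd.get(key)
        if value is not None:
            self.ssd[key] = value      # promote into the SSD pool
        return value

    def maybe_sleep_hdds(self):
        """Low-energy condition: no HDD reads since the last check."""
        if self.misses_since_check == 0:
            self.hdd_asleep = True
        self.misses_since_check = 0

store = TieredStore()
store.hdd["a"] = "cold-data"
store.read("a")              # miss -> HDDs stay awake, "a" promoted
store.maybe_sleep_hdds()     # one miss this interval: not yet
store.read("a")              # now served from the SSD pool
store.maybe_sleep_hdds()     # no misses this interval: HDDs sleep
print(store.hdd_asleep)      # -> True
```

Once the working set lives on the SSD pool, the HDDs can stay asleep indefinitely, which is where the energy saving accumulates.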
Applicant: 创新科存储技术(深圳)有限公司
Address: Rooms 501 and 502, Building 9, Shenzhen Software Park, Keji Middle 2nd Road, Nanshan District, Shenzhen, Guangdong 518057
Country: CN
Agency: 北京德琦知识产权代理有限公司
Patent title: Cloud computing server energy-saving system
Patent type: Invention patent
Inventor: Dong Xiumei
Application no.: CN201810803324.7
Filing date: 2018-07-20
Publication no.: CN109218392A
Publication date: 2019-01-15
Abstract: The invention relates to cloud computing server technology and discloses a cloud computing server energy-saving system covering server hardware optimization, server software optimization, high-voltage DC power supply, centrally designed server clusters, and auxiliary cooling equipment. The hardware optimization first uses customized hardware: customization removes from a standard server the components not needed for data computation and substitutes efficient devices for older, traditional ones; second, a balanced design matches the specification of the components inside each server to different computational workloads. The software optimization includes low-power caching, network-structure optimization, resource pooling, single-node multi-tasking, and NodeManager techniques. The high-voltage DC supply eliminates two conversion stages.
Through this dual optimization of hardware and software, the system effectively reduces the energy consumption of cloud computing servers.
Applicant: 徐州海派科技有限公司
Address: Rooms 101, 129 and 131, Building A, Software Park, National University Science Park of China University of Mining and Technology, Jiefang South Road, Xuzhou, Jiangsu 221000
Country: CN
Agency: 北京盛凡智荣知识产权代理有限公司
Agent: Liang Yongchang
On Energy Consumption Management of Virtualized Cloud Computing Platforms
Xu Shan
[Journal] 《电子制作》
[Year (volume), issue] 2015(000)011
[Abstract] In recent years cloud computing has been gathering momentum and shows the signs of a future trend.
Virtualized cloud computing platforms have many strengths, such as flexibility and ease of use, strong processing power, high utilization, and powerful scalability.
But they have one major drawback: very high energy consumption, which follows from the basic nature of a virtualized cloud platform; since reducing this consumption is difficult, careful energy management is all the more important.
Starting from the sources of energy consumption in virtualized cloud platforms, this paper briefly analyses management measures.
[Pages] 1 (P171-171)
[Author] Xu Shan
[Affiliation] Zhongshan Institute, University of Electronic Science and Technology of China, 528400
[Language] Chinese
[Related literature]
1. A Discussion on Energy Management of Virtualized Cloud Computing Platforms [J], Dang Hong'en; Zhao Erping; Luo Weiqun
2. Research on Energy Management of Virtualized Cloud Computing Platforms [J], Zhu Jiaxuan
3. Energy Management of Virtualized Cloud Computing Platforms [J], Chen Xuxu; Chai Gonghao
4. Energy Management of Virtualized Cloud Computing Platforms [J], Zhang Ying
5. A Brief Talk on Energy Management of Virtualized Cloud Computing Platforms [J], Jin Qiu
I.J. Modern Education and Computer Science, 2018, 2, 54-62
Published Online February 2018 in MECS. DOI: 10.5815/ijmecs.2018.02.07

An Effective Technique to Decline Energy Expenditure in Cloud Platforms

Anureet A. Kaur
Chandigarh Engineering College, Landran / Deptt. of Information Technology, Morinda, 140101, India
Email: anu.shergill43@

Dr. Bikrampal B. Kaur
Chandigarh Engineering College, Landran / Deptt. of Computer Science, Mohali, India
Email: tech.bpk@

Received: 08 July 2017; Accepted: 08 August 2017; Published: 08 February 2018

Abstract—Cloud computing is a rapidly growing technology in the IT world. A vital aim of the cloud is to provide services and resources where they are needed. From the user's perspective, computing resources appear limitless, so the client need not worry about how many servers are positioned at any one site; it is the responsibility of the cloud service provider to hold a large number of resources. In cloud data centers, a huge amount of power is consumed by the computing devices, so energy conservation is a major concern in cloud computing systems. Over the last several years a number of techniques have been proposed to mitigate this problem, but the expected results have not been achieved. The proposed research work develops a technique called Enhanced-ACO that makes better offloading decisions among virtual machines while also considering reliability and proper utilization of resources, using the ACO algorithm to balance load and energy consumption in the cloud environment. The proposed technique also minimizes the energy consumption and cost of the computing resources used by processes executing in the cloud. Earliest finish time and fault tolerance are evaluated to achieve the objectives of the work. The experimental outcomes show that the proposed model outperforms the existing one.
Meanwhile, an energy-aware scheduling approach based on the ant colony optimization method is a promising way to accomplish that objective.

Index Terms—Task Scheduling, Energy, Cost, ACO, Cloud Simulator.

I. INTRODUCTION

Cloud computing is a very rapidly growing technology today. It lets users access various cloud services, data storage and computational resources with low data overhead, which has attracted many users to the cloud. It allows users to draw on data resources from remote locations whenever they need them. Cloud computing is composed of certain elements: the cloud clients, the main data center and dispersed servers.

A. Clients

The cloud clients in a cloud computing framework are commonly the computers or machines that simply sit on the desk, although they can also be laptops, tablets or mobile phones, all significant participants in the cloud as a result of their mobility. Cloud clients are the devices through which users communicate with the cloud to maintain their data securely on the platform.

B. The Data Center

The data center is the collection of server machines that hosts the application you subscribe to; it may be a large room from which the web services are delivered. A spreading tendency in the information technology world is to virtualize server machines, so that software can be delivered by allowing multiple instances of virtual servers to be adopted; with this approach, half a dozen virtual server machines can run on the same physical servers [4].

C. Dispersed Servers

These servers give the service provider much more flexibility in options and security. For example, Amazon has its cloud solution hosted on server machines all over the world.
So if something goes wrong at one site, causing a breakdown or error, the service can still be accessed from another site [4].

D. Cloud Computing Service Models

The cloud has a broad collection of services, and organizations and users can choose among them whenever they want, based on how they intend to use them. Cloud computing offers three types of service: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), also known as the cloud service models [5]. These three service models are described below.

Fig.1. Cloud Computing Service Models

Over the last several years, the growth of the cloud has also multiplied the number of cloud data centers at unrivalled speed, and in the meantime the energy expenditure of those data centers has kept increasing at higher rates. In cloud data centers a huge amount of power is consumed by the computing devices. Energy conservation is a major concern in cloud computing systems because it brings several important benefits: it diminishes operating costs, improves system accuracy, improves resource utilization and preserves the environment. Therefore, the focus of cloud resource fulfilment and task scheduling has switched from performance to power efficiency.

Fig.2. Energy Consumption in Cloud

The other issue in the cloud is load imbalance, a critical problem that has become a big hurdle to the speedy development of cloud computing: users from distinct regions demand services at high frequency, every single second, from the cloud service provider [5]. Meanwhile, an energy-aware scheduling approach based on the ant colony optimization method is a promising way to accomplish that objective [6].
M. Dorigo and his colleagues popularized ACO, based on the exploration strategy by which ants find the best, shortest pathway from their nest to food, in the 1990s. Dorigo was the first to study ant behaviour for computing algorithm design, and he further developed the ant colony optimization method for finding excellent solutions. When ants travel from their nest to food, they have the capability to discover a best path between the two. As they move along the way, they drop a chemical substance called pheromone on the pathway. Pathways are selected incidentally by an ant at the beginning; when a later ant encounters a previously marked route, it can detect the pathway and decide, with higher probability, to pursue it. A richer pheromone trail guides an ant to select that route, and most ants are more attracted because of the higher pheromone; hence the trail is reinforced with their own pheromone. The chance that an ant discovers the best solution among various distinct combinations of solutions is proportional to the concentration of pheromone on a pathway [11]. The ant colony algorithm is thus a swarm-intelligence algorithm inspired by the behaviour of real ant colonies: the ants' collaboration in finding food and doing other tasks is adapted to reproduce ant-colony behaviour in computing environments. The ants deposit chemical pheromone for devising and following paths while moving from the nest to the food source. As the number of ants on a single path rises, the strength of the pheromone on that path increases, and the colony selects the shortest path on the basis of this pheromone amount.
The ant colony optimization method has been applied to the travelling salesman problem and other problems where a shortest path is needed. In the proposed work, Enhanced-ACO is used to balance load and minimize energy consumption in the cloud at the least possible cost, with VM load and failure rate assigned as the main parameters for the scheduling decisions. Both parameters are used for task scheduling over the given cloud resources; the virtual machine load is the parameter that defines the overall utilization of the resources of a given virtual machine.

II. RELATED WORK

L.D. Dhinesh, P. Venkata et al. (2013) proposed an algorithm modelled on honey bee behaviour to balance load in the cloud environment. The model works on the basis of the priorities of the tasks on the VMs, to minimize task waiting time. Tasks are removed from overloaded VMs, which act as honey bees, and are submitted to underloaded VMs. Whenever the cloud service provider receives a high-priority task, it must submit it to the VM that has the minimum number of tasks awaiting execution. All VMs are sorted in ascending order, and the tasks removed from the overloaded VMs are submitted to the underloaded ones. The current load of all VMs can be calculated from the information received from the cloud data center [15]. S.K. Dhurandher et al. (2014) introduced a distributed cluster-based method that accomplishes effective load balancing in the cloud.
The method supports heterogeneity and scalability, has a low network bottleneck and avoids congested links because of its distributed nature, and is designed for multiple masters and multiple servers. In the proposed architecture, nodes are arranged as a cluster of sub-clusters: the network is partitioned into clusters, each node of the network belongs to exactly one cluster, and a host is the computation component of the network. The method is aimed at large-scale cloud data centres, and the CloudSim toolkit is used as the simulation framework to evaluate the proposed heuristic. The analysis covers several criteria: load circulation across the data centres in the cloud, the number of processes executed, and the average turnaround time for completion of jobs [24]. V.S. Kushwah, S.K. Goyal, P. Narwariya et al. (2014) observed that failure in the cloud fabric is a typical concern while balancing load, and various fault-tolerance methods are used to overcome such failures. The cloud opens a brand-new vista for resource usage; fault tolerance is therefore a concern that must be dealt with while providing QoS in a cloud, thereby strengthening the achievement [24]. Nidhi Bansal et al. (2015) worked on cost-performance optimization based on QoS-driven task scheduling in cloud computing environments. The authors focused on the cost of the computing resources (virtual machines) when scheduling a given pool of tasks over the cloud computing model; the cost optimization was performed over the QoS-driven task-scheduling mechanism, which had not previously addressed the cost-optimization problem.
They showed that earlier QoS-driven task-scheduling studies considered makespan, latency and load balancing, whereas their QoS-based cost evaluation model evaluates the computing cost of resources for scheduling, with the other parameters at secondary precedence, and they demonstrated the efficiency of the proposed model in terms of cost and resource utilization [11]. S.K. Goyal, M. Singh et al. (2012) addressed grid computing, which involves the joint and coordinated use of geographically distributed resources for purposes such as large-scale computation and distributed data analysis. To meet client expectations of execution and efficiency, the grid system needs an appropriate and effective load-balancing method to distribute the workload evenly across every computing node in the network. To resolve the load-balancing issue in a computational framework, the authors applied the ant colony optimization method, in which the pheromone corresponds to the resources required for execution rather than to a path; the minimum and maximum pheromone reflect the workload and depend on process status at the resources. The aim of the technique is to assign processes to resources so as to balance the workload, resulting in increased resource usage [22]. N. Kumar, R. Rastogi et al. (2012) worked on ACO for balancing load across nodes. It is well suited to effective load balancing because, whereas in earlier work an ant could move in only one direction, in this algorithm an ant can move both forward (searching for food) and backward (returning to the nest). This helps balance the nodes quickly.
The algorithm works fast because an ant can pass in both directions simultaneously to balance the nodes, and it also yields superior resource usage. It was evaluated on a limited set of parameters such as speed, network overhead and fault-tolerance strength [25]. R. Mishra et al. (2012) introduced a solution for load balancing in cloud platforms. The workload can be CPU capacity, memory size or network load. Load balancing is the methodology of spreading the workload across the various links of a system to improve both resource usage and task response time, while avoiding a condition in which some nodes are heavily loaded and others are idle; it ensures that each node in the network does roughly the same volume of work in any interval of time. Various techniques have appeared to address this issue, such as PSO, mixed hash techniques, GA and other organization-based methods. The authors proposed a technique relying on ant colony optimization to solve the load-balancing issue in the cloud environment, but the method does not deal with fault-tolerance concerns [26].

III. EXPERIMENTAL DESIGN

As per the literature survey, load balancing and energy expenditure are crucial issues in the cloud environment, and researchers are giving them more consideration so that resources can be used properly and cloud performance improved at low energy consumption. In most real-time cloud applications, the processing of computing nodes is done remotely; hence there is an increased need for fault tolerance to attain reliability on cloud infrastructure. Researchers have proposed various techniques based on genetic algorithms, ACO, etc., for load-balancing strategies in the cloud environment.
In the proposed research work, a technique called Enhanced-ACO is developed to achieve better offloading decisions among virtual machines; reliability and proper utilization of resources are also considered, and the ACO algorithm is used to balance load and energy consumption in the cloud environment. The algorithm fully depends on the history of the pheromone value to take further judgements towards optimal solutions for any computational requirement. The work proposes using artificial ants, with a state-transition rule, to select optimal resources across grid or cloud environments; the artificial ants are used for cloud scheduling and shortest-path identification. The ant colony system adopts the random-proportional rule, the state-transition rule of the ant system, which works on the basis of a probability, or chance, of choosing the optimal resource out of k resources for a task assignment in the cloud. The pheromone value of a resource depends on the number of available resources, the processing cost and the estimated time. The proposed model aims to solve the scheduling problem in cloud platforms using the Cloud simulator. Process sequencing is a scheme of placing the runtime processes in memory in the best placement or sequence, so as to minimize the total tasking and communication overhead in terms of time and load respectively. The proposed technique also minimizes the energy consumption and cost of the computing resources used by processes executing in the cloud. Earliest finish time and fault tolerance are evaluated to achieve the objectives of the work, and the experimental outcomes show the better achievement of the proposed model in comparison with the existing one. This system is fully based upon J.L.
Deneubourg's model of Argentine ants, in which the ants lay a pheromone trail on both the outward and the return path between the nest and the food source. In the proposed system the outward path is examined using VM availability and VM load, whereas the return path continuously updates the pheromone amount with each task assignment. This mechanism keeps the current data in memory, which leads to correct decisions in optimal path planning for task scheduling over cloud environments. Fig. 3 shows the two junctions; the mathematical model is given in equations (1) and (2).

Fig.3. Mathematical formula for the ACO algorithm

Prob_A = (A_i + k)^n / ((A_i + k)^n + (B_i + k)^n),  Prob_A + Prob_B = 1   (1)

A_{i+1} = A_i + δ,  B_{i+1} = B_i + (1 − δ),  A_i + B_i = i   (2)

where δ is a stochastic binary variable taking the value 1 or 0 according to the probability levels Prob_A and Prob_B. A_i is the pheromone on the first path, towards the food source, incremented with the entry of every new ant on that path, and B_i is incremented by 1 whenever an ant chooses path B. The probabilities rise over time, reflecting the pheromone amounts.

Algorithm: Enhanced ACO

1. Initialize the input parameters of the ant colony optimization method.
2. Check the VMs available in the cloud computing environment.
3. Load the available VM list into runtime memory together with the RAM and CPU details.
4. Update the available resource list with the VM list prepared in step 3:

   VM = {V_1, V_2, V_3, V_4, ..., V_n}   (3)

   where VM is the virtual machine list and V_1 to V_n are the virtual machine IDs.
5. Calculate the resource capacity of each resource:

   VM_R = {VM_1, VM_2, VM_3, VM_4, ..., VM_n}   (4)

6. Start a loop over every resource: read and load the available resources on each VM and update the resource list with the current values:

   VM_E = Σ_{i=1}^{N} VM_i   (5)

   where VM_E gives the resource availability after calculating the resource load, and

   L = VCPU_u / VCPU_T   (6)

   where L is the overall resource load on the particular VM, and VCPU_u and VCPU_T are the currently used and the total available resources respectively.
7. End the loop started in step 6.
8. Load the task list into runtime memory:

   T = {t_1, t_2, t_3, t_4, ..., t_n}   (7)

   where T is the task vector and t_1 to t_n are the individual tasks.
9. Calculate the length of each task in the form of its earliest finish time:

   t_c = Eft(t_i) − Est(t_i)   (8)

   where t_c gives the overall time length of task t_i, obtained by subtracting the estimated start time from the estimated finish time.
10. Subdivide each task j and create the sub-task list, indexing each sub-task with i.
11. Initialize the scheduling iterations over the number of tasks or sub-tasks i.
12. Calculate the probability value of each available VM.
13. Compare the task length with the probability value and the resource usage on all available VMs.
14. Calculate the VM load in the form of a resource-usage percentage:

    A_j = P_j × TR_i   (9)

    where A_j depicts the availability of the VMs.
15. Calculate the failure rate of all VMs to determine the trustworthy VMs for the current task.
16. Shortlist the available VMs that can process the given task i.
17. Assign task i to the resource with the minimum response time and the highest probability.
18. Update the resource list with the current VM load values.
19. Update the pheromone value of each available resource.
20. End the iteration and process the tasks.
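The Enhanced-ACO selection loop above can be condensed into a short sketch: each task goes to a VM chosen by the random-proportional rule, weighted by pheromone, current load and failure rate, and the chosen VM's pheromone trail is then reinforced. The weights, the evaporation/deposit constants and the desirability formula below are illustrative assumptions, not the paper's exact parameters.

```python
# Condensed sketch of pheromone-weighted VM selection for tasks.
# Desirability combines pheromone, load and failure rate; trails
# evaporate each step and the chosen VM's trail is reinforced.

import random
random.seed(7)    # deterministic demo run

vms = [   # per-VM state: pheromone trail, load in [0,1], failure rate
    {"id": "vm1", "pheromone": 1.0, "load": 0.2, "fail_rate": 0.01},
    {"id": "vm2", "pheromone": 1.0, "load": 0.8, "fail_rate": 0.05},
    {"id": "vm3", "pheromone": 1.0, "load": 0.4, "fail_rate": 0.02},
]

def desirability(vm):
    # High pheromone, low load and low failure rate are attractive.
    return vm["pheromone"] * (1.0 - vm["load"]) * (1.0 - vm["fail_rate"])

def choose_vm(vms):
    """Random-proportional rule: pick a VM with probability
    proportional to its desirability."""
    weights = [desirability(vm) for vm in vms]
    return random.choices(vms, weights=weights, k=1)[0]

def assign(task_len, vms, evaporation=0.1, deposit=0.5):
    vm = choose_vm(vms)
    vm["load"] = min(1.0, vm["load"] + task_len)   # task occupies VM
    for other in vms:                              # trails evaporate
        other["pheromone"] *= (1.0 - evaporation)
    vm["pheromone"] += deposit                     # reinforce choice
    return vm["id"]

schedule = [assign(0.05, vms) for _ in range(10)]
print(schedule)   # lightly loaded, reliable VMs dominate the schedule
```

Because load feeds back into desirability, the rule self-balances: every assignment makes the chosen VM a little less attractive, spreading subsequent tasks across the pool.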
IV. RESULTS AND DISCUSSIONS

The simulation of the proposed model was prepared in the Cloud Simulator to test the task-scheduling procedure over a virtual cloud environment. The simulation assumes a scenario of multiple online users, where requests are received from many users at the same time, as happens in social networks like Facebook and Twitter or with online giants such as Google and Amazon. The VM regions are defined according to failure rate and virtual-machine load status, transformed into a threshold value using the mathematical equations. The entire simulation is based on a single-time-zone scenario, in which all users in the given user base are projected as residents of one country or time zone. The simulation can readily be regarded as a peak-hour test of scheduling sample tasks over a given bunch of resources. Task density is assumed to be overwhelming during peak hours, which delays the request response by adding scheduling delay and processing delay; hence the task-scheduling method should be fast enough to reduce the scheduling delay, and the assigned resources should be vigorous enough to process each task as quickly as possible to minimize the processing delay. The design of the proposed technique can be described as four main modules within the task-scheduling procedure: the procedure first obtains the task information, then calculates the task cost by examining the task length and the volume of data associated with the task; afterwards the available virtual machines (VMs) are evaluated for usability, which is predicted on the basis of their resource usage.
The task cost and VM availability are evaluated in terms of resource usage and elapsed time, and the VM offering the least time for the task is selected for execution. VM load and failure rate are the main parameters for the scheduling decisions; both are used for task scheduling over the given cloud resources. The virtual-machine load defines the overall utilization of the resources of the given virtual machine and signifies the runtime availability to process a given task t at a given time t: the tasks running on a VM use a certain amount of resources, and the percentage of resources used during time t is taken as the VM load. When the virtual machines are ordered in the workload-allocation pool for process sequencing in the given cloud environment, load monitoring on each virtual machine becomes a very important step for performing the task scheduling correctly. The virtual-machine load, or overhead, is calculated from parameters such as CPU size and memory size; each VM's load must be calculated from its own local parameters, since using general parameter values can bias the load across the given VMs. The CPU and memory usage on a given VM considerably influences its performance in process sequencing.

Fig.4. Response time per task or transaction

Fig. 4 shows the response time, the total time taken by the cloud platform to generate the reply to the user's request; the simulation results for response time are represented graphically in the figure. The overall response time of process 100 is 0.21 seconds; the response time for process 1 is 13.11 seconds, and for process 99 it is 0.83 seconds.
Process 8 takes 6.91 seconds to complete its task. Fig. 5 shows the energy consumption, monitored for each task to assess the overall energy consumption during the simulation; the proposed model is designed to schedule each task onto the virtual machine with the minimum energy and cost indices. Process 100 consumes 0.1239 joules of energy to complete its execution, and process 99 consumes 0.4897 joules; the figure shows the energy consumed by every individual process in the proposed system.

Fig.5. Energy consumption per task or transaction

Fig. 6 compares the models on the basis of response time, one of the primary performance indicators for cloud computing models: task-scheduling models are expected to return minimum response times in order to handle the maximum number of users every second.

Fig.6. Graphical representation of the response time in seconds

In this comparison the proposed model worked as expected and consistently shows robust performance, reducing the response time relative to the existing model. Fig. 7 shows that the imbalance degree of the proposed model is lower than that of the existing models, demonstrating its robustness in handling multiple tasks (in workflows); the proposed model distributes work among the given resources (VMs) more evenly than the existing model.

Fig.7. Graphical representation of the imbalance degree

The proposed model clearly outperforms the honey-bee-based existing model on the basis of imbalance degree, and it performed better in nearly all of the events assigned different numbers of tasks during the experimental study.
Fig. 8. Graphical representation of the makespan

The proposed ACO-based task scheduling model has been compared against the honey bee algorithm of the existing scenario. Fig. 8 shows the robustness of the proposed model in comparison with the existing model: the makespan recorded for the proposed scenario is significantly lower than for the honey-bee-based existing scenario, so the proposed model has been found more efficient than the existing model.

V. CONCLUSIONS

In cloud computing environments with large user bases, assigning each task to the most suitable resource always remains a difficult problem. The most suitable resource is the one whose configuration is powerful enough to process the given tasks quickly and accurately. The virtual machines are the resources to which tasks, in the form of user requests and internal system data calls and queries, are assigned. Task assignment becomes a critical job when millions of queries or tasks arrive at the cloud platform from its users. Task assignment, or resource scheduling, can be oriented toward processing more tasks in less time, handling critical tasks first, using low-cost computing resources, or any combination of these three. In this paper, we have worked on the problem of processing the maximum number of tasks every second, which can definitely enhance user satisfaction. The proposed model also aims to assign the resources (VMs) with the least load and failure rate to the task pool: the virtual machine with the lowest possible load and the lowest failure rate carries the highest probability of reducing the overall task time and processing the task successfully, which also minimizes the number of tasks in the waiting list. The results of the prospective method have been compared to the existing models with an ACO-based load balancing mechanism, GA, SHC and FCFS.
The results of the proposed model have proved efficient in comparison with the existing models for cloud task processing. The results have been obtained and analyzed in terms of start time, finish time and response time, and the experimental results have verified the effectiveness of the prospective system in scheduling and processing tasks successfully over the given cloud resources.

FUTURE SCOPE

In the future, the resource scheduling method can be improved with a probabilistic mechanism that uses the failure rate, current load, average load and other resource properties. Such a mechanism calculates the probability of a successful transaction for the given task, which ensures correct resource allocation so as to maximize the success rate and minimize the failure rate. An adaptive threshold can be computed so that virtual machines with a failure rate above the threshold are removed from the available resource list for processing critical tasks. The grey wolf optimizer (GWO) may possibly improve the performance of the task scheduling process in contrast with the ant colony optimization (ACO) used in our system. The proposed algorithm balances the load in the cloud and improves the overall performance of the cloud environment; it considers response time and energy as the major QoS parameters. In the future, the algorithm will be improved by examining other QoS parameters and by comparing its results with newer algorithms.

ACKNOWLEDGMENT

I acknowledge, with a deep sense of obligation and most sincere recognition, the helpful guidance and boundless assistance rendered to me by Dr. Bikrampal Kaur, Professor, for her experienced and concerned guidance, useful assistance and enormous help. I hold a deep sense of appreciation for her goodness and infinite patience.
The proposed work will be finished under the guidance of the official guide and other experts at the campus of the Chandigarh Group of Colleges, Landran, Mohali.

REFERENCES

[1] Tari, Z., "Security and Privacy in Cloud Computing," IEEE Cloud Computing, vol. 1, issue 1, 2014, pp. 54-57.
[2] Keromytis, A. D., Misra, V., and Rubenstein, D., "SOS: An Architecture for Mitigating DDoS Attacks," IEEE Journal on Selected Areas in Communications, vol. 22, issue 1, January 2004, pp. 176-188.
[3] Xianfeng, Y. and HongTao, L., "Load Balancing of Virtual Machines in Cloud Computing Environment Using Improved Ant Colony Algorithm," International Journal of Grid and Distributed Computing, vol. 8, issue 6, 2015, pp. 19-30.
[4] Vaquero, L. M., Rodero-Merino, L., Caceres, J. and Lindner, M., "A break in the clouds: towards a cloud definition," ACM SIGCOMM Computer Communication Review, vol. 39, issue 1, 2008, pp. 50-55.
[5] Zhao, Q., Xiong, C., Yu, C., Zhang, C. and Zhao, X., "A new energy-aware task scheduling method for data-intensive applications in the cloud," Journal of Network and Computer Applications, vol. 59, 2016, pp. 14-27.
[6] Bharti, S. and Pattanaik, K. K., "Dynamic distributed flow scheduling with load balancing for data center networks," Procedia Computer Science, vol. 19, 2013, pp. 124-130.
[7] Essa, Y. M., "A Survey of Cloud Computing Fault Tolerance: Techniques and Implementation," International Journal of Computer Applications, vol. 138, issue 13, 2016.
[8] Zhao, W., Melliar-Smith, P. M. and Moser, L. E., "Fault tolerance middleware for cloud computing," IEEE 3rd International Conference on Cloud Computing (CLOUD), July 2010, pp. 67-74.
[9] Dorigo, M., "Optimization, learning and natural algorithms," Ph.D. Thesis, Politecnico di Milano, Italy, 1992.
[10] Rahman, M., Iqbal, S. and Gao, J., "Load balancer as a service in cloud computing," IEEE 8th International Symposium on Service Oriented System Engineering (SOSE), 2014, pp. 204-211.
[11] Bansal, N., Maurya, A., Kumar, T., Singh, M. and Bansal, S., "Cost performance of QoS Driven task scheduling in cloud computing," Procedia Computer Science, vol. 57, 2015, pp. 126-130.
[12] Choi, S., Chung, K. and Yu, H., "Fault tolerance and QoS scheduling using CAN in mobile social cloud