QAM is a widely used multilevel modulation technique, with a variety of applications in data radio communication systems. Most existing implementations of QAM-based systems use high levels of modulation in order to meet the high data rate constraints of emerging applications. This work presents the architecture of a highly parallel QAM modulator, using an MPSoC-based design flow and design methodology, which offers multirate modulation. The proposed MPSoC architecture is modular and provides dynamic reconfiguration of the QAM utilizing on-chip interconnection networks, offering high data rates (more than 1 Gbps) even at low modulation levels (16-QAM). Furthermore, the proposed QAM implementation integrates a hardware-based resource allocation algorithm that can provide better throughput and fault tolerance, depending on the on-chip interconnection network congestion and run-time faults. Preliminary results from this work have been published in the Proceedings of the 18th IEEE/IFIP International Conference on VLSI and System-on-Chip (VLSI-SoC 2010). The current version of the work includes a detailed description of the proposed system architecture, extends the results significantly using more test cases, and investigates the impact of various design parameters. Furthermore, this work investigates the use of the hardware resource allocation algorithm as a graceful degradation mechanism, providing simulation results about the performance of the QAM in the presence of faulty components.

Quadrature Amplitude Modulation (QAM) is a popular modulation scheme, widely used in various communication protocols such as Wi-Fi and Digital Video Broadcasting (DVB). The architecture of a digital QAM modulator/demodulator is typically constrained by several, often conflicting, requirements. Such requirements may include demanding throughput, high immunity to noise, flexibility for various communication standards, and low on-chip power. The majority of existing QAM implementations follow a sequential implementation
approach and rely on high modulation levels in order to meet the emerging high data rate constraints. These techniques, however, are vulnerable to noise at a given transmission power, which reduces the reliable communication distance. The problem is addressed by increasing the number of modulators in a system, through emerging Software-Defined Radio (SDR) systems, which are mapped on MPSoCs in an effort to boost parallelism. These works, however, treat the QAM modulator as an individual system task, whereas it is a task that can further be optimized and designed with further parallelism in order to achieve high data rates, even at low modulation levels.

Designing the QAM modulator in a parallel manner can be beneficial in many ways. Firstly, the resulting parallel (modulated) streams can be combined at the output, resulting in a system whose majority of logic runs at lower clock frequencies, while allowing for high throughput even at low modulation levels. This is particularly important as lower modulation levels are less susceptible to multipath distortion, provide power efficiency, and achieve a low bit error rate (BER). Furthermore, a parallel modulation architecture can benefit multiple-input multiple-output (MIMO) communication systems, where information is sent and received over two or more antennas often shared among many users. Using multiple antennas at both transmitter and receiver offers significant capacity enhancement in many modern applications, including IEEE 802.11n, 3GPP LTE, and mobile WiMAX systems, providing increased throughput at the same channel bandwidth and transmit power. In order to achieve the benefit of MIMO systems, appropriate design aspects of the modulation and demodulation architectures have to be taken into consideration. It is obvious that transmitter architectures with multiple output ports, and the more complicated receiver architectures with multiple input ports, are mainly required. However, the demodulation architecture is beyond the scope of this work and is part of future
work. This work presents an MPSoC implementation of the QAM modulator that can provide a modular and reconfigurable architecture to facilitate integration of the different processing units involved in QAM modulation. The work attempts to investigate how the performance of a sequential QAM modulator can be improved by exploiting parallelism in two forms: first, by developing a simple, pipelined version of the conventional QAM modulator, and second, by using design methodologies employed in present-day MPSoCs in order to map multiple QAM modulators on an underlying MPSoC interconnected via a packet-based network-on-chip (NoC). Furthermore, this work presents a hardware-based resource allocation algorithm, enabling the system to further gain performance through dynamic load balancing. The resource allocation algorithm can also act as a graceful degradation mechanism, limiting the influence of run-time faults on the average system throughput. Additionally, the proposed MPSoC-based system can adopt variable data rates and protocols simultaneously, taking advantage of resource sharing mechanisms. The proposed system architecture was simulated using a high-level simulator and implemented/evaluated on an FPGA platform. Moreover, although this work currently targets QAM-based modulation scenarios, the methodology and reconfiguration mechanisms can target QAM-based demodulation scenarios as well. However, the design and implementation of an MPSoC-based demodulator was left as future work.

While an MPSoC implementation of the QAM modulator is beneficial in terms of throughput, there are overheads associated with the on-chip network. As such, the MPSoC-based modulator was compared to a straightforward implementation featuring multiple QAM modulators, in an effort to identify the conditions that favor the MPSoC implementation.
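As a back-of-the-envelope check of the headline claim, the aggregate rate of parallel low-order QAM streams is simply the stream count times the symbol rate times log2(M) bits per symbol. A minimal sketch (the stream count and symbol rate below are illustrative assumptions, not figures from the paper):

```python
import math

def qam_bits_per_symbol(m: int) -> int:
    # M-ary QAM carries log2(M) bits per symbol: 16-QAM -> 4 bits.
    return int(math.log2(m))

def aggregate_rate_bps(num_streams: int, symbol_rate_hz: float, m: int) -> float:
    # Total throughput of num_streams parallel modulators, each running
    # M-ary QAM at symbol_rate_hz.
    return num_streams * symbol_rate_hz * qam_bits_per_symbol(m)

# Illustrative (assumed) operating point: 16 parallel 16-QAM streams at
# 20 Msym/s each already exceed 1 Gbps in aggregate.
print(aggregate_rate_bps(16, 20e6, 16))  # 1280000000.0
```

This makes the trade-off explicit: the same aggregate rate can come from a few fast high-order modulators or from many slower low-order ones running in parallel.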
Comparison was carried out under variable incoming rates, system configurations and fault conditions. Simulation results showed on average double throughput rates during normal operation and ~25% less throughput degradation in the presence of faulty components, at the cost of approximately 35% more area, obtained from FPGA implementation and synthesis results. The hardware overheads, which stem from the NoC and the resource allocation algorithm, are well within the typical values for NoC-based systems and are adequately balanced by the high throughput rates obtained.

Most of the existing hardware implementations involving QAM modulation/demodulation follow a sequential approach and simply consider the QAM as an individual module. There has been limited design exploration, and most works allow limited reconfiguration, offering inadequate data rates when using low modulation levels. The latter has been addressed through emerging SDR implementations mapped on MPSoCs, which also treat QAM modulation as an individual system task, integrated as part of the system, rather than focusing on optimizing the performance of the modulator. Some existing works use a specific modulation type; they can, however, be extended to use higher modulation levels in order to increase the resulting data rate. Higher modulation levels, though, involve more divisions of both amplitude and phase and can potentially introduce decoding errors at the receiver, as the symbols are very close together (for a given transmission power level) and one level of amplitude may be confused (due to the effect of noise) with a higher level, thus distorting the received signal. In order to avoid this, it is necessary to allow for wide margins, and this can be done by increasing the available amplitude range through power amplification of the RF signal at the transmitter (to effectively spread the symbols out more); otherwise, data bits may be decoded incorrectly at the receiver, resulting in an increased bit error rate (BER).
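The noise-margin argument above can be made concrete: for a square M-QAM constellation normalized to a fixed average symbol energy Es, the minimum inter-symbol distance is d = sqrt(6·Es/(M−1)), so the symbols crowd together as M grows. A small sketch (this normalization is the standard textbook one and is not taken from the paper's own analysis):

```python
import math

def min_symbol_distance(m: int, avg_energy: float = 1.0) -> float:
    # For a square M-QAM constellation with average symbol energy Es,
    # the minimum distance between symbols is d = sqrt(6 * Es / (M - 1)).
    # Fixing Es models a fixed transmit power budget.
    return math.sqrt(6.0 * avg_energy / (m - 1))

for m in (16, 64, 256):
    # The distance shrinks as the modulation level rises, so the same
    # noise power causes more symbol errors at higher M.
    print(m, round(min_symbol_distance(m), 4))
```

At fixed power, 64-QAM symbols sit roughly half as far apart as 16-QAM symbols, which is exactly why the text argues for parallel low-order modulation instead of ever-higher constellation orders.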
However, increasing the amplitude range will operate the RF amplifiers well within their nonlinear (compression) region, causing distortion. Alternative QAM implementations try to avoid the use of multipliers and sine/cosine memories by using the CORDIC algorithm; they still, however, follow a sequential approach. Software-based solutions lie in designing SDR systems mapped on general-purpose processors and/or digital signal processors (DSPs), where the QAM modulator is usually considered as a system task, to be scheduled on an available processing unit. Some works utilize the MPSoC design methodology to implement SDR systems, treating the modulator as an individual system task. Reported results show that the problem with this approach is that several competing tasks running in parallel with QAM may hurt the performance of the modulation, making this approach inadequate for demanding wireless communications in terms of throughput and energy efficiency. Another particular issue raised in prior work is the efficiency of the allocation algorithm. The allocation algorithm is implemented on a processor, which makes allocation slow. Moreover, the policies used to allocate tasks to processors (random allocation and distance-based allocation) may lead to on-chip contention and unbalanced loads at each processor, since the utilization of each processor is not taken into account. In one approach, a hardware unit called CoreManager is used for run-time scheduling of tasks, which aims at speeding up the allocation algorithm. These conclusions motivate implementing more tasks, such as reconfiguration and resource allocation, in hardware rather than in software running on dedicated CPUs, in an effort to reduce power consumption and improve the flexibility of the system. This work presents a reconfigurable QAM modulator using MPSoC design methodologies and an on-chip network, with an integrated hardware resource allocation mechanism for dynamic reconfiguration. The allocation algorithm takes into consideration not only the
distance between partitioned blocks (hop count) but also the utilization of each block, in an attempt to make the proposed MPSoC-based QAM modulator able to achieve robust performance under different incoming rates of data streams and different modulation levels. Moreover, the allocation algorithm inherently acts as a graceful degradation mechanism, limiting the influence of run-time faults on the average system throughput.

We used MPSoC design methodologies to map the QAM modulator onto an MPSoC architecture, which uses an on-chip, packet-based NoC. This allows a modular, "plug-and-play" approach that permits the integration of heterogeneous processing elements, in an attempt to create a reconfigurable QAM modulator. By partitioning the QAM modulator into different stand-alone tasks mapped on Processing Elements (PEs), [...] This would require a content-addressable memory search and would expand the hardware logic of each sender PE's NIRA. Since one of our objectives is scalability, we integrated the hop count inside each destination PE's packet. The source PE polls its host NI for incoming control packets, which are stored in an internal FIFO queue. During each interval T, when the source PE receives the first control packet, a second timer is activated for a specified number of clock cycles, W. When this timer expires, the polling is halted and a heuristic algorithm based on the received conditions is run, in order to decide the next destination PE. In the case where a control packet is not received from a source PE in the specified time interval W, this PE is not included in the algorithm. This is a key feature of the proposed MPSoC-based QAM modulator; at extremely loaded conditions, it attempts to maintain a stable data rate by finding alternative PEs which are less busy.
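The destination-selection step described above can be sketched as a cost heuristic over the PEs whose control packets arrived within the window W. The linear hop-count-plus-utilization cost and its weights below are illustrative assumptions; the paper does not publish the exact cost function:

```python
def choose_destination(reports, w_hops=1.0, w_util=1.0):
    """Pick the next destination PE from the control packets received
    within the polling window W.  Each report is (pe_id, hop_count,
    utilization in [0, 1]); PEs whose report did not arrive in time are
    simply absent from `reports` and thus excluded from the decision.
    The linear cost and its weights are assumptions for illustration."""
    if not reports:
        return None  # nobody reported in time; keep the current destination
    def cost(report):
        _, hops, util = report
        return w_hops * hops + w_util * util
    return min(reports, key=cost)[0]

# PE 1 is one hop away but 90% utilized; PE 3 is farther but nearly idle.
# Weighting utilization heavily lets the less busy PE win.
reports = [(1, 1, 0.90), (3, 2, 0.10)]
print(choose_destination(reports, w_hops=1.0, w_util=5.0))  # 3
```

The same mechanism yields graceful degradation for free: a faulty PE stops sending control packets, drops out of `reports`, and traffic is steered to the surviving PEs without any dedicated fault-handling path.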
Control of Dynamic Systems

Control of dynamic systems is a crucial aspect of engineering and technology, as it involves the management and regulation of systems that are constantly changing and evolving. This field encompasses a wide range of applications, from industrial processes and manufacturing to aerospace and automotive systems. The ability to effectively control dynamic systems is essential for ensuring safety, efficiency, and reliability in various engineering and technological domains.

One of the key challenges in the control of dynamic systems is the inherent complexity and unpredictability of these systems. Dynamic systems are characterized by their continuous and time-varying behavior, making it difficult to accurately model and predict their responses to different inputs and disturbances. This complexity often requires the use of advanced control techniques and algorithms to effectively manage and regulate dynamic systems in real time.

Another important aspect of controlling dynamic systems is the need to account for uncertainties and disturbances that can affect the system's behavior. These uncertainties can arise from various sources, such as variations in operating conditions, environmental factors, and component failures. As a result, control strategies must be robust and adaptive to ensure that the system can continue to operate safely and effectively under changing and unpredictable conditions.

In addition to technical challenges, the control of dynamic systems also involves ethical and social considerations. For example, in the automotive industry, the development of autonomous vehicles raises important questions about safety, liability, and the ethical implications of delegating control to machines. Similarly, in industrial automation, the implementation of advanced control systems can have significant implications for the workforce and employment, raising concerns about job displacement and the ethical use of technology.
From a practical standpoint, the control of dynamic systems also requires a multidisciplinary approach, involving expertise in engineering, mathematics, computer science, and other related fields. Engineers and technologists working in this field must be able to collaborate effectively across different disciplines to develop and implement control solutions that are both technically sound and practical to deploy in real-world applications.

Furthermore, the control of dynamic systems also presents opportunities for innovation and advancement in engineering and technology. As new control techniques and technologies continue to emerge, there is potential for significant improvements in the performance, efficiency, and safety of dynamic systems across various industries. This ongoing innovation is essential for addressing the evolving needs and challenges of modern society, from sustainable energy systems to advanced transportation solutions.

In conclusion, the control of dynamic systems is a complex and multifaceted field that plays a critical role in engineering and technology. From technical challenges and ethical considerations to practical and interdisciplinary requirements, the control of dynamic systems requires a comprehensive and holistic approach. By addressing these various perspectives and challenges, engineers and technologists can continue to advance the state of the art in controlling dynamic systems, leading to safer, more efficient, and more reliable technologies for the benefit of society.
The highlights of SIMOCODE pro
• Extensive protection, monitoring and control functions, independent of the automation system
• Detailed operational, service and diagnostics data – at any time or place
• Safe shutdown of motors
• Scalable, flexible solutions for all plant configurations
• Versatile, open communication via various bus systems and protocols
• Integration in process control systems such as SIMATIC PCS 7
• Supports PROFINET system redundancy and dynamic reconfiguration

SIMOCODE pro offers multifunctional, solid-state full motor protection. The motor management system monitors, protects and controls constant-speed motors and enables the implementation of predictive maintenance. It does not wait for a problem to occur before shutting down the motor, but establishes a level of transparency in advance. This avoids plant standstills and improves economic efficiency. SIMOCODE pro delivers detailed operating, service and diagnostic data from across the entire process. The engineering is simple, and likewise the integration into process control systems. SIMOCODE pro communicates via PROFIBUS and PROFINET, Modbus, EtherNet/IP and OPC UA. It implements simple and economical motor management. With both the SIMOCODE pro General Performance and SIMOCODE pro High Performance device classes, we offer scalable, flexible solutions for industrial controls and plant optimization in the context of Industrie 4.0.

For 30 years now, SIMOCODE pro has been controlling and monitoring low-voltage, constant-speed motors all over the world. Wherever motors keep things running in the process industry, SIMOCODE is there. Many thousands of times over. Now even more powerful thanks to connection to the Cloud. Flexible, modular, integrated. The way modern motor management should be.

Two functionally graded device series form the core of the multifunctional SIMOCODE pro motor management system: General Performance and High Performance.
The devices in both series incorporate all essential motor protection, monitoring and control functions – including data transparency through the Cloud connection. SIMOCODE pro General Performance is your entry into modern motor management and addresses standard motor applications. SIMOCODE pro High Performance features up to five expansion modules and offers additional measured variables. Find out how you can take advantage of the two SIMOCODE pro device series in all areas of the process industry.

SIMOCODE pro. A really strong family.

SIMOCODE pro – General Performance: Ideal for the entry level
The smart and compact motor management system for direct-on-line, reversing, and star-delta (wye-delta) starters or for controlling a motor starter protector or soft starter. The basic system includes a current measuring module and the basic unit for overload or thermistor motor protection, for example. Communication with the automation level takes place via PROFIBUS/PROFINET. Optional additions include an operator panel and an expansion module that allows additional inputs/outputs, ground-fault detection and temperature measurement to be realized.

SIMOCODE pro – High Performance: The fully professional solution for every motor
The SIMOCODE pro High Performance motor management system is variable, intelligent and can be adapted individually to suit every requirement. The basic system includes a module for measuring current (and optionally also voltage), as well as a basic unit, and is suitable for removing pump blockages, for example. Communication with the automation level takes place via PROFIBUS or Modbus RTU, via Ethernet with the PROFINET or EtherNet/IP protocols, and also via OPC UA.
The optional expansions available include separate current/voltage measuring modules for dry-running protection, an operator panel with display, a ground-fault module, a temperature module, standard digital modules, fail-safe digital modules and an analog module.

SIMOCODE pro Safety: Fail-safe expansion modules
Various modules are available for SIMOCODE pro for the extended protection of personnel, machines and the environment. These guarantee the safety-related shutdown of motors and meet all the requirements of the standards. The advantages:
• Functional switching and fail-safe shutdown without manual wiring or additional effort
• Safety function parameters can be flexibly configured
• Transfer of meaningful diagnostic data to the control system
• Logging of errors for detailed evaluation
• Fail-safe shutdown via PROFIsafe

SIMOCODE pro – The advantages at a glance
• Minimized downtimes – with less maintenance work
• Energy and cost savings over the entire plant life cycle
• Simple use
• Scalable solution
• A choice of networked or autonomous configuration

Remove pump blockages and increase availability.
Every water utility is familiar with the problems of a blocked pump – and the possible consequences: environmental harm, damage due to flooding, and dangers to health as a result of lifting and cleaning pumps. This is compounded by the financial impact of plant downtimes. SIMOCODE pro monitors the current and active power of the pump motor – and derives the pump status from them. If a defined threshold value is exceeded, SIMOCODE pro autonomously reverses the rotational direction of the pump in order to dislodge deposits on the impeller blades, for example. Say goodbye to blocked pumps with SIMOCODE pro – the modular, compact motor management system that tackles the challenge by automatically reversing the pump. Another benefit: SIMOCODE pro can be retrofitted in existing plants.

Reliable dry-running protection is a must in many applications in the chemical industry.
SIMOCODE pro reliably prevents the dry running of centrifugal pumps in order to preclude hazardous situations – and completely redefines dry-running protection for pumps in hazardous areas with an innovative solution. Sensors on centrifugal pumps in hazardous areas are often prone to fault and are thus high-maintenance. The solution: Using active power-based dry-running detection, SIMOCODE pro monitors the active electrical power consumption of the pump motor and thus the status of the pump – without the need for additional monitoring devices or sensors to be installed. The new technology ensures reliable explosion protection in accordance with ATEX and IECEx criteria and saves costs and time for commissioning and maintenance.

Your benefits through active power-based dry-running protection
• Earlier fault detection
– Direct conclusions concerning the flow rate can be drawn from the active power consumption of the pump motor
– Reliable prevention of dry running of the pump and therefore less damage to the pump
• Cost and time savings
– No maintenance effort due to the elimination of mechanical wear of the sensors
– No additional sensor required
• Reduction of hardware
– No need for additional sensors and mechanical components
– Simplified engineering
• Reliable monitoring of the system
– Compliance with ATEX and IECEx criteria
– Reliable and automatic pump shutdown in the event of inadmissible operating conditions

Reliable monitoring. Dry-running protection reconceptualized.

SIMOCODE pro with OPC UA. MindSphere.
SIMOCODE pro is your reliable data supplier for maximum process quality.
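The active power-based detection described above amounts to comparing the motor's measured active power against a configured dry-running threshold. A hedged sketch of that logic (the three-phase power formula is standard; the voltage, current, and threshold values are made-up examples, not SIMOCODE parameters):

```python
import math

def active_power_w(v_rms: float, i_rms: float, power_factor: float) -> float:
    # Three-phase active power: P = sqrt(3) * V_line * I_line * cos(phi).
    return math.sqrt(3.0) * v_rms * i_rms * power_factor

def dry_running_trip(p_active_w: float, threshold_w: float) -> bool:
    # A dry-running centrifugal pump draws markedly less active power
    # than a loaded one, so trip when P falls below the threshold.
    return p_active_w < threshold_w

p = active_power_w(400.0, 10.0, 0.85)    # roughly 5.9 kW when loaded
print(dry_running_trip(p, 3000.0))       # False: pump is pumping
print(dry_running_trip(0.4 * p, 3000.0)) # True: power collapsed, likely dry
```

The point of the design is visible in the sketch: the quantities involved (voltage, current, power factor) are already measured for motor protection, so no extra flow or level sensor is needed.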
The motor management system offers:
• Data analysis and simulation
• Secure data storage and transmission
• Visualization and recommendation(s)
• Increased availability of components
• Optimization of energy consumption
• Maximization of process efficiency
• Support of PROFINET system redundancy and dynamic reconfiguration

For diagnostics and simple configuration, including in the TIA Portal: SIMOCODE ES
SIMOCODE ES provides you with the software for the configuration, startup, operation and diagnosis of SIMOCODE pro. The software is based on the central Totally Integrated Automation Portal (TIA Portal) engineering framework, providing an integrated, efficient and intuitive solution for all automation tasks. SIMOCODE ES offers you a host of advantages, including convenient configuration in the device view, graphical commissioning using drag-and-drop functions, mass engineering, presentation of signal states online, or clear measurement curves for diagnostic purposes.

The convenient way to optimum process guidance: The integration of SIMOCODE pro into SIMATIC PCS 7
Using standardized blocks and faceplates, SIMOCODE pro can very easily be integrated into the SIMATIC PCS 7 process control system. This makes it extremely easy to integrate service and diagnostic data from the motor management system into higher-level process control systems, for example. The result: A high level of transparency throughout the plant, enabling faults to be detected at an early stage or prevented from occurring altogether. In general, the greater density of information in the control system enables you to achieve not only greater transparency, but also higher process quality. Support of redundancy mechanisms and dynamic reconfiguration (device extension during ongoing operation) increases the plant availability.

SIMOCODE pro system overview – General Performance

Published by Siemens AG, Smart Infrastructure Electrical Products, Werner-von-Siemens-Str. 48–50, 92224 Amberg, Germany. For the U.S.
published by Siemens Industry Inc., 100 Technology Drive, Alpharetta, GA 30005, United States. Article No. SIEP-B10327-00-7600. Printed in Germany. WS 09221.0 Dispo 25600.

Subject to changes and errors. The information given in this document only contains general descriptions and/or performance features which may not always specifically reflect those described, or which may undergo modification in the course of further development of the products. The requested performance features are binding only when they are expressly agreed upon in the concluded contract. SIMOCODE is a registered trademark of Siemens AG. Any unauthorized use is prohibited. All other designations in this document may represent trademarks whose use by third parties for their own purposes may violate the proprietary rights of the owner. © Siemens 2022

Software
SIMOCODE ES (TIA Portal) V18 Basic – Basic functional scope including Professional Trial License. Both software and documentation can be downloaded for free, see: https:///cs/ww/en/view/109811683
SIMOCODE ES (TIA Portal) V18 Professional – Floating license for one user
• License key on USB flash drive, SW on DVD: 3ZS1322-6CC16-0YA5
• License key and software download: 3ZS1322-6CE16-0YB5
• Upgrade for SIMOCODE ES 2007 Premium: 3ZS1322-6CC16-0YE5
• Software Update Service: 3ZS1322-6CC00-0YL5
SIMOCODE pro block library for SIMATIC PCS 7 Version V9.1 with Advanced Process Library (APL)
• Engineering software V9.1 (OSD): 3ZS1632-1XE04-0YA0
• Runtime license V9.1 (OSD): 3ZS1632-2XE04-0YB0
• Upgrade for PCS 7 block library SIMOCODE pro V8 or V9.0 (OSD): 3ZS1632-1XE04-0YE0
• Upgrade for PCS 7 block library SIMOCODE pro V7 (without APL): 3ZS1632-1XE04-0YF0
SIMOCODE pro block library for SIMATIC PCS 7 Version V9.0 with Advanced Process Library (APL)
• Engineering software V9.0 (OSD): 3ZS1632-1XE03-0YA0
• Runtime license V9.0 (OSD): 3ZS1632-2XE03-0YB0
• Upgrade for PCS 7 block library SIMOCODE pro V8: 3ZS1632-1XX03-0YE0
SIMOCODE pro block library for SIMATIC PCS 7 without Advanced Process Library (APL)
Common Multipath Management Software

Multipath I/O. In computer storage technology, multipath I/O provides fault tolerance and performance improvement: there is more than one physical path between the CPU in a computer system and its block storage devices, through the buses, controllers, switches, and bridge devices connecting them.

A simple example: one SCSI disk in a computer connected to two SCSI controllers, or a disk connected to two Fibre Channel ports.

Should one controller, port or switch fail, the operating system automatically routes I/O through the redundant controller for the applications to use, though this may add latency. Some multipath software can exploit the redundant paths to improve performance, for example:
Dynamic load balancing
Traffic shaping
Automatic path management
Dynamic reconfiguration

Multipath I/O software implementations
Some operating systems have built-in multipath support, as follows:
SGI IRIX – using the LV, XLV and later XVM volume managers (1990s and onwards)
AIX – MPIO Driver, AIX 5L 5.2 (October 2002) and later
HP-UX 11.31 (2007)
Linux – Device-Mapper Multipath, Linux kernel 2.6.13 (August 2005)
OpenVMS V7.2 (1999) and later
Solaris Multiplexed I/O (MPxIO), Solaris 8 (February 2000) and later
Windows – MPIO Driver, Windows Server 2003 and Windows Server 2008 (April 2003)
FreeBSD – GEOM_FOX module
Mac OS X Leopard and Mac OS X Leopard Server 10.5.2

Multipath software products:
AntemetA Multipathing Software solution for AIX for HP EVA Disk Arrays
NEC PathManager
EMC PowerPath
FalconStor IPStor DynaPath
Fujitsu Siemens MultiPath for Linux and Windows OS
Fujitsu ETERNUS Multipath Driver (ETERNUSmpd) for Solaris, Windows, Linux and AIX
Hitachi HiCommand Dynamic Link Manager (HDLM)
HP StorageWorks Secure Path
NCR UNIX MP-RAS EMPATH for EMC Disk Arrays
NCR UNIX MP-RAS RDAC for Engenio Disk Arrays
ONStor SDM multipath
IBM System Storage Multipath Subsystem Device Driver (SDD), formerly Data Path Optimizer
Accusys PathGuard
Infortrend EonPath
Sun Multipath failover driver for Windows and AIX
Sun StorEdge Traffic Manager Software, included in the Sun Java StorEdge Software suite
Linux:
multipath-tools, used to drive the Device Mapper multipathing driver, first released in September 2003
Fibreutils package for QLogic HBAs
RDAC package for LSI disk controllers
lpfcdriver package for Emulex HBAs
Veritas:
Veritas Storage Foundation (VxSF)
Veritas Volume Manager (VxVM)
Pillar Data Systems:
Axiom Path Manager for AIX, Windows, Linux, and Solaris
Areca Multipath failover driver for Windows
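The failover and load-balancing behavior described above can be illustrated with a toy path selector (a purely hypothetical sketch; it mirrors no specific vendor's driver, and the path names are made up):

```python
def pick_path(paths):
    """Toy multipath selector: `paths` maps a path name to a tuple
    (healthy, inflight_ios).  Failed paths are skipped (failover);
    among healthy paths the least-loaded one wins (load balancing)."""
    healthy = {name: io for name, (ok, io) in paths.items() if ok}
    if not healthy:
        raise RuntimeError("all paths failed")
    return min(healthy, key=healthy.get)

paths = {"sda": (True, 12), "sdb": (True, 3), "sdc": (False, 0)}
print(pick_path(paths))      # sdb: healthy and least loaded
paths["sdb"] = (False, 0)    # sdb fails at run time...
print(pick_path(paths))      # sda: I/O fails over automatically
```

Real implementations layer retry logic, path health probing, and grouping policies on top of this basic idea, which is why failover can add latency even though it is transparent to applications.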
EP4CGX15BF14C8N

© 2011 Altera Corporation. All rights reserved.

Cyclone IV Device Handbook, Volume 1 – November 2011

1. Cyclone IV FPGA Device Family Overview
Altera's new Cyclone® IV FPGA device family extends the Cyclone FPGA series leadership in providing the market's lowest-cost, lowest-power FPGAs, now with a transceiver variant.
Cyclone IV devices are targeted to high-volume, cost-sensitive applications, enabling system designers to meet increasing bandwidth requirements while lowering costs. Built on an optimized low-power process, the Cyclone IV device family offers the following two variants:
■ Cyclone IV E – lowest power, high functionality with the lowest cost
■ Cyclone IV GX – lowest power and lowest cost FPGAs with 3.125 Gbps transceivers
Cyclone IV E devices are offered in core voltages of 1.0 V and 1.2 V. For more information, refer to the Power Requirements for Cyclone IV Devices chapter.
Providing power and cost savings without sacrificing performance, along with a low-cost integrated transceiver option, Cyclone IV devices are ideal for low-cost, small-form-factor applications in the wireless, wireline, broadcast, industrial, consumer, and communications industries.

Cyclone IV Device Family Features
The Cyclone IV device family offers the following features:
■ Low-cost, low-power FPGA fabric:
■ 6K to 150K logic elements
■ Up to 6.3 Mb of embedded memory
■ Up to 360 18 × 18 multipliers for DSP processing-intensive applications
■ Protocol bridging applications for under 1.5 W total power
■ Cyclone IV GX devices offer up to eight high-speed transceivers that provide:
■ Data rates up to 3.125 Gbps
■ 8B/10B encoder/decoder
■ 8-bit or 10-bit physical media attachment (PMA) to physical coding sublayer (PCS) interface
■ Byte serializer/deserializer (SERDES)
■ Word aligner
■ Rate matching FIFO
■ TX bit slipper for Common Public Radio Interface (CPRI)
■ Electrical idle
■ Dynamic channel reconfiguration allowing you to change data rates and protocols on-the-fly
■ Static equalization and pre-emphasis for superior signal integrity
■ 150 mW per channel power consumption
■ Flexible clocking structure to support multiple protocols in a single transceiver block
■ Cyclone IV GX devices offer
dedicated hard IP for PCI Express (PIPE) (PCIe) Gen 1:
■ ×1, ×2, and ×4 lane configurations
■ End-point and root-port configurations
■ Up to 256-byte payload
■ One virtual channel
■ 2 KB retry buffer
■ 4 KB receiver (Rx) buffer
■ Cyclone IV GX devices offer a wide range of protocol support:
■ PCIe (PIPE) Gen 1 ×1, ×2, and ×4 (2.5 Gbps)
■ Gigabit Ethernet (1.25 Gbps)
■ CPRI (up to 3.072 Gbps)
■ XAUI (3.125 Gbps)
■ Triple-rate serial digital interface (SDI) (up to 2.97 Gbps)
■ Serial RapidIO (3.125 Gbps)
■ Basic mode (up to 3.125 Gbps)
■ V-by-One (up to 3.0 Gbps)
■ DisplayPort (2.7 Gbps)
■ Serial Advanced Technology Attachment (SATA) (up to 3.0 Gbps)
■ OBSAI (up to 3.072 Gbps)
■ Up to 532 user I/Os
■ LVDS interfaces up to 840 Mbps transmitter (Tx), 875 Mbps Rx
■ Support for DDR2 SDRAM interfaces up to 200 MHz
■ Support for QDRII SRAM and DDR SDRAM up to 167 MHz
■ Up to eight phase-locked loops (PLLs) per device
■ Offered in commercial and industrial temperature grades

Device Resources
Table 1–1 lists Cyclone IV E device resources.

Table 1–1. Resources for the Cyclone IV E Device Family

Resources                     EP4CE6  EP4CE10  EP4CE15  EP4CE22  EP4CE30  EP4CE40  EP4CE55  EP4CE75  EP4CE115
Logic elements (LEs)          6,272   10,320   15,408   22,320   28,848   39,600   55,856   75,408   114,480
Embedded memory (Kbits)       270     414      504      594      594      1,134    2,340    2,745    3,888
Embedded 18 × 18 multipliers  15      23       56       66       66       116      154      200      266
General-purpose PLLs          2       2        4        4        4        4        4        4        4
Global clock networks         10      10       20       20       20       20       20       20       20
User I/O banks                8       8        8        8        8        8        8        8        8
Maximum user I/O (1)          179     179      343      153      532      532      374      426      528

Note to Table 1–1:
(1) The user I/Os count from pin-out files includes all general purpose I/O, dedicated clock pins, and dual purpose configuration pins.
Transceiver pins and dedicated configuration pins are not included in the pin count.

Table 1–2 lists Cyclone IV GX device resources.

Table 1–2. Resources for the Cyclone IV GX Device Family

Device        | LEs     | Embedded memory (Kbits) | 18 × 18 multipliers | GP PLLs | Multipurpose PLLs | Global clock networks | Transceivers (6) | Max data rate (Gbps) | PCIe hard IP blocks | User I/O banks | Max user I/O (9)
EP4CGX15      | 14,400  | 540   | 0   | 1     | 2 (5) | 20 | 2 | 2.5   | 1 | 9 (7)  | 72
EP4CGX22      | 21,280  | 756   | 40  | 2     | 2 (5) | 20 | 4 | 2.5   | 1 | 9 (7)  | 150
EP4CGX30 (1)  | 29,440  | 1,080 | 80  | 2     | 2 (5) | 20 | 4 | 2.5   | 1 | 9 (7)  | 150
EP4CGX30 (2)  | 29,440  | 1,080 | 80  | 4 (4) | 2 (5) | 30 | 4 | 3.125 | 1 | 11 (8) | 290
EP4CGX50 (3)  | 49,888  | 2,502 | 140 | 4 (4) | 4 (5) | 30 | 8 | 3.125 | 1 | 11 (8) | 310
EP4CGX75 (3)  | 73,920  | 4,158 | 198 | 4 (4) | 4 (5) | 30 | 8 | 3.125 | 1 | 11 (8) | 310
EP4CGX110 (3) | 109,424 | 5,490 | 280 | 4 (4) | 4 (5) | 30 | 8 | 3.125 | 1 | 11 (8) | 475
EP4CGX150 (3) | 149,760 | 6,480 | 360 | 4 (4) | 4 (5) | 30 | 8 | 3.125 | 1 | 11 (8) | 475

Notes to Table 1–2:
(1) Applicable for the F169 and F324 packages.
(2) Applicable for the F484 package.
(3) Only two multipurpose PLLs for the F484 package.
(4) Two of the general-purpose PLLs are able to support transceiver clocking. For more information, refer to the Clock Networks and PLLs in Cyclone IV Devices chapter.
(5) You can use the multipurpose PLLs for general-purpose clocking when they are not used to clock the transceivers.
For more information, refer to the Clock Networks and PLLs in Cyclone IV Devices chapter.
(6) If PCIe ×1 is used, you can use the remaining transceivers in a quad for other protocols at the same or different data rates.
(7) Including one configuration I/O bank and two dedicated clock input I/O banks for HSSI reference clock input.
(8) Including one configuration I/O bank and four dedicated clock input I/O banks for HSSI reference clock input.
(9) The user I/O count from pin-out files includes all general-purpose I/O, dedicated clock pins, and dual-purpose configuration pins. Transceiver pins and dedicated configuration pins are not included in the pin count.

Cyclone IV Device Family Speed Grades

Table 1–5 lists the Cyclone IV GX device speed grades. Table 1–6 lists the Cyclone IV E device speed grades.

Table 1–5. Speed Grades for the Cyclone IV GX Device Family

Device    | N148       | F169           | F324           | F484           | F672           | F896
EP4CGX15  | C7, C8, I7 | C6, C7, C8, I7 | —              | —              | —              | —
EP4CGX22  | —          | C6, C7, C8, I7 | C6, C7, C8, I7 | —              | —              | —
EP4CGX30  | —          | C6, C7, C8, I7 | C6, C7, C8, I7 | C6, C7, C8, I7 | —              | —
EP4CGX50  | —          | —              | —              | C6, C7, C8, I7 | C6, C7, C8, I7 | —
EP4CGX75  | —          | —              | —              | C6, C7, C8, I7 | C6, C7, C8, I7 | —
EP4CGX110 | —          | —              | —              | C7, C8, I7     | C7, C8, I7     | C7, C8, I7
EP4CGX150 | —          | —              | —              | C7, C8, I7     | C7, C8, I7     | C7, C8, I7

Table 1–6. Speed Grades for the Cyclone IV E Device Family (1), (2)

Device   | E144                              | M164 | U256 | F256                              | U484 | F484                              | F780
EP4CE6   | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —    | I7N  | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —    | —                                 | —
EP4CE10  | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —    | I7N  | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —    | —                                 | —
EP4CE15  | C8L, C9L, I8L, C6, C7, C8, I7     | I7N  | I7N  | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —    | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —
EP4CE22  | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —    | I7N  | C8L, C9L, I8L, C6, C7, C8, I7, A7 | —    | —                                 | —
EP4CE30  | —                                 | —    | —    | —                                 | —    | C8L, C9L, I8L, C6, C7, C8, I7, A7 | C8L, C9L, I8L, C6, C7, C8, I7
EP4CE40  | —                                 | —    | —    | —                                 | I7N  | C8L, C9L, I8L, C6, C7, C8, I7, A7 | C8L, C9L, I8L, C6, C7, C8, I7
EP4CE55  | —                                 | —    | —    | —                                 | I7N  | C8L, C9L, I8L, C6, C7, C8, I7     | C8L, C9L, I8L, C6, C7, C8, I7
EP4CE75  | —                                 | —    | —    | —                                 | I7N  | C8L, C9L, I8L, C6, C7, C8, I7     | C8L, C9L, I8L, C6, C7, C8, I7
EP4CE115 | —                                 | —    | —    | —                                 | —    | C8L, C9L, I8L, C7, C8, I7         | C8L, C9L, I8L, C7, C8, I7

Notes to Table 1–6:
(1) C8L, C9L, and I8L speed grades are applicable for the 1.0-V core voltage.
(2) C6, C7, C8, I7, and A7 speed grades are applicable for the 1.2-V core voltage.

Cyclone IV Device Family Architecture

This section describes the Cyclone IV device architecture and contains the following topics:
■ "FPGA Core Fabric"
■ "I/O Features"
■ "Clock Management"
■ "External Memory Interfaces"
■ "Configuration"
■ "High-Speed Transceivers (Cyclone IV GX Devices Only)"
■ "Hard IP for PCI Express (Cyclone IV GX Devices Only)"

FPGA Core Fabric

Cyclone IV devices leverage the same core fabric as the very successful Cyclone series devices. The fabric consists of LEs, made of 4-input look-up tables (LUTs), memory blocks, and multipliers.

Each Cyclone IV device M9K memory block provides 9 Kbits of embedded SRAM memory. You can configure the M9K blocks as single-port, simple dual-port, or true dual-port RAM, as well as FIFO buffers or ROM. They can also be configured to implement any of the data widths in Table 1–7.

Table 1–7. M9K Block Data Widths for the Cyclone IV Device Family

Mode                            | Data Width Configurations
Single port or simple dual port | ×1, ×2, ×4, ×8/9, ×16/18, and ×32/36
True dual port                  | ×1, ×2, ×4, ×8/9, and ×16/18

The multiplier architecture in Cyclone IV devices is the same as in the existing Cyclone series devices. The embedded multiplier blocks can implement an 18 × 18 or two 9 × 9 multipliers in a single block. Altera offers a complete suite of DSP IP including finite impulse response (FIR), fast Fourier transform (FFT), and numerically controlled oscillator (NCO) functions for use with the multiplier blocks.
The Quartus® II design software's DSP Builder tool integrates the MathWorks Simulink and MATLAB design environments for a streamlined DSP design flow.

For more information, refer to the Logic Elements and Logic Array Blocks in Cyclone IV Devices, Memory Blocks in Cyclone IV Devices, and Embedded Multipliers in Cyclone IV Devices chapters.

I/O Features

Cyclone IV device I/O supports programmable bus hold, programmable pull-up resistors, programmable delay, programmable drive strength, and programmable slew-rate control to optimize signal integrity, as well as hot socketing. Cyclone IV devices support calibrated on-chip series termination (Rs OCT) or driver impedance matching (Rs) for single-ended I/O standards. In Cyclone IV GX devices, the high-speed transceiver I/Os are located on the left side of the device. The top, bottom, and right sides can implement general-purpose user I/Os.

Table 1–8 lists the I/O standards that Cyclone IV devices support. The LVDS SERDES is implemented in the core of the device using logic elements.

For more information, refer to the I/O Features in Cyclone IV Devices chapter.

Clock Management

Cyclone IV devices include up to 30 global clock (GCLK) networks and up to eight PLLs with five outputs per PLL to provide robust clock management and synthesis. You can dynamically reconfigure Cyclone IV device PLLs in user mode to change the clock frequency or phase.

Cyclone IV GX devices support two types of PLLs: multipurpose PLLs and general-purpose PLLs:
■ Use multipurpose PLLs for clocking the transceiver blocks. You can also use them for general-purpose clocking when they are not used for transceiver clocking.
■ Use general-purpose PLLs for general-purpose applications in the fabric and periphery, such as external memory interfaces.
Some of the general-purpose PLLs can support transceiver clocking.

For more information, refer to the Clock Networks and PLLs in Cyclone IV Devices chapter.

Table 1–8. I/O Standards Support for the Cyclone IV Device Family

Type             | I/O Standard
Single-ended I/O | LVTTL, LVCMOS, SSTL, HSTL, PCI, and PCI-X
Differential I/O | SSTL, HSTL, LVPECL, BLVDS, LVDS, mini-LVDS, RSDS, and PPDS

External Memory Interfaces

Cyclone IV devices support SDR, DDR, DDR2 SDRAM, and QDRII SRAM interfaces on the top, bottom, and right sides of the device. Cyclone IV E devices also support these interfaces on the left side of the device. Interfaces may span two or more sides of the device to allow more flexible board design. The Altera® DDR SDRAM memory interface solution consists of a PHY interface and a memory controller. Altera supplies the PHY IP and you can use it in conjunction with your own custom memory controller or an Altera-provided memory controller. Cyclone IV devices support the use of error correction coding (ECC) bits on DDR and DDR2 SDRAM interfaces.

For more information, refer to the External Memory Interfaces in Cyclone IV Devices chapter.

Configuration

Cyclone IV devices use SRAM cells to store configuration data. Configuration data is downloaded to the Cyclone IV device each time the device powers up. Low-cost configuration options include the Altera EPCS family serial flash devices and commodity parallel flash configuration options.
These options provide the flexibility for general-purpose applications and the ability to meet specific configuration and wake-up time requirements of the applications. Table 1–9 lists the configuration schemes supported by Cyclone IV devices.

Table 1–9. Configuration Schemes for the Cyclone IV Device Family

Devices       | Supported Configuration Scheme
Cyclone IV GX | AS, PS, JTAG, and FPP (1)
Cyclone IV E  | AS, AP, PS, FPP, and JTAG

Note to Table 1–9:
(1) The FPP configuration scheme is only supported by the EP4CGX30F484 and EP4CGX50/75/110/150 devices.

IEEE 1149.6 (AC JTAG) is supported on all transceiver I/O pins. All other pins support IEEE 1149.1 (JTAG) for boundary-scan testing.

For more information, refer to the JTAG Boundary-Scan Testing for Cyclone IV Devices chapter.

For Cyclone IV GX devices to meet the PCIe 100-ms wake-up time requirement, you must use passive serial (PS) configuration mode for the EP4CGX15/22/30 devices and fast passive parallel (FPP) configuration mode for the EP4CGX30F484 and EP4CGX50/75/110/150 devices.

For more information, refer to the Configuration and Remote System Upgrades in Cyclone IV Devices chapter.

The cyclical redundancy check (CRC) error detection feature during user mode is supported in all Cyclone IV GX devices. For Cyclone IV E devices, this feature is only supported for the devices with a core voltage of 1.2 V.

For more information about CRC error detection, refer to the SEU Mitigation in Cyclone IV Devices chapter.

High-Speed Transceivers (Cyclone IV GX Devices Only)

Cyclone IV GX devices contain up to eight full-duplex high-speed transceivers that can operate independently. These blocks support multiple industry-standard communication protocols, as well as Basic mode, which you can use to implement your own proprietary protocols. Each transceiver channel has its own pre-emphasis and equalization circuitry, which you can set at compile time to optimize signal integrity and reduce bit error rates.
Transceiver blocks also support dynamic reconfiguration, allowing you to change data rates and protocols on-the-fly.

Figure 1–1 shows the structure of the Cyclone IV GX transceiver channel.

Figure 1–1. Transceiver Channel for the Cyclone IV GX Device

For more information, refer to the Cyclone IV Transceivers Architecture chapter.

Hard IP for PCI Express (Cyclone IV GX Devices Only)

Cyclone IV GX devices incorporate a single hard IP block for ×1, ×2, or ×4 PCIe (PIPE) in each device. This hard IP block is a complete PCIe (PIPE) protocol solution that implements the PHY-MAC layer, Data Link Layer, and Transaction Layer functionality. The hard IP for the PCIe (PIPE) block supports root-port and end-point configurations. This pre-verified hard IP block reduces risk, design time, timing closure, and verification effort. You can configure the block with the Quartus II software's PCI Express Compiler, which guides you through the process step by step.

For more information, refer to the PCI Express Compiler User Guide.

Reference and Ordering Information

Figure 1–2 shows the ordering codes for Cyclone IV GX devices. Figure 1–3 shows the ordering codes for Cyclone IV E devices.

Figure 1–2. Packaging Ordering Information for the Cyclone IV GX Device
Figure 1–3. Packaging Ordering Information for the Cyclone IV E Device

Document Revision History

Table 1–10 lists the revision history for this chapter.

Table 1–10. Document Revision History

November 2011, v1.5:
■ Updated "Cyclone IV Device Family Features" section.
■ Updated Figure 1–2 and Figure 1–3.

December 2010, v1.4:
■ Updated for the Quartus II software version 10.1 release.
■ Added Cyclone IV E new device package information.
■ Updated Table 1–1, Table 1–2, Table 1–3, Table 1–5, and Table 1–6.
■ Updated Figure 1–3.
■ Minor text edits.

July 2010, v1.3:
■ Updated Table 1–2 to include F484 package information.

March 2010, v1.2:
■ Updated Table 1–3 and Table 1–6.
■ Updated Figure 1–3.
■ Minor text edits.

February 2010, v1.1:
■ Added Cyclone IV E devices in Table 1–1, Table 1–3, and Table 1–6 for the Quartus II software version 9.1 SP1 release.
■ Added the "Cyclone IV Device Family Speed Grades" and "Configuration" sections.
■ Added Figure 1–3 to include Cyclone IV E Device Packaging Ordering Information.
■ Updated Table 1–2, Table 1–4, and Table 1–5 for Cyclone IV GX devices.
■ Minor text edits.

November 2009, v1.0:
■ Initial release.
1. Each of these areas has developed a deep DSP technology, with its own algorithms, mathematics, and specialized techniques. This combination of breadth and depth makes it impossible for any one individual to master all of the DSP technology that has been developed.

Translation: Each of these research areas has developed DSP technology in depth on the basis of its own algorithms, mathematics, and specialized techniques, extending DSP in both breadth and depth; as a result, no one person can master all of the existing DSP technology.
2. The development of digital signal processing dates from the 1960's with the use of mainframe digital computers for number-crunching applications such as the Fast Fourier Transform (FFT), which allows the frequency spectrum of a signal to be computed rapidly.

Translation: Digital signal processing dates from the 1960s, when mainframe computers began to be used for computation-intensive applications such as the Fast Fourier Transform (FFT), which allows the frequency spectrum of a signal to be computed rapidly.

Note: In this sentence, "The development of digital signal processing" is the subject and "dates from" is the predicate, meaning to originate in a particular era. The relative clause introduced by "which" modifies FFT.
3. Without it, they would be lost in the technological world.

Translation: Without a basic background (experience) in circuit design, they would be left behind by the technological world.

4. Note that the acronym DSP can variously mean Digital Signal Processing, the term used for a wide range of techniques for processing signals digitally, or Digital Signal Processor, a specialized type of microprocessor chip.

Translation: Note that the acronym DSP has more than one meaning: it can stand for "digital signal processing," a widely used family of techniques for processing signals digitally, or for "digital signal processor," a specialized type of microprocessor chip.
[vivado] Clocking Wizard clock configuration

1. Structure: MMCM and PLL

mixed-mode clock manager (MMCM), phase-locked loop (PLL)

These two primitives have different architectures; the MMCM is more complex and offers more features.

The MMCM supports spread-spectrum clocking and differential outputs, and can generate up to 7 output clocks, while a PLL can generate at most 6.

The multiplication and division schemes of the two also differ.
2. Dynamic reconfiguration: Dynamic Reconfig

This allows the user to change the clock at run time through a control interface.

3. Configuration interface: AXI4-Lite and DRP

The control interface can be either an AXI bus or the vendor's DRP interface.

Choose according to the needs of your logic design.

dynamic reconfiguration port (DRP)

4. Other options

a. Phase Duty Cycle Config

Phase and duty cycle can also be made configurable, at the cost of roughly doubling the resource usage.

b. Write DRP registers

This amounts to using the AXI interface to control the DRP registers directly; its main advantage is that the interface logic can avoid consuming DSP resources.

However, some optional configuration items are missing, and the offset addresses differ.

For example, the reconfiguration of the master clock frequency at AXI offset 0x200 has no counterpart in the DRP map at 0x300.

The three configuration items for each clkout are the same in both cases.

Once the registers are configured, write 0x03 to the enable register to make the configuration take effect.

My requirements: dynamic configuration from the PS, frequency steps as fine as possible, and a variable duty cycle, while also keeping resource usage as low as possible.

So I chose: Dynamic Reconfig, AXI4-Lite, and Phase Duty Cycle Config.
Preliminary version; to appear in 3rd High Performance Computing Conference (HPCC-07), Houston, TX, Sept 26-28, 2007.

Dynamic System-wide Reconfiguration of Grid Deployments in Response to Intrusion Detections

Jonathan Rowanhill1, Glenn Wasson1, Zach Hill1, Jim Basney2, Yuliyan Kiryakov1, John Knight1, Anh Nguyen-Tuong1, Andrew Grimshaw1 and Marty Humphrey1
1 Dept. of Computer Science, University of Virginia, Charlottesville, VA 22094
2 National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign, IL, USA

Abstract. As Grids become increasingly relied upon as critical infrastructure, it is imperative to ensure the highly-available and secure day-to-day operation of the Grid infrastructure. The current approach for Grid management is generally to have geographically-distributed system administrators contact each other by phone or email to debug Grid behavior and subsequently modify or reconfigure the deployed Grid software. For security-related events such as the required patching of vulnerable Grid software, this ad hoc process can take too much time, is error-prone and tedious, and thus is unlikely to completely solve the problems. In this paper, we present the application of the ANDREA management system to control Grid service functionality in near-real-time at scales of thousands of services with minimal human involvement. We show how ANDREA can be used to better ensure the security of the Grid: in experiments using 11,394 Globus Toolkit v4 deployments we show the performance of ANDREA for three increasingly-sophisticated reactions to an intruder detection: shutting down the entire Grid; incrementally eliminating Grid service for different classes of users; and issuing and applying a patch to the vulnerability exploited by the attacker.
We believe that this work is an important first step toward automating the general day-to-day monitoring and reconfiguration of all aspects of Grid deployments.

1 Introduction

As Grids become increasingly relied upon as critical infrastructure, it is imperative to ensure the highly-available and secure day-to-day operation of the Grid infrastructure. However, as Grid system administrators and many Grid users know, the deployed Grid software remains quite challenging to debug and manage on a day-to-day basis: Grid services can fail for no apparent reason, Grid services can appear to be running but users are unable to actually utilize them, information services report seemingly incorrect information, new versions of Grid software can need to be deployed, etc. Whenever a Grid administrator at a site determines that some part of their local Grid services is not running according to plan, and it is not immediately solvable via some local action, Grid administrators must generally contact their peers at other sites via telephone or email to debug overall Grid behavior. Email and telephone are also heavily utilized to disseminate the availability of new versions of deployed Grid software, including those updates that contain important security patches. For such critical security-related events, this ad hoc process can take too much time, is error-prone and tedious, and thus is unlikely to properly remedy the situation.

(This material is based upon work supported by the National Science Foundation under Grant No. 0426972 and through TeraGrid resources provided by NCSA. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.)

The broad challenge addressed in this paper is: how can a large collection of Grid services be dynamically managed in near-real-time to provide an acceptable level of functionality without significant human involvement?
At the core of our approach is Willow [1], a large-scale feedback control architecture featuring sensors, actuators, and hierarchical control logic that runs side-by-side with the Grid and uniquely facilitates near-real-time monitoring, reconfiguration, and general management of the overall functionality of a Grid. Specifically, in this paper, we narrow our focus to the design and implementation of the integration of ANDREA [2] -- the "actuation" part of Willow -- to ensure the security of large-scale Globus Toolkit v4 deployments (GT4 [3]) in the presence of an intruder (essentially a person gaining unauthorized access to some service). We do not pursue ways in which to determine that a Grid intrusion has occurred; rather, we assume that one has occurred and subsequently need to holistically react to such a breach. In simulated experiments involving 11,394 Globus Toolkit v4 deployments, run over 768 real nodes, we show the application and performance of ANDREA for three increasingly-sophisticated reactions: shutting down the entire Grid; incrementally eliminating Grid service for different classes of users; and issuing and applying a patch to the vulnerability exploited by the attacker. We believe that this work is an important first step toward automating the general day-to-day monitoring and reconfiguration of all aspects of Grid deployments.

2 The ANDREA Management Framework

Willow [1] is a comprehensive approach to management in critical distributed applications. Willow achieves its goals using a combination of (a) fault avoidance, by disabling vulnerable network elements intentionally when a threat is detected or predicted; (b) fault elimination, by replacing system software elements when faults are discovered; and (c) fault tolerance, by reconfiguring the system if non-maskable damage occurs. The key is a reconfiguration mechanism combined with a general control structure in which network state is sensed, analyzed, and required changes actuated.
ANDREA is the "actuation" component of Willow. Because the focus of this paper is on the integration and use of ANDREA to control Globus installations in the context of intrusion detection, we will not discuss Willow any further (a complete integration of Willow with Grid mechanisms is the subject of continuing research). That is, we do not discuss the use of Willow to detect the intruder in question here; rather, we focus on the use of Willow (specifically ANDREA) to react and take corrective actions.

2.1 Distributed, Hierarchical Management Policy in ANDREA

ANDREA is a distributed, hierarchical collection of nodes that integrate at the lowest level with the controlled system, which is in this case the Globus-based Grid. The collection of ANDREA nodes exists in a loosely coupled configuration, in that the nodes are not aware of one another's names, locations, or transient characteristics. This is a key point for controlling truly large-scale systems, where the cost of any node maintaining a precise and up-to-date view of the remainder of the system is prohibitive (or the task is impossible). An ANDREA node advertises a set of attributes that describe important state of the application node it wraps. This is similar to a management information base (MIB) in a traditional management architecture.

ANDREA nodes interact with one another through delegation relationships. Each ANDREA node: (1) states a policy describing constraints on the attributes of other ANDREA nodes from which it will accept delegated tasks; and (2) describes constraints on the kinds of tasks it will accept. This is known as a delegate policy. Meanwhile, if an ANDREA node seeks to delegate a task, it states constraints on the attributes of possible delegates. This is called a delegator policy.
A delegated task is given to all and only those ANDREA nodes in the system for which, within a time window specified by the delegator: (1) the delegator's attributes and the task's attributes meet the requirements of the delegate's policy; and (2) the delegate's attributes meet the requirements of the delegator's policy. Details of the addressing language and underlying mechanism used by ANDREA, called Selective Notification, are provided in the work by Rowanhill et al. [4].

Once an ANDREA node has delegated a task to other ANDREA nodes, there exists a loosely coupled management relationship between the nodes, because the delegator need not know the names or locations of its delegates. It merely uses the "handle" of the management relationship to multicast a communication to its delegates. Likewise, a delegate need not know the name or location of its delegator, and it need not be aware of any other delegates. It can communicate with the delegator using the management relationship as a handle. In addition, temporal coupling between nodes is minimized: delegates can join a relationship at any time during the specified time window and receive all relevant communications in the correct order upon joining. This management relationship is particularly advantageous for users (or other software components) that are "injecting work" into the ANDREA system.

2.2 Hierarchical Workflow in ANDREA

Management is effected in ANDREA by defining workflows. A workflow is a partially ordered graph of task nodes, and a task can define a child workflow within it. A task with an internal workflow represents a composite action. A leaf task is one that contains no sub-workflow. A leaf task may be an action, a constraint, a query, or a periodic query. Each leaf task of a workflow states an intent, a token from an enumerated language known to all ANDREA nodes. A task's intent can represent any delegated entity.
A common semantic is that each intent represents a requirement on distributed system service state, defined as a constraint on system variables. Each workflow defines a reason, a token from a global enumerated language stating the purpose for the intents of the workflow. Conflicts are detected between the intents of tasks on the same blackboard, and resolved through preemptive scheduling based on the priority of reasons.

An application component can introduce management requirements into a system by defining a local workflow to its ANDREA service interface. This workflow is managed by the ANDREA system so that tasks and constraints are carried out in the right order and delegated to appropriate nodes. Each ANDREA node has a conventional hierarchical workflow and blackboard model.

It is impractical (and non-scalable) for any single ANDREA node to keep track of all nodes in the system and thus be able to precisely determine which nodes have or have not complied with the intended action of a given workflow. Instead, delegates in a workflow keep aggregate determinations, in the form of histogram state. When a delegated task is completed (based on this aggregate state), the delegator rescinds the delegation. The management relationship that had linked the delegator to the delegates is terminated and eliminated, and the nodes are thus again fully decoupled.

2.3 Programming ANDREA

ANDREA has a unique programming model in which system configuration tasks (which are part of the workflows) are represented as constraints. The set of managed resources to which a constraint applies is described by a set of properties of the resources. Each system administrator (or automated system administration component) can issue "constraint tasks", i.e., system configuration workloads that put the system in a state with certain properties. ANDREA will configure the system and attempt to maintain the given constraints. Conflicting constraints are typically resolved by a priority scheme among administrators.
For example, suppose an administrator issued the constraint that no low-priority jobs should be run on a set of Grid nodes. ANDREA will reconfigure the selected nodes, and then continue to make sure this constraint is upheld. So, if another administrator issued a command that no medium-priority jobs should be run on an overlapping set of nodes, ANDREA can configure the system to maintain both constraints (i.e., no low- or medium-priority jobs). However, if a command to allow low-priority jobs is received, there is a conflict, since both constraints cannot be simultaneously met.

3 Using ANDREA to Reconfigure the Grid in Response to Intrusion Detections

While we believe ANDREA can be used for many aspects of day-to-day Grid management, our focus in this effort is to use ANDREA to reconfigure the Grid in response to intrusion detections. We generically describe the scenario as follows: an operator has manually discovered a rootkit on a node in the Grid and relays this information to the Grid control center, which uses ANDREA to put the system into a number of different configurations on a per-node basis (in the future, we will use the sensing capabilities of Willow to discover such rootkits and automatically engage ANDREA, with human confirmations as warranted):

Normal – nodes are accessible to all user classes.
Shutdown – no users are allowed to access nodes (since they are considered infected).
No low priority – no low-priority users are allowed access to nodes. Eliminating this class of resource consumption reduces the dynamics of the Grid so that it is easier to assess the extent of the intrusion, without shutting down the entire Grid.
Only high priority – no low- or medium-priority users are allowed access to nodes. If "no low priority" is not sufficient, this configuration further simplifies the Grid environment while still offering some level of service.
Patch nodes – in the case that the rootkit is exploiting a known vulnerability, Grid system administrators can use ANDREA to take nodes identified as vulnerable offline, patch them, and bring them back online. This will cause a percentage of the Grid nodes to be unavailable for a time, but will allow other nodes (ones without the vulnerability that allowed the break-in) to remain operational.

While this may seem like a simple scenario (and a simple "shutdown" solution), it is an ideal illustrator of the tasks for which ANDREA was designed. Every node in the system must be informed of the vulnerability and (as we shall see) ANDREA can provide a probabilistic description of "the Grid's" response (i.e., "the Grid is now 90% patched"). While an equivalent effect of avoiding the vulnerable nodes may be achieved via the manipulation of scheduling policies, either in a single global queue or via coordination of scheduling policies on multiple queues in the Grid, we believe that this is not a reasonable approach, for a number of reasons. First, it is extremely unlikely that there is a single queue, and coordination of multiple queues requires sites to give up site autonomy, which is unlikely (in contrast, as will be discussed in Section 5, ANDREA can be configured to respect site autonomy). Second, we require a mechanism affecting all Grid access, not just access to a job submission queue. Third, ANDREA is part of the larger Willow functionality, which will in the longer term automate some of the sensing and control-rule aspects implied here.
Solely manipulating queuing policies offers no automated solution for integrating the discovery of a rootkit and an appropriate reaction.

In the remainder of this section, in the context of intrusion detection and reaction, we describe how to deploy ANDREA for the Grid (Section 3.1), the Grid-specific ANDREA workflows (Section 3.2), and the actual mechanisms executed on the controlled nodes to reconfigure the Globus Toolkit v4 Grid (Section 3.3).

3.1 Deployment of ANDREA for the Grid

An ANDREA deployment consists of a set of ANDREA nodes, which are processes that can receive configuration messages, enforce constraints, and report status. It is up to the ANDREA system deployer to determine how many managed resources are controlled by an ANDREA node, but typical deployments have one ANDREA node per machine. In the experiments reported in this paper, each deployment of GT4 on a machine was managed by a single ANDREA node. Each ANDREA node uses an XML deployment descriptor for the various supported system configurations and the possible conflicts between those configurations. (The mechanism by which ANDREA maps these labels into actual operations on the Grid is discussed in Section 3.3.) From this XML description of the supported configurations, ANDREA can infer that "patch nodes" mode conflicts with the "shutdown", "no low priority", and "only high priority" modes, since they are subsets of "normal" mode. Configuration tasks, e.g., "go into normal mode", are issued with an attached priority that is used to resolve conflicts. In this way, configuration tasks can be dynamically prioritized. The current security model in the ANDREA implementation still requires receiver-side authentication and authorization checks to fully ensure that a received command is legitimate.

3.2 Using ANDREA Workflows for Configuring the Grid

System administrators (and potentially automated entities) inject workflows into their ANDREA nodes to manage and configure the Grid.
Some of the tasks in a given workflow are delegated from the controller to all ANDREA nodes that have particular attributes and that accept the controller's attributes, as described previously. The result is the arrival of a task at a set of ANDREA nodes managing Grid components, with the members of the set determined by the policies stated at controllers and managers. Once an ANDREA node receives a task, it carries it out by actuating the managed Grid components.

The status of the action, constraint, query, or periodic query is periodically sent back to the task's delegator, on the order of seconds. ANDREA's distributed algorithms aggregate this status information from all other delegates of the task. The command issuer can use these status updates to make determinations about the overall system state based on the percentage of nodes that have responded. ANDREA includes mechanisms to map task status histograms to a representation of the task's state at the commander. For example, the Grid may be considered to be in a certain configuration when 90% of the nodes have replied that they are in that configuration, as 100% responses may be unlikely or unnecessary in large systems, depending on the situation.

Once a task is received by an ANDREA node, it must decide what action to take. The actual action an ANDREA node takes depends on what other active tasks are already being enforced. For example, if an ANDREA node is enforcing a "normal" constraint and a task arrives to enforce the "patch nodes" constraint, the net effect will depend on the priority of the "patch" command. If it has a higher priority, it will be enforced in place of the "normal" configuration. However, if it has a lower priority, it will not be discarded. Instead, the "patch" command will remain on the ANDREA node's blackboard.
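A minimal sketch of this blackboard behavior (not ANDREA's implementation; mode names and conflict pairs are illustrative): every received task stays on the board, and the highest-priority mutually non-conflicting tasks are the ones actually enforced.

```python
# Illustrative conflict pairs between configuration modes.
CONFLICTS = {frozenset(p) for p in [
    ("normal", "patch-nodes"),
    ("normal", "shutdown"),
    ("patch-nodes", "shutdown"),
]}

class Blackboard:
    def __init__(self):
        self.tasks = {}  # mode -> priority; lower-priority tasks are kept

    def post(self, mode, priority):
        self.tasks[mode] = priority

    def withdraw(self, mode):
        self.tasks.pop(mode, None)

    def enforced(self):
        chosen = []
        # visit posted tasks from highest to lowest priority
        for mode in sorted(self.tasks, key=self.tasks.get, reverse=True):
            if all(frozenset((mode, c)) not in CONFLICTS for c in chosen):
                chosen.append(mode)
        return chosen
```

For example, posting "normal" at priority 2 and "patch-nodes" at priority 1 enforces only "normal"; when the issuer later withdraws "normal", the still-posted "patch-nodes" task becomes the enforced one.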
If the "normal" configuration is withdrawn by its issuer, then the highest-priority commands still on the blackboard that do not conflict with one another will be the ones that are enforced.

3.3 Integration of ANDREA with GT4 (A2GT4)

Although each local ANDREA manager knows which constraints apply to the resources it manages, it requires a resource- (or application-) specific mechanism to enforce those constraints. In our system, enforcement is performed by a system component called ANDREA-to-GT4 (A2GT4). A2GT4 is the mechanism by which ANDREA can alter the behavior of Globus Toolkit v4; it also includes an actuator, plugged into each ANDREA node's blackboard, defining how and when to run the mechanisms based on the tasks on the blackboard and their scheduling.

We developed A2GT4 based on particular scenarios. Each new scenario is additive to the existing A2GT4 mechanism (note that ANDREA itself handles conflicts between concurrent re-configurations). With regard to the specific intrusion-detection scenario, this means that we had to map each of the five identified states ("normal", "shutdown", "no low priority", "only high priority", and "patch nodes") into a particular configuration of the GT4 services. In these experiments, the A2GT4 mechanisms manipulate authorization mechanisms to control the overall performance of the GT4 services. We chose this approach because the scripts were easy to debug and to confirm correct, and it does not preclude the more sophisticated approaches necessary for different scenarios (which are under development).

4 Experiments

To evaluate the performance of our Grid management methodology, we deployed 15 instances of the Globus Toolkit, each managed by a local ANDREA node, on each of 768 nodes of the NCSA IA-64 TeraGrid cluster. Each cluster node had two 1.3-1.5 GHz Intel Itanium 2 processors and 4-12 GB of memory, with a switched Gigabit Ethernet interconnect.
Excluding approximately 1.1% of the instances that failed to start up correctly for various reasons, this single NCSA cluster allowed us to simulate a much larger Grid of 11,394 ANDREA nodes and Globus Toolkit instances. (Note that it was a "simulation" only in that it was not an actual wide-area Grid deployment; we truly had 11,394 independently-configurable Globus deployments in our experiments.) 101 of the computers each ran a Selective Notification node, forming an overlay network in the form of a tree with a branching factor of 100. Each of the ANDREA nodes was connected as a leaf to this overlay network, such that each lower-level Selective Notification dispatcher supported roughly 115 ANDREA nodes. The ANDREA controller ran on a machine at the University of Virginia.

We performed three experiments using the attacker/intruder scenario introduced in Section 3, with the normal, no low priority, only high priority, and patch nodes configurations. For each experiment, we issued configuration commands from the ANDREA controller at UVA and logged the results at UVA and NCSA. At UVA, ANDREA logged the times at which commands were issued to and acknowledged by the ANDREA nodes at NCSA. At NCSA, the A2GT4 scripts logged each state change to the cluster's file system. Over 200,000 events were recorded.

4.1 Experiment 1: Shutdown

The first experiment evaluated the ability of the ANDREA manager to shut down all Grid nodes in the event of an attack or break-in. An ANDREA controller issues the command for all nodes to enter the shutdown state, prompting all ANDREA nodes to run the A2GT4 shutdown script, which denies access to all Globus Toolkit services on that node. In Figure 1, we see that within seconds of issuance of the Shutdown command from UVA, the A2GT4 shutdown scripts begin running at NCSA, with all nodes shut down 26 seconds after the command was issued.
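The confirmation behavior here relies on ANDREA's aggregated status replies; a minimal sketch of the threshold rule from Section 3.2 (the function and argument names are hypothetical, not ANDREA's API):

```python
# Decide whether "the grid" can be considered to have reached a
# target configuration, given per-node status replies and a
# completion threshold (e.g. 90%).
def grid_reached(replies, total_nodes, target, threshold=0.90):
    """replies: iterable of (node_id, reported_state) status updates."""
    confirmed = sum(1 for _, state in replies if state == target)
    fraction = confirmed / total_nodes
    return fraction, fraction >= threshold
```

With 10,255 of 11,394 nodes reporting "shutdown", the grid would already be considered shut down at a 90% threshold, even before the last stragglers reply.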
ANDREA confirms the state change of most nodes after 16 seconds, and all 11,394 nodes are confirmed shut down after 26 seconds.

4.2 Experiment 2: Shutting Out User Groups/Classes to Enable Better Damage Assessment

The second experiment evaluates the ability of ANDREA state changes to control Grid jobs. We periodically submitted batches of Globus WS-GRAM jobs using three different user credentials, where the users and their jobs are prioritized as low, medium, and high. As the Grid comes under attack, an ANDREA controller decides to limit access to medium and high priority users only, to reduce the number of accounts to monitor while continuing to serve higher priority customers. As the intrusion/attack continues, a separate ANDREA controller decides to limit access to only high priority users, to further lock down the system.

Figure 2 shows the number of high, medium, and low priority jobs running during the experiment. The number of running jobs follows a cyclical pattern, as a new batch of jobs is submitted while the previous batch is completing. The first ANDREA controller issued the command to deny access to low priority jobs 80 seconds into the experiment. Each ANDREA node ran the A2GT4 NoLow script, killing all running low priority jobs and updating the Globus authorization mechanism list to deny access to low priority users. 24 seconds later, all low priority jobs had stopped running and ANDREA acknowledged the change to the no low priority state. While the low priority user continued to attempt job submissions, they were all denied due to the access control change. At 275 seconds into the experiment, the ANDREA manager issued the command to deny access to both low and medium priority jobs. ANDREA ran the A2GT4 HighOnly script, killing all running medium priority jobs and updating the Globus access control list to allow access to only high priority users.

Figure 1: Testing Basic ANDREA Control of GT4 Grid Nodes
Figure 2: Incrementally Eliminating Classes of Users
16 seconds later, all medium priority jobs had stopped running and ANDREA acknowledged the change to the only high priority state. These results demonstrate how ANDREA efficiently propagated state changes initiated by the manager at UVA across the 11,394 nodes at NCSA, and how the A2GT4 scripts effectively reconfigured the Grid environment on the nodes.

4.3 Experiment 3: Patching Vulnerable Nodes

The first two experiments evaluated the functionality and performance when the ANDREA manager issued constraints defining new states that the nodes should maintain, such as shutdown, no low priority, or only high priority. The third experiment evaluates ANDREA actions, which are commands that are executed once per node, and also demonstrates the use of ANDREA's Selective Notification.

We artificially configured 20% of the ANDREA nodes to report their operating system as Red Hat 7.3, while the other nodes report other operating systems. In this scenario, we assume that the operator determines that the specific compromise arose as a result of a vulnerability in Red Hat 7.3, and the operator wants to use ANDREA to issue an instruction to only those vulnerable nodes to take themselves offline, apply the patch, and then re-register with the Grid (i.e., come back online). Figure 3 illustrates that it took about 7 seconds for all 20% of the nodes (approx. 2,250 nodes) to initiate the patch command within A2GT4. We simulated that the operation of retrieving the patch (the patch itself is not contained in the control signals provided by ANDREA, although a URL could be included), installing the patch, and then rebooting (which subsequently brings the Grid node back online) took 3 minutes in its entirety. Hence, these patched Grid nodes began appearing back online at 180 seconds into the experiment; all nodes were back online and reported to ANDREA within 13 seconds after that.
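Targeting only the nodes that publish a given property can be sketched as follows; the attribute model and function names are hypothetical, not ANDREA's Selective Notification API:

```python
# Nodes publish attributes; a command addressed by property is
# delivered only to nodes whose published properties match.
def select_targets(nodes, **required):
    """Return ids of nodes whose published properties all match."""
    return [n["id"] for n in nodes
            if all(n.get(k) == v for k, v in required.items())]

# Every fifth node reports the vulnerable OS, i.e. ~20% of the grid.
nodes = [{"id": i, "os": "RedHat 7.3" if i % 5 == 0 else "Other"}
         for i in range(11394)]
targets = select_targets(nodes, os="RedHat 7.3")
```

In this sketch roughly 20% of the 11,394 nodes match and would receive the patch command; all other nodes never see it.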
While not directly shown in Figure 3, ANDREA's Selective Notification facilitated the "addressing" of the patch command to only those nodes that published the property of being Red Hat 7.3; the actual destination (IP address, port) of the relevant nodes was handled by ANDREA. Only simulated Red Hat 7.3 nodes received such a command.

Figure 3: Applying a Patch to Only Red Hat 7.3 Nodes

5 Discussion

We believe that our experiments uniquely show a novel approach to the semi-real-time management of large-scale Grids. However, some discussion of the results is warranted. First, there is an obvious question of the baseline against which these results can be compared. Unfortunately, we are aware of no Grid systems that actuate changes on system components in an automated manner at the scale of which ANDREA is capable. Intuitively, being able to issue re-configuration commands (such as "shutdown" or any other command) to over 11,000 receivers in ~25 s seems not unreasonable for many application domains. (While our experiments take place in a simulated environment with millisecond latency between nodes, we believe that in a real wide-area deployment network propagation delays would not have an overwhelming effect on these durations, as the dominant effect stems from the algorithms and the topology of the ANDREA control structure, hierarchy, and fan-out used in these experiments.)

Second, there is the question of the cost of ANDREA's ability to scale. In other words, ANDREA's ability to reach large numbers of nodes, addressed by properties, and to aggregate responses from those nodes may make it inappropriate for smaller-scale communication tasks. We are not claiming that ANDREA should be used as a replacement for current Grid communication mechanisms. Instead, we wish to use ANDREA for the task of managing large-scale distributed systems such as Grids, and ANDREA's performance should be viewed in this context.
So, while ANDREA would undoubtedly be slow when compared to a single Grid communication (these days typically a web service call), its performance, again intuitively, seems reasonable when compared to 11,394 such calls.

In general, we recognize that there are clearly a number of issues that must be addressed before our approach is ready for an operational Grid like the TeraGrid:

- The scope of the A2GT4 mechanism must be expanded to all functionality of the Grid node beyond those capabilities provided by GT4 (e.g., SSHD).
- A more comprehensive mechanism to put the managed node into each particular state must be developed. For example, we realize that the "shutdown" mechanism we implemented is of negligible value in the unlikely event that a vulnerability in the GRAM service allowed an attacker to bypass the GT4 authorization mechanism.
- A comprehensive suite of use-cases that cover the majority of day-to-day operations must be developed and reflected in A2GT4 and the ANDREA workflows. For example, we are currently investigating a scenario in which a high-importance operation that relies on the Grid (such as tornado prediction) is not meeting its soft deadlines. The corrective actions to take, particularly at the same time as an event such as an ANDREA reaction to an intrusion detection, are the subject of on-going research in this project.
- The security of ANDREA must be further analyzed and improved. Certain operations in ANDREA are not mutually authenticated, instead relying on a trust model that in some cases oversimplifies the actual requirements of a TeraGrid-like environment. In addition to analyzing and correcting vulnerabilities within ANDREA itself, we would like to leverage the authentication infrastructure of the Grid itself, namely GSI [16].
- The entire functionality of Willow (including the sensing and control logic) must be applied to a Grid setting, leveraging existing solutions.
For example, regarding the sensing capability/requirement of Willow, we believe that existing tools such as INCA [5] could be leveraged as the foundation for such operations.