ATCA Training Material
ATCA Fundamentals: Table of Contents

Part One: Fundamentals
  Chapter 1: Modular Communications Platform (MCP)
  Chapter 2: ATCA
    2.1 Introduction to ATCA
    2.2 ATCA technical highlights
    2.3 AMC
  Chapter 3: Advanced Switching for the PCI Express architecture
  Chapter 4: Carrier-grade operating systems
  Chapter 5: Service Availability Forum middleware interfaces
  Chapter 6: High-speed serial interconnect buses
  Chapter 7: Cooling technology
Part Two: Structure
  Chapter 1: Fabric interface
  Chapter 2: Shelf
  Chapter 3: Backplane
Part Three: Power
Part Four: Cooling
Part Five: System management
Part Six: System interconnect
Part Seven: Transport protocols
  Chapter 1: Advanced Switching Interconnect (ASI)
  Chapter 2: Rapid Fabric

Part One: Fundamentals

Chapter 1: Modular Communications Platform (MCP)

The Modular Communications Platform offers unmatched advantages: MCP enables telecom equipment manufacturers (TEMs) to select best-in-class commercial off-the-shelf (COTS) products and integrate them into platform solutions.
This approach shortens overall development time.
TEMs can instead invest their resources in the areas that let service providers stand out and capture the most value, enabling them to build network infrastructure and roll out new services at lower cost.
Delivering these advantages requires a truly standardized, industry-wide solution architecture that: 1) enjoys broad industry support, from network equipment providers to solution vendors; 2) offers modular commercial solutions with excellent interoperability and reusability; 3) is designed from the ground up to meet the needs of the telecom industry; and 4) brings the benefits of open industry standards to hardware platforms, interconnects, backplane switch fabrics, platform management, carrier-grade operating systems, and high-availability middleware.
Figure 1: Open standards of the Modular Communications Platform

The MCP model is built on major open, industry-wide standards:
✧ The Advanced Telecom Computing Architecture (AdvancedTCA) is an open industry specification designed to meet the requirements of next-generation carrier-grade communications equipment.
AdvancedTCA represents the largest specification effort in PICMG's history.
✧ PICMG's extension work for ATCA includes the Advanced Mezzanine Card (AMC) standard.
An indication of the scale of this effort is the amount of work and knowledge that have gone into the development of the full specification, which now exceeds 300 pages in length for the core 3.0. The collective experience of nearly 100 companies from all parts of the telecommunications value chain is being applied to this development in a spirit of close cooperation for mutual benefit.

Among the architecture's stated goals is support for appropriate scalability of system performance and capacity. The architecture is tailored to meet the needs of a rapidly changing communications network infrastructure. The performance, environment, and surrounding regulatory requirements of application types sometimes referred to as "Central Office," "carrier-grade," or "service provider" serve as a framework for the definition of the architecture. While the specification is founded on the requirements of the communications infrastructure, it is extensible to a variety of applications, environments, and geographies where highly available, highly scalable, cost-effective, open-architecture modular solutions are required.

Mechanical overview

Mechanical packaging for AdvancedTCA systems meets the functional needs of a converged telecom-compute platform and supports the physical needs of central office and data center environments (see Figure 1). The basic elements consist of front plug-in boards, a backplane, and rear plug-in transition modules packaged in a rackmount shelf. The board size is 8U x 280 mm x 6 HP, large enough to allow a high level of integration and functionality, provide sufficient front panel space for I/O connectors and mezzanines, give clearance for taller primary-side and secondary-side components, and support higher power and cooling levels. New features include a high-density electrical interconnect architecture, a direct-mate rear transition module concept, and a comprehensive alignment/keying scheme. The intended infrastructure environment for the shelf is a 19-inch rackmount frame, a 23-inch rackmount frame, or a 600 mm x 600 mm ETSI frame. At the board, shelf, and frame level, careful consideration has been given to high-density, footprint-efficient packaging, front/rear service issues, adequate cabling space, airflow, power entry, robustness, and reliability.

Connectors

Connectors are grouped into functional Zones as shown in Figure 1. Each Zone has definitions for connector types; the specific pin assignments and usage are described in the relevant functional section. Zone 3 (Rear Panel I/O) is specifically reserved for optional rear panel I/O connections and is intentionally flexible. These connectors may be of any type, although no electrical connections shall be made or completed until the board has been inserted and powered up. Cable bulkhead connectors may be used.
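As a quick aid to the dimensions quoted above, the sketch below converts the rack-unit (U) and horizontal-pitch (HP) figures into millimetres. The usable frame width and resulting slot count are illustrative assumptions, not values taken from this text.

```python
# Unit conversions for the AdvancedTCA board envelope quoted above.
# 1U = 44.45 mm (rack unit); 1 HP = 5.08 mm (horizontal pitch).
U_MM = 44.45
HP_MM = 5.08

board_height_mm = 8 * U_MM      # 8U front board height -> 355.6 mm
board_depth_mm = 280.0          # board depth, given directly in mm
slot_pitch_mm = 6 * HP_MM       # 6 HP slot pitch -> 30.48 mm

# Illustrative assumption: how many 6 HP slots fit a given usable width.
def slots_that_fit(usable_width_mm: float) -> int:
    return int(usable_width_mm // slot_pitch_mm)

print(f"board: {board_height_mm:.1f} mm high x {board_depth_mm:.0f} mm deep")
print(f"slot pitch: {slot_pitch_mm:.2f} mm")
# ~450 mm usable width (hypothetical 19-inch frame opening) -> 14 slots
print(f"slots in ~450 mm opening: {slots_that_fit(450)}")
```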
System management

The system management architecture provides three classes of services:
· Low-level hardware management services
· High-speed management services based on the TCP/IP protocol suite
· In-band application management
The first two services are described in this specification, while the in-band application-level service is left to the system implementer for definition. The purpose of the described system management services is to assure proper system health and operation of the low-level hardware and boards. The system management system is designed to watch over the basic health of the system, report anomalies, and take corrective action when needed. The system is separated into several major components:
· Distributed management processors, which manage and monitor the operation and health of each FRU in the system
· The Intelligent Platform Management Interface (IPMI), which provides communications, management, and control among the distributed managers and to an external overall system manager
· A higher-level, high-speed service for boards that need TCP/IP-based management services such as remote booting, SNMP management, remote disk services, and other IP-based services
System management allows operation of systems where all components are operated by a single owner, as well as those containing components operated by multiple operators. This latter form of operation is called multi-tenancy. IPMB-0 is a logical bus comprising the aggregation of two physical IPMBs, allowing for redundant/fallback operation.

Shelf Manager

The Shelf Manager is responsible for managing the power, cooling, and interconnects associated with the FRUs in an AdvancedTCA shelf. The Shelf Manager also routes messages between the System Manager Interface and IPMB-0, provides interfaces to system repositories, and responds to event messages.

ShMC

Shelf Management Controller: the physical device responsible for communicating with the IPM Devices over IPMB-0 on behalf of a Shelf Manager.

System Manager

A System Manager is the highest-level management entity referenced in this specification, responsible for managing one or more systems, each comprising one or more shelves. It is a logical entity introduced for purposes of explanation only; its functionality is outside the scope of this specification.
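To make the IPMB-0 messaging concrete, here is a minimal sketch of how a raw IPMB request frame is assembled, following the standard IPMI/IPMB framing (responder address, netFn/LUN byte, two 2's-complement checksums). The controller addresses used here are hypothetical, and the Get Device ID command is chosen only as a common illustration; neither is taken from this text.

```python
# Minimal sketch: building a raw IPMB request frame as used on IPMB-0.
# Frame layout (IPMI/IPMB): rsSA | netFn<<2|rsLUN | chk1 | rqSA |
#                           rqSeq<<2|rqLUN | cmd | data... | chk2
def checksum(data: bytes) -> int:
    """2's-complement checksum: the covered bytes plus the checksum
    itself must sum to zero modulo 256."""
    return (-sum(data)) & 0xFF

def ipmb_request(rs_sa: int, rq_sa: int, netfn: int, cmd: int,
                 rq_seq: int = 0, data: bytes = b"") -> bytes:
    header = bytes([rs_sa, (netfn << 2) | 0])      # responder addr, netFn/LUN 0
    body = bytes([rq_sa, (rq_seq << 2) | 0, cmd]) + data
    return header + bytes([checksum(header)]) + body + bytes([checksum(body)])

# Illustrative: Get Device ID (App netFn 0x06, cmd 0x01) sent from a
# shelf manager at hypothetical address 0x20 to a controller at 0x82.
frame = ipmb_request(rs_sa=0x82, rq_sa=0x20, netfn=0x06, cmd=0x01)
print(frame.hex(" "))
```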
Power distribution

Dual, redundant -48 VDC feeds are provided to each frame from the battery banks. A single signal-conditioning panel, usually mounted at the top of the frame, provides filtering to minimize radiated and conducted noise and voltage ripple. The two primary feeds are typically split into a number of sub-feeds but remain electrically isolated. These sub-feeds are then also fused and filtered to prevent downstream shorts and malfunctions from propagating beyond a single shelf or sub-shelf. Shelves requiring modest power may be fed by a single pair of feeds, or by multiple pairs of feeds to keep the current per feed to a modest level. In cases where multiple feeds are used, the backplane is segmented to maintain isolation between all feeds (see Figure 3).

Thermal

The specification addresses the thermal environmental conditions specified by NEBS and ETSI, fan failure modes, air filter needs, and airflow directions. Simple design recommendations are given as examples. While this specification provides guidance, the system integrator has ultimate responsibility to ensure that all components meet the thermal requirements of the system. This guidance may be revised in the future as cooling technology evolves. The thermal guidelines in this specification apply primarily to air cooling. Other cooling methods are permissible but are not covered in this specification. Boards, shelves, and frames may be cooled either by natural convection, without the assistance of fans, or by forced convection with fans; the choice is left to end-user requirements. Dissipation values are also allowed to account for advancements in cooling technology or other thermal solutions such as liquid cooling.
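A back-of-the-envelope sketch of how the power-distribution and cooling constraints above interact: it sizes the worst-case -48 VDC feed current and the forced-air flow needed to carry away the dissipated heat. The shelf power, number of feed pairs, and allowed air temperature rise are made-up assumptions, not figures from this text.

```python
# Hypothetical power/cooling budget for an AdvancedTCA shelf.
# All numeric values are illustrative assumptions, not spec figures.
shelf_power_w = 3000.0          # assumed total shelf dissipation
feed_voltage_v = 48.0           # nominal -48 VDC feed (magnitude)
feed_pairs = 2                  # assumed number of redundant A/B feed pairs

# Each feed of a redundant A/B pair must carry the full load of its
# backplane segment if its partner fails, so size for that case.
segment_power_w = shelf_power_w / feed_pairs
feed_current_a = segment_power_w / feed_voltage_v
print(f"worst-case current per feed: {feed_current_a:.1f} A")  # ~31.3 A

# Forced-air cooling: volume flow = P / (rho * cp * dT) for air.
rho_air = 1.2                   # kg/m^3, air density near sea level
cp_air = 1005.0                 # J/(kg*K), specific heat of air
delta_t_k = 10.0                # assumed allowed inlet-to-outlet air rise
flow_m3_s = shelf_power_w / (rho_air * cp_air * delta_t_k)
flow_cfm = flow_m3_s * 2118.88  # 1 m^3/s = 2118.88 CFM
print(f"required airflow: {flow_m3_s:.3f} m^3/s (~{flow_cfm:.0f} CFM)")
```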
Data transport

The PICMG 3.0 Data Transport framework comprises the physical Zone 2 connector, the mapping of signals onto that connector, and the routing of those signals between boards across the backplane. The performance headroom in the connector will allow future interconnect technologies with higher signal rates to be used within the framework, and the generic signal mapping across the backplane supports a variety of system fabric topologies for connecting boards together. The Data Transport zone supports four separate interfaces providing connectivity along the backplane and between boards, up to a maximum of 16 system slots (see Figure 5):
1. Base Interface
2. Fabric Interface
3. Update Port Interface
4. Telephony Synchronization Clock Interface
These interfaces are mapped across a connector array comprising up to five ZD connectors. Link capacities between boards may vary from tens of Mbits/sec up to 10 Gbits/sec using commonly available technology. Board and backplane compatibility guidelines are defined in the PICMG 3.0 base specification, including the mapping of 10/100/1000BASE-T Ethernet onto the Base Interface. Board and backplane compatibility is also governed by the electronic keying mechanism defined for PICMG 3.0 systems. Subsidiary specifications will be developed to define how to overlay a specific fabric interconnect technology onto the PICMG 3.0 Fabric Interface physical framework, to ensure compatibility of boards with common interconnects. PICMG 3.1, for example, defines how to map Ethernet connections to the PICMG 3.0 Fabric Interface.

Electronic keying uses the system information provided by boards and backplanes to ensure that compatible link technologies exist between any two system boards before the Data Transport interface is enabled for any links (Channels) within the backplane, as illustrated in the sketch below.

The ability to deploy new interconnect technologies on the PICMG 3.0 base specification over time will be limited by the signaling capacity of the defined connector, the backplane substrate, and the number of pins available for board-to-board connectivity. The data transport connector zone defined in PICMG 3.0 establishes the mechanical and electrical boundaries within which the board-to-board interconnect definition must fit.
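As an illustration of the electronic keying idea described above, the following sketch matches the link technologies two boards advertise per channel and enables only the channels where both ends agree. The simple (channel, link type) tuple used as a "link descriptor" is an invented simplification for this example, not the FRU record encoding actually defined by PICMG 3.0.

```python
# Simplified sketch of electronic keying: enable a backplane channel
# only when both endpoint boards advertise a compatible link type.
# The tuple-based "link descriptor" is an invented simplification,
# not the actual PICMG 3.0 FRU record format.
from typing import Dict, List, Tuple

LinkDescriptor = Tuple[int, str]  # (channel number, link technology)

def enabled_channels(board_a: List[LinkDescriptor],
                     board_b: List[LinkDescriptor]) -> Dict[int, str]:
    """Return the channels on which both boards advertise the same
    link technology; all other channels stay disabled."""
    a = dict(board_a)
    b = dict(board_b)
    return {ch: tech for ch, tech in a.items() if b.get(ch) == tech}

# Hypothetical boards: a hub advertising Ethernet on channels 1-2 and a
# node advertising Ethernet on channel 1 but InfiniBand on channel 2.
hub = [(1, "1000BASE-BX Ethernet"), (2, "1000BASE-BX Ethernet")]
node = [(1, "1000BASE-BX Ethernet"), (2, "InfiniBand 1x")]

print(enabled_channels(hub, node))  # only channel 1 is enabled
```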
The subsidiary link specifications

PICMG 3.1

The PICMG 3.1 specification will define all elements necessary for interoperability between multi-vendor PICMG 3.1 products. This will include, but is not limited to, specific bit rates, bit-rate negotiation, pin mapping, physical and/or logical address mapping, specific backplane topologies, mechanical/electronic keying, hot swap, power-up/initialization, fabric-specific system management, signal integrity validation, low-level fault detection and recovery/reporting, and options for redundant and/or multi-fabric architectures. PICMG 3.1 will take advantage of the core PICMG 3.0 specification for its basic mechanical, thermal, electrical, power, connectivity, and system management requirements, and will co-exist with the other PICMG 3.0 data link specifications.

PICMG 3.2

The PICMG 3.2 specification will use InfiniBand physical layers and protocols in a PICMG 3.0-compliant backplane to interconnect a number of circuit boards in a shelf. Several different backplane interconnection topologies are available, including dual star, quad star, and full mesh. (For a sense of scale, a full mesh of 16 slots requires 16 x 15 / 2 = 120 point-to-point channels, whereas a star topology needs only one channel from each node slot to each hub.) InfiniBand signaling rates of 2.5 Gbits/sec (and optionally up to 10 Gbits/sec) are supported on each link. PICMG 3.2 will define how InfiniBand transport is mapped onto the PICMG 3.0 base. It will specify the link physical layers, protocols, and protocol mappings needed to implement multi-vendor, interoperable systems. Specification of addressing schemes, virtual lane mappings, initialization, keying, hot swap support, and hardware management will be included.

PICMG 3.3: StarFabric

The proposed StarFabric interconnect specification will define a redundant, point-to-point, high-speed serial interconnect within a PICMG 3.0 chassis, providing TDM, control, cell, and packet connectivity among all slots over a single interconnect. The specification will enable system development with the StarFabric 2.5 Gbit/sec and 10 Gbit/sec physical layers. The StarFabric interconnect addresses the issues faced by next-generation communications equipment vendors who want to use open-standard technology: scalability, high availability, quality of service, and cost. It is compatible with existing PCI software, drivers, and operating systems, so adopters can preserve existing investments while solving next-generation challenges. The StarFabric interconnect specification will also be applicable to distributed computing systems. With its memory-mapped, load-store programming model, StarFabric provides both efficient, loosely coupled processor-to-processor communication and processor-to-dumb-I/O communication at multi-gigabit rates.

What happens next?

The AdvancedTCA, or PICMG 3, family of specifications completes the transition to switched serial interconnects, leaving the PCI bus legacy behind and providing a more suitable platform for high-performance compute and I/O boards. While the precise timeframe for completion of PICMG 3.0 is difficult to project, the major events along the way are well defined by PICMG's Policies and Procedures for Specification Development:
· A draft 0.8 specification will be presented to the currently constituted subcommittee for final review
· A Call for Participation in a formal Final Subcommittee Ballot on the 0.9 specification will be sent to all Executive and Associate Members
· The specification, as modified in the course of the Final Subcommittee Ballot, will be presented to the PICMG Executive Membership for adoption
This process could be completed by the end of calendar year 2002 if no major changes are forthcoming. Two of the three subsidiary specifications, those for switched Ethernet and InfiniBand, are developing nearly in parallel and should be ready at about the same time as the base specification. The mapping of StarFabric onto the AdvancedTCA platform is proceeding in earnest now that PICMG 2.17 has been completed and adopted by the PICMG Executive Membership. It now appears that the mapping of PCI Express onto the AdvancedTCA architecture will be defined with this specification as well; it is likely to be completed shortly after the base specification. There is also considerable interest in developing a new advanced mezzanine architecture, suitable for use on AdvancedTCA carriers and standalone, supporting many of the same interconnects. The sponsor community for this specification is still collecting requirements and refining its preliminary statement of work.