The Different Technologies for Cooling Data Centers

Revision 2

by Tony Evans

White Paper 59

Contents

Introduction
Heat removal methods
Cooling system options
Conclusion
Resources

Introduction

Data center heat removal is one of the most essential yet least understood of all critical IT environment processes. As the latest computing equipment becomes smaller and uses the same or even more electricity than the equipment it replaced, more heat is being generated in data centers. Precision cooling and heat rejection equipment is used to collect and transport this unwanted heat energy to the outside atmosphere.

This paper explains the 13 cooling technologies and their components designed to transport heat energy from the IT environment to the outside atmosphere. The information presented here is a foundation allowing IT professionals to successfully manage the specification, installation, and operation of IT environment cooling systems. For definitions of various terms used throughout this paper, see White Paper 11, Explanation of Cooling and Air Conditioning Terminology for IT Professionals.

How air conditioners work

White Paper 57, Fundamental Principles of Air Conditioners for Information Technology provides information regarding the nature of heat in the IT environment, operation of the refrigeration cycle, and the basic functionality of precision cooling devices and outdoor heat rejection equipment.

Cooling architectures

A cooling architecture is fundamentally described by:

1. A particular heat removal method (the subject of this paper)

2. A particular air distribution type

3. The location of the cooling unit that directly supplies cool air to the IT equipment

These three elements are briefly described below, along with their white paper references, in order to provide context around the entire data center cooling system.

Heat removal

Heat removal is the subject of this paper.

Air distribution

This is a very important part of the cooling system as air distribution to IT equipment greatly affects its overall performance. White Paper 55, The Different Types of Air Distribution for IT Environments describes the nine basic ways air is distributed to cool IT equipment in data centers and network rooms.

Location of the cooling unit

The cooling unit is defined in this paper as the device that provides cool supply air to the IT equipment. There are four basic cooling unit locations. In general, the cooling unit is separate from the other heat rejection system devices. In some cases, the entire cooling architecture is “containerized” and located outdoors adjacent to the data center. The location of the cooling unit plays a significant role in data center design including overall cooling efficiency, IT power density (kW / rack), and IT rack space utilization. For more information see White Paper 130, The Advantages of Row and Rack-Oriented Cooling Architectures for Data Centers.


Heat removal methods

There are 13 fundamental heat removal methods to cool the IT environment and transport unwanted heat energy from IT equipment to the outdoors. One or more of these methods are used to cool virtually all mission critical computer rooms and data centers. Some heat removal methods relocate the components of the refrigeration cycle away from the IT environment, and some add additional loops (self-contained pipelines) of water and other liquids to aid in the process.

One can think of heat removal as the process of “moving” heat energy from the IT space to the outdoors. This “movement” may be as simple as using an air duct as a means to “transport” heat energy to the cooling system located outdoors. However, this “movement” is generally accomplished by using a heat exchanger to transfer heat energy from one fluid to another (e.g. from air to water). Figure 1 simplifies the 13 heat removal methods by illustrating two major heat energy “movement” points - indoor and outdoor. The “transport fluid” in between the indoor and outdoor hand-off points represents the fluid (liquid or gas) that carries the heat energy between the two points. Note that there may be more than one heat exchange occurring indoors, as in the case of the glycol-cooled CRAC (i.e. computer room air conditioner), or outdoors, as in the case of the chiller. Also note that each of these systems may be configured to operate in an economizer mode. The following sections provide a detailed look at the systems that incorporate these methods and describe the individual heat energy transfers.
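To make this decomposition concrete, the following is a minimal sketch (my own illustration, not code from the paper) that models each method in Figure 1 as an (indoor hand-off, transport fluid, outdoor hand-off) triple. Expanding the two chiller rows by the three outdoor heat rejection variants described later is what turns the figure's nine rows into 13 methods.

```python
# A minimal sketch: each heat removal method from Figure 1 as an indoor hand-off,
# a transport fluid, and an outdoor hand-off. Names follow the figure's labels.
from dataclasses import dataclass

@dataclass(frozen=True)
class HeatRemovalMethod:
    indoor: str   # indoor heat exchange or transport
    fluid: str    # transport fluid between the two hand-off points
    outdoor: str  # outdoor heat exchange or transport

CHILLER_VARIANTS = ("condenser", "dry cooler", "cooling tower")

METHODS = [
    *(HeatRemovalMethod("CRAH", "chilled water", f"chiller + {v}")
      for v in CHILLER_VARIANTS),
    *(HeatRemovalMethod("pumped refrigerant heat exchanger", "chilled water",
                        f"chiller + {v}") for v in CHILLER_VARIANTS),
    HeatRemovalMethod("air-cooled CRAC", "refrigerant", "condenser"),
    HeatRemovalMethod("glycol-cooled CRAC", "glycol", "dry cooler"),
    HeatRemovalMethod("water-cooled CRAC", "condenser water", "cooling tower"),
    HeatRemovalMethod("air-cooled self-contained unit", "air duct", "outdoor air"),
    HeatRemovalMethod("air duct", "air", "direct fresh air evaporative cooler"),
    HeatRemovalMethod("air duct", "air", "indirect air evaporative cooler"),
    HeatRemovalMethod("air duct", "air", "roof-top unit"),
]

assert len(METHODS) == 13  # the 13 fundamental heat removal methods
for m in METHODS:
    print(f"{m.indoor:35s} --{m.fluid}--> {m.outdoor}")
```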

Figure 1: Simplified breakdown of the 13 fundamental heat removal methods. Each row lists the indoor heat exchange or transport, the transport fluid, and the outdoor heat exchange or transport:

Row 1: CRAH | chilled water | chiller* (rejecting heat via a condenser, dry cooler, or cooling tower)
Row 2: Pumped refrigerant heat exchanger | chilled water | chiller* (condenser, dry cooler, or cooling tower)
Row 3: Air-cooled CRAC | refrigerant | condenser
Row 4: Glycol-cooled CRAC | glycol | dry cooler
Row 5: Water-cooled CRAC | condenser water | cooling tower
Row 6: Air-cooled self-contained unit | air duct | outdoor air
Row 7: Air duct | air | direct fresh air evaporative cooler
Row 8: Air duct | air | indirect air evaporative cooler
Row 9: Air duct | air | roof-top unit

* Note that in some cases the chiller is physically located indoors.

Chilled water system

The first row in Figure 1 depicts a computer room air handler (also known as a CRAH) joined together with a chiller. This combination is generally known as a chilled water system. In a chilled water system the components of the refrigeration cycle are relocated from the computer room air conditioning systems to a device called a water chiller, shown in Figure 2. The function of a chiller is to produce chilled water (water refrigerated to about 8-15°C [46-59°F]) that is piped to the computer room air handlers in the IT environment. Computer room air handlers are similar to computer room air conditioners in appearance but work differently. They cool the air (remove heat) by drawing warm air from the computer room through coils filled with circulating chilled water. Heat removed from the IT environment flows out with the (now warmer) chilled water exiting the CRAH and returning to the chiller. The chiller then removes the heat from the warmer chilled water and transfers it to another stream of circulating water called condenser water, which flows through a device known as a cooling tower.
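As a worked example of this indoor hand-off (assuming textbook water properties, not values from the paper), the chilled water flow a CRAH needs follows directly from the sensible heat balance Q = m·cp·ΔT:

```python
# A worked example with assumed textbook water properties: chilled water flow
# required by a CRAH for a given heat load, from Q = m * cp * dT.
CP_WATER = 4.186    # kJ/(kg*K), specific heat of water
RHO_WATER = 1000.0  # kg/m^3, density of water (approximate)

def chilled_water_flow_ls(load_kw: float, supply_c: float, return_c: float) -> float:
    """Required chilled water flow in litres per second for a given heat load."""
    delta_t = return_c - supply_c               # temperature rise across the CRAH coil
    mass_flow = load_kw / (CP_WATER * delta_t)  # kg/s, from Q = m * cp * dT
    return mass_flow / RHO_WATER * 1000.0       # convert m^3/s to L/s

# 100 kW of IT load, 8 C supply (within the paper's 8-15 C band), 14 C return:
print(f"{chilled_water_flow_ls(100, 8, 14):.1f} L/s")  # ~4.0 L/s of chilled water
```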

As seen in Figure 2, a cooling tower rejects heat from the IT room to the outdoor environment by spraying warm condenser water onto sponge-like material (called fill) at the top of the tower. The water spreads out and some of it evaporates away as it drips and flows to the bottom of the cooling tower (a fan is used to help speed up the evaporation by drawing air through the fill material). In the same manner as the human body is cooled by the evaporation of sweat, the small amount of water that evaporates from the cooling tower serves to lower the temperature of the remaining water. The cooler water at the bottom of the tower is collected and sent back into the condenser water loop via a pump package.
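The water consumed by this evaporation can be bounded with a rough energy balance, sketched below under the assumption that essentially all rejected heat leaves as latent heat (real towers consume more water because of drift and blowdown, which the estimate ignores):

```python
# A rough energy balance for cooling tower water evaporation.
H_FG = 2430.0  # kJ/kg, approximate latent heat of vaporization of water near 30 C

def evaporation_rate_l_per_h(heat_rejected_kw: float) -> float:
    kg_per_s = heat_rejected_kw / H_FG  # mass of water evaporated per second
    return kg_per_s * 3600.0            # 1 kg of water is about 1 litre

print(f"{evaporation_rate_l_per_h(100):.0f} L/h")  # ~148 L/h for 100 kW rejected
```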

Condenser water loops and cooling towers are usually not installed solely for the use of water-cooled computer room air conditioning systems. They are usually part of a larger system and may also be used to reject heat from the building’s comfort air conditioning system.

There are three main types of chillers, distinguished by the fluid used to reject heat - water, glycol, or air:

• With water-cooled chillers, heat removed from the returning chilled water is rejected to a condenser water loop for transport to the outside atmosphere. The condenser water is then cooled using a cooling tower - the final step in rejecting the heat to the outdoors. The chilled water system in Figure 2 uses a water-cooled chiller. Figure 3 shows an example of a water-cooled chiller and a cooling tower. Water-cooled chillers are typically located indoors.

Figure 2: Water-cooled chilled water system

• Glycol-cooled chillers look identical to water-cooled chillers. With glycol-cooled chillers, heat removed from the returning chilled water is rejected to a glycol loop for transport to the outside atmosphere. The glycol flows via pipes to an outdoor-mounted device called a dry cooler, also known as a fluid cooler. Heat is rejected to the outside atmosphere as fans force outdoor air through the warm glycol-filled coil in the dry cooler. Glycol-cooled chillers are typically located indoors.

• With air-cooled chillers, heat removed from the returning chilled water is rejected to a device called an air-cooled condenser that is typically integrated with the chiller. This type of chiller is known as a packaged chiller and can also be integrated into a cooling facility module. White Paper 163, Containerized Power and Cooling Modules for Data Centers, discusses facility modules. Figure 4 shows an example of an air-cooled chiller. Air-cooled chillers are typically located outdoors.

Advantages

• Chilled water CRAH units generally cost less, contain fewer parts, and have greater heat removal capacity than CRAC units with the same footprint.

• Chilled water system efficiency improves greatly with increased data center capacity.
• Chilled water piping loops are easily run very long distances and can service many IT environments (or the whole building) from one chiller plant.
• Chilled water systems can be engineered to be extremely reliable.
• Chilled water systems can be combined with economizer modes of operation to increase efficiency. Designing the system to operate at higher water temperatures (12-15°C [54-59°F]) will increase the hours of economizer operation, as the sketch after this list illustrates.
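The sketch below illustrates the setpoint effect with synthetic hourly temperatures (illustrative only; a real assessment would use bin data for the actual site and the real approach temperature of the heat rejection equipment):

```python
# An illustrative sketch with synthetic weather data (not a design calculation):
# raising the chilled water setpoint raises the outdoor temperature below which
# the chiller can be bypassed, so more hours of the year qualify for economizer
# ("free cooling") operation.
import math

# Fake hourly dry-bulb temperatures for one year: 10 C mean, 12 C seasonal swing,
# 5 C diurnal swing. A real study would use bin data for the actual site.
temps_c = [10 + 12 * math.sin(2 * math.pi * h / 8760)
           + 5 * math.sin(2 * math.pi * h / 24) for h in range(8760)]

def economizer_hours(chw_setpoint_c: float, approach_k: float = 5.0) -> int:
    """Hours when outdoor air is cold enough to make water at the setpoint,
    assuming the heat rejection equipment needs approach_k kelvin of margin."""
    return sum(t <= chw_setpoint_c - approach_k for t in temps_c)

for setpoint in (7.0, 12.0, 15.0):
    print(f"{setpoint:4.1f} C setpoint -> {economizer_hours(setpoint):4d} h/year")
```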

Disadvantages

• Chilled water systems generally have the highest capital costs for installations below 100 kW of electrical IT load.
• Introduces an additional source of liquid into the IT environment.

Usually used

• In data centers 200 kW and larger with moderate-to-high availability requirements, or as a high availability dedicated solution. Water-cooled chilled water systems are often used to cool entire buildings where the data center may be only a small part of that building.

Figure 3: Example of a water-cooled chiller (left) and cooling tower (right)

Figure 4: Example of an air-cooled chiller

Pumped refrigerant for chilled water systems

The second row in Figure 1 depicts a pumped refrigerant heat exchanger joined together with a chiller. This combination is generally known as a pumped refrigerant system for chilled water systems. Concerns regarding availability and the drive toward higher densities have led to the introduction of pumped refrigerant systems within the data center environment. These systems are typically composed of a heat exchanger and pump which isolate the cooling medium in the data center from the chilled water. However, the system could also isolate other cooling liquids such as glycol.

Typically these pumped refrigerant systems use some form of refrigerant (e.g. R-134a) or other non-conductive fluid like Fluorinert that is pumped through the system without the use of a compressor. Figure 5 shows an example of a pumped refrigerant system connected to a packaged air-cooled chiller using an overhead cooling unit. Chilled water is pumped in pipes from the chiller to a heat exchanger, which transfers heat from the pumped refrigerant to the chilled water. The colder refrigerant returns to the cooling unit to absorb more heat and then returns again to the heat exchanger.
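A hedged comparison shows why a pumped two-phase refrigerant can move substantial heat with only a small pump and no compressor: latent heat transport needs far less mass flow than sensible water flow. The R-134a latent heat and the 30 kW load below are assumed representative values, not figures from the paper.

```python
# Latent vs sensible heat transport: mass flow needed for the same 30 kW load.
H_FG_R134A = 190.0  # kJ/kg, approximate latent heat of R-134a at low temperature
CP_WATER = 4.186    # kJ/(kg*K), specific heat of water

load_kw = 30.0  # assumed heat absorbed by one overhead cooling unit
water_dt = 6.0  # K, assumed chilled water temperature rise

refrigerant_flow = load_kw / H_FG_R134A       # kg/s; the fluid evaporates in the coil
water_flow = load_kw / (CP_WATER * water_dt)  # kg/s; the fluid warms by water_dt

print(f"refrigerant: {refrigerant_flow:.3f} kg/s vs water: {water_flow:.3f} kg/s")
# -> roughly 0.16 kg/s of refrigerant vs 1.19 kg/s of water for the same 30 kW,
#    which is why a small pump, and no compressor, suffices in the pumped loop
```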

Advantages

• Keeps water away from IT equipment in chilled water applications.
• Oil-less refrigerants and non-conductive fluids eliminate the risk of mess or damage to servers in the event of a leak.
• Improves cooling system efficiency due to close proximity to the servers, or even direct-to-chip cooling.

Disadvantages

• Higher first cost as a result of the additional pumps and heat exchangers added to the cooling system.

Usually used

• In cooling systems that are closely coupled to the IT equipment, for applications like row-based and rack-based high-density cooling.
• In chip-level cooling, where coolant is piped directly to the server.

Figure 5: Example schematic drawing of a pumped refrigerant system connected to a chilled water system (packaged air-cooled chiller, pump, and overhead cooling unit)

Air-cooled system (2-piece)

The third row in Figure 1 depicts an air-cooled CRAC joined together with a condenser. This combination is generally known as an air-cooled CRAC DX system. The “DX” designation stands for direct expansion and although this term often refers to an air-cooled system, in fact any system that uses refrigerant and an evaporator coil can be called a DX system.

Air-cooled CRAC units are widely used in IT environments of all sizes and have established themselves as the “staple” for small and medium rooms. In an air-cooled 2-piece system, half the components of the refrigeration cycle are in the CRAC and the rest are outdoors in the air-cooled condenser, as shown in Figure 6. Refrigerant circulates between the indoor and outdoor components in pipes called refrigerant lines. Heat from the IT environment is “pumped” to the outdoor environment using this circulating flow of refrigerant. In this type of system the compressor resides in the CRAC unit. However, the compressor may alternatively reside in the condenser. When the compressor resides in the condenser the correct term for the condenser is condensing unit, and the overall system is known as a split system. Figure 7 shows an example of an air-cooled 2-piece DX system.

Advantages

• Lowest overall cost
• Easiest to maintain

Disadvantages

• Refrigerant piping must be installed in the field. Only properly engineered piping systems that carefully consider the distance and change in height between the IT and outdoor environments will deliver reliable performance.
• Refrigerant piping cannot be run long distances reliably and economically.
• Multiple computer room air conditioners cannot be attached to a single air-cooled condenser.

Usually used

• In wiring closets, computer rooms, and 7-200 kW data centers with moderate availability requirements.

Figure 6: Air-cooled DX system (2-piece)

Glycol-cooled system

The fourth row in Figure 1 depicts a glycol-cooled CRAC joined together with a dry cooler. This combination is generally known as a glycol-cooled system. This type of system locates all refrigeration cycle components in one enclosure but replaces the bulky condensing coil with a much smaller heat exchanger, shown in Figure 8. The heat exchanger uses flowing glycol (a mixture of water and ethylene glycol, similar to automobile anti-freeze) to collect heat from the refrigerant and transport it away from the IT environment. Heat exchangers and glycol pipes are always smaller than the condensing coils found in 2-piece air-cooled systems because the glycol mixture can collect and transport much more heat than air does. The glycol flows via pipes to a dry cooler where the heat is rejected to the outside atmosphere. A pump package (pump, motor, and protective enclosure) is used to circulate the glycol in its loop to and from the glycol-cooled CRAC and dry cooler. A glycol-cooled system is very similar in appearance to the equipment in Figure 7.
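A back-of-envelope check on that claim, using approximate property values for a ~30% ethylene glycol mixture and for room-temperature air (assumptions for illustration, not figures from the paper):

```python
# Heat carried per cubic metre of fluid per kelvin of temperature change.
rho_glycol, cp_glycol = 1040.0, 3.6  # kg/m^3 and kJ/(kg*K) for the glycol mix
rho_air, cp_air = 1.2, 1.005         # kg/m^3 and kJ/(kg*K) for air

vol_capacity_glycol = rho_glycol * cp_glycol  # ~3744 kJ/(m^3*K)
vol_capacity_air = rho_air * cp_air           # ~1.2 kJ/(m^3*K)

print(f"glycol carries ~{vol_capacity_glycol / vol_capacity_air:,.0f}x more heat "
      "per unit volume than air")  # on the order of 3,000x, hence the small pipes
```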

Advantages

• The entire refrigeration cycle is contained inside the CRAC unit as a factory-sealed and tested system for highest reliability, with the same floor space requirement as a 2-piece air-cooled system.
• Glycol pipes can run much longer distances than refrigerant lines (air-cooled split system) and can service several CRAC units from one dry cooler and pump package.
• In cold locations, the glycol within the dry cooler can be cooled so much (below 10°C [50°F]) that it can bypass the heat exchanger in the CRAC unit and flow directly to a specially installed economizer coil. Under these conditions, the refrigeration cycle is turned off and the air that flows through the economizer coil, now filled with cold flowing glycol, cools the IT environment. This economizer mode, also known as “free cooling”, provides excellent operating cost reductions when used.

Figure 7: Example of an air-cooled DX system (2-piece)

Figure 8: Glycol-cooled system

Disadvantages

• Additional required components (pump package, valves) raise capital and installation costs when compared with air-cooled DX systems.
• Maintenance of glycol volume and quality within the system is required.
• Introduces an additional source of liquid into the IT environment.

Usually used

• In computer rooms and 30-1,000 kW data centers with moderate availability requirements.

Water-cooled system

The fifth row in Figure 1 depicts a water-cooled CRAC joined together with a cooling tower. This combination is generally known as a water-cooled system. Water-cooled systems are very similar to glycol-cooled systems in that all refrigeration cycle components are located inside the CRAC. However, there are two important differences between a glycol-cooled system and a water-cooled system:

• A water (also called condenser water) loop is used instead of glycol to collect and transport heat away from the IT environment.
• Heat is rejected to the outside atmosphere via a cooling tower instead of a dry cooler, as seen in Figure 9.

Advantages

• All refrigeration cycle components are contained inside the computer room air conditioning unit as a factory-sealed and tested system for highest reliability.
• Condenser water piping loops are easily run long distances and almost always service many computer room air conditioning units and other devices from one cooling tower.
• In leased IT environments, usage of the building’s condenser water is generally less expensive than chilled water (chilled water systems are explained earlier in this paper).

Disadvantages

• High initial cost for cooling tower, pump, and piping systems.
• Very high maintenance costs due to frequent cleaning and water treatment requirements.

Figure 9: Water-cooled system

• A non-dedicated cooling tower (one used to cool the entire building) may be less reliable than a cooling tower dedicated to the computer room air conditioner.

Usually used

• In conjunction with other building systems in data centers 30 kW and larger with moderate-to-high availability requirements.

Air-cooled self-contained system (1-piece)

The sixth row in Figure 1 depicts an air-cooled self-contained air conditioning unit joined together with an air duct. This combination is generally known as an air-cooled self-contained system. Self-contained systems locate all the components of the refrigeration cycle in one enclosure that is usually found in the IT environment. Heat exits the self-contained system as a stream of hot (about 49°C [120°F]) air called exhaust air. This stream of hot air must be routed away from the IT room to the outdoors or into an unconditioned space to ensure proper cooling of computer equipment, as illustrated in Figure 10.

If mounted above a drop ceiling and not using condenser air inlet or outlet ducts, the hot exhaust air from the condensing coil can be rejected directly into the drop ceiling area. The building’s air conditioning system must have available capacity to handle this additional heat load. Air that is drawn through the condensing coil (becoming exhaust air) should also be supplied from outside the computer room. This avoids creating a vacuum in the room that would allow warmer, unconditioned air to enter. Self-contained indoor systems are usually limited in capacity (up to 15 kW) because of the additional space required to house all the refrigeration cycle components and the large air ducts required to manage exhaust air. Self-contained systems that mount outdoors on a building roof can be much larger in capacity but are not commonly used for precision cooling applications. Figure 11 shows an example of an air-cooled self-contained system.
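A minimal sketch of the exhaust airflow such a unit must move, from Q = m·cp·ΔT for air. The 15 K condenser air temperature rise and the 1.3x factor for compressor power added to the IT load are assumptions for illustration:

```python
# Condenser (exhaust) airflow for an indoor self-contained unit.
CP_AIR = 1.005  # kJ/(kg*K), specific heat of air
RHO_AIR = 1.2   # kg/m^3, density of air (approximate)

def exhaust_airflow_m3s(heat_rejected_kw: float, air_rise_k: float) -> float:
    mass_flow = heat_rejected_kw / (CP_AIR * air_rise_k)  # kg/s of condenser air
    return mass_flow / RHO_AIR                            # volume flow in m^3/s

# A 15 kW unit (the paper's indoor capacity limit), rejecting IT load plus
# assumed compressor power, with a 15 K condenser air temperature rise:
flow = exhaust_airflow_m3s(15 * 1.3, 15.0)
print(f"{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM) of exhaust air")  # ~1.1 m^3/s
```

Airflow of this magnitude is why the paper notes that large air ducts are needed to manage exhaust air.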

Advantages

• Indoor self-contained systems have the lowest installation cost. There is nothing to install on the roof or outside the building except for the condenser air outlet.
• All refrigeration cycle components are contained inside one unit as a factory-sealed and tested system for highest reliability.

Figure 10: Indoor air-cooled self-contained system

Disadvantages

• Air routed into and out of the IT environment for the condensing coil usually requires ductwork and/or a dropped ceiling.
• Some systems rely on the building HVAC system to reject heat. Issues can arise when the building HVAC system shuts down in the evening or over the weekend.

Usually used

• In wiring closets, laboratory environments, and computer rooms with moderate availability requirements.
• Sometimes used to fix hot spots in data centers.

Direct fresh air evaporative cooling system

The seventh row in Figure 1 depicts an air duct joined together with a direct fresh air evaporative cooler. This combination is generally known as a direct fresh air evaporative cooling system, sometimes referred to as direct air. A direct fresh air economizer system uses fans and louvers to draw a certain amount of cold outdoor air through filters and then directly into the data center when the outside air conditions are within specified set points. Louvers and dampers also control the amount of hot exhaust air that is exhausted to the outdoors and mixed back into the data center supply air to maintain environmental set points (see Figure 12). The primary mode of operation for this cooling method is “economizer” or “free cooling” mode, and most systems use a containerized DX air-cooled system as back-up. Although supply air is filtered, filtration does not completely eliminate fine particulates, such as smoke, or chemical gases from entering the data center.

This heat removal method is normally used with evaporative cooling, whereby the outside air also passes through a wet mesh material before entering the data center. Note that evaporative assist increases the data center humidity, because the fresh air passes over the evaporative medium and approaches saturation before entering the data center; this limits the method’s effectiveness for data center applications in humid conditions. Evaporative assist is therefore most beneficial in dry climates. For more humid climates, such as Singapore, evaporative assist should be evaluated based on ROI (return on investment). Figure 13 shows an example of a direct fresh air evaporative cooling system.
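The climate dependence can be seen from the standard direct evaporative media relation, sketched below with an assumed media effectiveness of 0.85: supply air approaches the outdoor wet-bulb temperature, which is also why it arrives near saturation.

```python
# Direct evaporative media relation: T_supply = T_db - eff * (T_db - T_wb).
def direct_evap_supply_c(dry_bulb_c: float, wet_bulb_c: float,
                         effectiveness: float = 0.85) -> float:
    """Supply air temperature after the wet media (eff ~0.85 is an assumption)."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Dry climate (35 C dry bulb, 18 C wet bulb) vs humid climate (32 C, 28 C):
print(f"dry:   {direct_evap_supply_c(35, 18):.1f} C")  # ~20.6 C - large benefit
print(f"humid: {direct_evap_supply_c(32, 28):.1f} C")  # ~28.6 C - little benefit
```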

Figure 11: Examples of indoor air-cooled self-contained systems (air-cooled self-contained unit and portable self-contained cooling unit)

Figure 12: Direct fresh air evaporative cooling system (fresh air intake, air filter, mixing chamber, cold air supply duct, hot aisle containment, hot air exhaust)

Figure 13: Example of a direct fresh air evaporative cooling system

Advantages

• All cooling equipment is placed outside the data center, allowing the white space to be fully utilized for IT equipment.
• Significant cooling energy savings in dry climates (e.g. 75%) compared to systems with no economizer mode.

Disadvantages

• May be difficult to retrofit into an existing data center.
• Subject to frequent filter changes in locations with poor air quality.
• Evaporative cooling contributes to humidity in the data center.

Usually used

• In data centers 1,000 kW and larger with high power density.

Indirect air evaporative cooling system

The eighth row in Figure 1 depicts an air duct joined together with an indirect air evaporative cooler. This combination is generally known as an indirect air evaporative cooling system, sometimes referred to as indirect air. Indirect air evaporative cooling systems use outdoor air to indirectly cool data center air when the temperature outside is lower than the temperature set point of the IT inlet air, resulting in significant energy savings. This “economizer mode” or “free cooling” mode of operation is the primary mode of operation for this heat removal method, although most systems do use a containerized DX air-cooled system as back-up. Fans blow cold outside air through an air-to-air heat exchanger which in turn cools the hot data center air on the other side of the heat exchanger, thereby completely isolating the data center air from the outside air. This heat removal method normally uses evaporative assist, whereby the outside of the air-to-air heat exchanger is sprayed with water, which further lowers the temperature of the outside air and thus the hot data center air. Figure 14 provides an illustration of an indirect air evaporative cooling system that uses a plate heat exchanger with evaporative assist. Figure 15 shows an example of a complete cooling system with this type of heat rejection method.

Indirect air evaporative cooling systems provide cooling capacities up to about 1,000 kW. Most units are roughly the size of a shipping container or larger. These systems mount either on a building roof or on the perimeter of the building. Some of these systems include an integrated refrigeration cycle that works in conjunction with an economizer mode. For more information on this heat removal method see White Paper 132, Economizer Modes of Data Center Cooling Systems, and White Paper 136, High Efficiency Economizer-based Cooling Modules for Large Data Centers.
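A hedged sketch of the indirect path (both effectiveness values below are assumptions for illustration): scavenger outdoor air is first cooled toward its wet bulb by the water spray, then cools the data center air across the air-to-air heat exchanger, with the two streams never mixing.

```python
# Two-stage indirect evaporative estimate: spray cools the scavenger air toward
# its wet bulb; the heat exchanger then cools the data center air toward the
# scavenger temperature. Outdoor air never enters the data center air stream.
def indirect_evap_supply_c(dc_return_c: float, outdoor_db_c: float,
                           outdoor_wb_c: float, evap_eff: float = 0.85,
                           hx_eff: float = 0.75) -> float:
    scavenger_c = outdoor_db_c - evap_eff * (outdoor_db_c - outdoor_wb_c)
    return dc_return_c - hx_eff * (dc_return_c - scavenger_c)

# 35 C data center return air, 30 C outdoors with a 20 C wet bulb:
print(f"{indirect_evap_supply_c(35, 30, 20):.1f} C supply")  # ~24.9 C
```

Because the exchange is indirect, the supply air is cooled without the humidity rise that the direct method incurs.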

Advantages

• All cooling equipment is placed outside the data center, allowing the white space to be fully utilized for IT equipment.
• Significant cooling energy savings in most climates (e.g. 75%) compared to systems with no economizer mode.

Disadvantages

• May be difficult to retrofit into an existing data center.

Usually used

• In data centers 1,000 kW and larger with high power density.

Figure 14: Indirect air economizer system (plate air-to-air heat exchanger with evaporative assist; cold outdoor air enters and leaves as hot exhaust air, while data center air circulates through hot aisle containment, the heat exchanger, and the cold air supply duct)

Figure 15: Example of an indirect air evaporative cooling system

Self-contained roof-top system

The ninth row in Figure 1 depicts an air duct joined together with a self-contained roof-top unit. This combination is generally referred to as a roof-top unit (RTU). These systems are not a typical cooling solution for new data centers. Roof-top units are basically the same as the air-cooled self-contained system described above except that they are located outdoors, typically mounted on the roof, and are much larger than the indoor systems. Roof-top units can also be designed with a direct fresh air economizer mode. Figure 16 shows an example of a roof-top unit.

Advantages

• All cooling equipment is placed outside the data center, allowing the white space to be fully utilized for IT equipment.
• Significant cooling energy savings in mild climates compared to systems with no economizer mode.

Disadvantages

• May be difficult to retrofit into an existing data center.

Usually used

• In data centers that are part of a mixed-use facility.

Cooling system options

Numerous options are available to facilities and IT professionals when specifying cooling solutions. Use the following guide in conjunction with the equipment manufacturer’s technical literature. Note that options may vary based on the size and type of the solution considered.

Airflow direction - Large floor-mounted systems flow air in a downward direction (down flow) or an upward direction (up flow) and some can even flow horizontally (horizontal flow).

• Use a down flow system in a raised floor environment, or in a non-raised floor environment when the system is mounted on a pedestal.
• Use an up flow system in an existing up flow environment.
• Horizontal flow systems should be considered for IT consolidations and IT environment renovations using a hot/cold aisle configuration.

Fire, smoke, and water detection devices provide early warning and/or automatic shut off during catastrophic events.

• Use is recommended in all units; use is mandatory if required by local building codes.

Figure 16: Self-contained roof-top unit


Humidifiers are commonly located inside precision cooling devices to replace water vapor lost in the cooling process and are used to prevent IT equipment downtime due to static electrical discharge. See White Paper 58, Humidification Strategies for Data Centers and Network Rooms for more information on humidifiers and their functions.

• Use a humidifier in all computer room air conditioners and air handlers unless the room has a properly functioning vapor barrier and central humidification system, and no current high or low humidity-related problems.

Reheat systems add heat to the conditioned cold air exiting a precision cooling device, allowing the system to provide increased dehumidification of the IT environment air when required.

• Use a reheat system for rooms in warm, humid climates or in rooms with poor or non-existent vapor barriers.

Economizer coils use glycol to cool the IT environment in a manner similar to a chilled water system when the glycol stream is cold enough. They provide excellent operating cost reductions when used.

• Use in conjunction with glycol-cooled units in cold climates.
• Use if required by local building codes (e.g. the Pacific Northwest region of the USA).

“Multi-cool” coils enable chilled water to be used in addition to the air-cooled, glycol-cooled, or condenser water-cooled DX system.

• Use if building chilled water is available but is unreliable or is frequently turned off.

Conclusion

The 13 basic heat removal methods for data centers are primarily differentiated in the way they physically reside in the IT environment and in the way they collect and transport heat to the outside atmosphere. All 13 heat rejection methods possess advantages and disadvantages that cause them to be preferred for various applications. The decision on which heat rejection method to specify should be based on the uptime requirements, power density, geographic location, physical size of the IT environment to be protected, the availability and reliability of existing building systems, and the time and money available for system design and installation. IT professionals versed in precision cooling components and heat removal methods can more effectively work with cooling professionals to ensure the specification of optimized cooling solutions that meet IT objectives.
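As a rough illustration of that decision process, the sketch below maps an IT load to candidate methods using only the "usually used" load ranges stated in this paper; a real selection must also weigh the availability, climate, building system, and budget factors listed above.

```python
# Candidate heat removal methods by IT load, per the ranges stated in this paper.
def candidate_methods(it_load_kw: float) -> list:
    ranges = [
        ("air-cooled self-contained (indoor)", 0, 15),
        ("air-cooled DX, 2-piece", 7, 200),
        ("glycol-cooled system", 30, 1000),
        ("water-cooled system", 30, float("inf")),
        ("chilled water system", 200, float("inf")),
        ("direct/indirect air evaporative", 1000, float("inf")),
    ]
    return [name for name, lo, hi in ranges if lo <= it_load_kw <= hi]

for load in (10, 150, 600, 2500):
    print(f"{load:5d} kW -> {candidate_methods(load)}")
```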


Resources

White Paper 11, Explanation of Cooling and Air Conditioning Terminology for IT Professionals
White Paper 55, Air Distribution Architecture Options for Mission Critical Facilities
White Paper 57, Fundamental Principles of Air Conditioners for Information Technology
White Paper 58, Humidification Strategies for Data Centers and Network Rooms
White Paper 130, The Implications of Cooling Unit Location on IT Environments
White Paper 132, Economizer Modes of Data Center Cooling Systems
White Paper 136, High Efficiency Economizer-based Cooling Modules for Large Data Centers
White Paper 163, Containerized Power and Cooling Modules for Data Centers

Browse all white papers
Browse all TradeOff Tools™
