TIA-942 is a data center design standard that provides guidelines for key areas like spaces, cabling, electrical systems, cooling, and tier classifications. It defines five functional space areas and recommends separating them where possible. The standard also covers best practices for racks and cabinets, structured cabling layouts, electrical considerations, and choosing appropriate cooling based on calculated heat loads. It establishes a four-tier system for classifying data centers based on resilience and capacity of mechanical, electrical, and plumbing systems. Proper implementation of TIA-942 helps standardize designs and allows facilities to be reliably compared.
* TIA – Telecommunications Industry Association * Today's focus is the TIA-942 data center standard and some of the best practices surrounding a data center. * If you get a chance to go through this document, you will notice that it is fairly simple and applies a lot of common sense; at the end of this review you will probably say, "Hmmm, I knew that" – the TIA puts structure around common sense.
* Wikipedia.org: A data center (or datacenter) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression), and security devices. * businessdictionary.com: A computer facility designed for continuous use by several users, well equipped with hardware, software, peripherals, power conditioning and backup, communication equipment, security systems, etc. So what makes one data center different from another?
• Levels of redundancy (cooling, electrical, connectivity, etc.)
• Capacity (space, cooling, electrical, network connectivity, etc.)
• Monitoring and notification
• Staffing to maintain the facility
The increased demands on enterprise data centers stem from:
• New business realities and increased energy costs
• Deploying and managing applications that require higher availability and increased service levels for uptime and responsiveness
• Regulatory compliance requirements for data retention and security
• Implementing green computing practices, which reduce costs by lowering data center power consumption
• Expanding volumes of data and the need to manage highly complex, wildly heterogeneous environments
Q: By show of hands, how many people actually track the cost of data center operations, including energy costs?
The first attempt at providing some level of standardization was a tier system – a system that specifies the availability and reliability of a data center.
Q: What is the difference between Uptime and TIA-942?
The Uptime Institute:
• Established in 1995; not a standards body
• Widely referenced in the data center construction industry
• Uptime's method includes four tiers; it provides a high-level guideline but does not provide specific design details for each tier
TIA-942:
• Established by a standards body (TIA) and recognized by ANSI
• The TIA-942 tier system is based on the Uptime Institute's Tier Performance Standards
• Although part of a standard, the tier system is provided as "informative and not considered to be requirements of this Standard"
• Provides specific design criteria, so designers can build to a specific tier level and data center owners can evaluate their own designs
Comparing the three: the Uptime and Syska methods do not provide the details needed to articulate the differences between levels. TIA-942 provides specific details at every tier level and across a wide range of elements, including telecom, architectural, electrical, mechanical, monitoring, and operations.
Objective of the standard: provide requirements and guidelines for the design and installation of a data center or computer room. The standard is meant to be used in the data center design / building development process; it provides input during construction, cuts across the multidisciplinary design efforts, and, by addressing the multidisciplinary aspects, promotes cooperation in design and construction. I do have to mention that it is a little heavier on the telecommunications standards than the others – given its origin – but it provides more detail than the other two. For example, TIA-942 specifies that a tier 2 data center should have two access provider entrance pathways that are at least 20 m (66 ft) apart. Syska also specifies that a tier 2 data center should have two entrance pathways but adds no other detail.
Audience: primarily intended for use by CIOs, data center operations managers, and infrastructure engineers (servers / network / cabling); also facilitates communication with architects and facility management.
ANSI/BICSI-002, "Data Center Design Standard & Recommended Practices" – its 21 areas can be boiled down to these 8 core areas:
• Sizing and selection: design process, space planning, site selection
• Cabling infrastructure and administration: cabinets and racks, cabling pathways, cabling systems, cabling field testing
• Architectural and structural considerations: architectural, structural, commissioning
• Security and fire protection: fire protection, security, building automation
• Electrical, grounding, and mechanical systems: electrical, HVAC/mechanical
• Applications distances: redundancy, information technology, maintenance
• Access provider coordination and demarcation: access providers, telecom space, telecom administration
• Operations (?): maintenance
The number one data center planning issue is heat mitigation (cooling).
The 5 areas of focus for today:
• Data Center Spaces
• Data Center Cabling
• Electrical
• Cooling
• Tier System
According to TIA-942, a data center should include the following key functional areas: • One or more Entrance Rooms • Main Distribution Area (MDA) • One or more Horizontal Distribution Areas (HDA) • Equipment Distribution Area (EDA) • An optional Zone Distribution Area (ZDA) • Backbone and Horizontal Cabling
Analogies to traditional telecom spaces:
• Entrance Room (ER) – "Entrance Facility"
• Main Distribution Area (MDA) – "Equipment Room"
• Horizontal Distribution Area (HDA) – "Telecom Room"
• Zone Distribution Area (ZDA) – "Consolidation Point"
• Equipment Distribution Area (EDA) – "Work Area"
Entrance Room (ER): Location of the interface with campus and carrier entrance facilities; houses access provider equipment, demarcation points, and the interface with other campus locations. The ER is connected to the data center MDA through backbone cabling.
Main Distribution Area (MDA): Centralized portion of the backbone cabling, providing connectivity between equipment rooms, entrance facilities, horizontal cross-connects, and intermediate cross-connects. Can house core aggregation switches / routers.
Horizontal Distribution Area (HDA): Main transition point between backbone and horizontal cabling; location of the horizontal cross-connect (HC). The HDA houses cross-connects and active equipment – the LAN, SAN, and KVM switches – for connecting to the equipment distribution area (or zone distribution area, if present) and the storage area network (SAN).
* Per the TIA-942 standard, both the MDA and HDA require separate racks for fiber, UTP, and coax cable.
Zone Distribution Area (ZDA): Optional; acts as a consolidation point within the horizontal cabling run between the HDA and EDA. The ZDA cannot contain any cross-connects or active equipment.
Equipment Distribution Area (EDA): Where equipment cabinets and racks house the switches and servers, and where horizontal cabling from the HDA (or ZDA, if used) is terminated at patch panels.
Advantages of a ZDA:
• Reduces pathway congestion
• Limits data center disruption from the MDA and eases implementation of MACs (moves, adds, and changes)
• Enables a modular solution for a "pay-as-you-grow" approach
• Simple to deploy and/or redeploy if needed
A ZDA typically does not contain active electronics – though with top-of-rack topologies, I think such a consolidation point would qualify as a ZDA.
Location: avoid locations that are restricted by building components that limit expansion, such as elevators, the building core, outside walls, or other fixed building walls. Provide accessibility for the delivery of large equipment to the equipment room.
EMI: sources are electronic devices that transmit data over a medium. EMI can couple onto data lines and corrupt data packets being transmitted on that medium, causing corruption of the data being transmitted and stored.
Floor loading: measured in lbf / sq ft – pounds of force per square foot of floor area, a distributed-load (pressure) rating. (Not to be confused with the pound-foot, which is a unit of torque: one pound of force acting at a perpendicular distance of one foot from a pivot point.)
Signal Reference Grid (SRG): the intent of the signal reference grid is to establish an equipotential ground plane where everything connected rises and falls together in the event of an electrical disturbance, from whatever source. Electronic equipment is affected when there is a potential difference between devices; an equipotential grid significantly reduces potential differences, thus reducing current flow and eliminating the adverse effect on logic circuits. An SRG is not required with modern IT equipment: the advent of Ethernet and fiber data interfaces has dramatically reduced the susceptibility of IT equipment to noise and transients, particularly compared with the ground-referenced IT interface technologies of 20 years ago. Installing an SRG is not harmful, other than the associated cost and delay.
* Recommend placing UPS equipment outside the main data center – 13–18% of the heat generated comes from the UPS.
Higher equipment failures at top of the rack In the EDA, racks and cabinets should be arranged in a hot aisle/cold aisle configuration to encourage airflow and reduce heat
Unstructured (ad-hoc) cabling: installing cabling whenever you need it – primarily serving as single-use cable. Structured cabling: an organized, reusable, and flexible cabling system. The document places a very large emphasis on cabling.
Multidisciplinary design considerations in cabling:
• Horizontal cabling
• Backbone cabling
• Cross-connect in the entrance room or main distribution area
• Main cross-connect (MC) in the main distribution area
• Horizontal cross-connect (HC) in the telecommunications room, horizontal distribution area, or main distribution area
• Zone outlet or consolidation point in the zone distribution area
• Outlet in the equipment distribution area
Backbone cabling: provides connections between telecommunications rooms, equipment rooms, and entrance facilities. Includes cabling from the MDA to the ER, HDA, and TR; optional cabling between HDAs is allowed. Consists of the transmission media (e.g., optical fiber cable) and can be further classified as interbuilding backbone (cabling between buildings) or intrabuilding backbone (cabling within a building).
Horizontal cabling: simply put, patch panel to outlet – the connection between a horizontal cross-connect and the outlet in the EDA or ZDA. A maximum of one consolidation point is allowed in a ZDA. The maximum distance is 90 m / 295 ft, reduced where total patch cord lengths exceed 10 m.
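The 90 m / patch-cord trade-off above can be sketched as a quick calculation. This is a simplification: the flat 100 m channel budget and the function name are illustrative assumptions, and the standard applies additional de-rating for stranded-cord attenuation that this sketch ignores.

```python
def max_horizontal_run_m(total_patch_cord_m: float) -> float:
    """Illustrative sketch of the horizontal cabling length rule:
    a copper channel is limited to ~100 m total, of which up to 90 m
    may be permanent horizontal cabling when patch/equipment cords
    total 10 m or less. Longer cords shrink the allowed permanent run.
    """
    CHANNEL_BUDGET_M = 100.0   # assumed total channel budget
    MAX_PERMANENT_M = 90.0     # permanent-link maximum from the notes
    allowed = CHANNEL_BUDGET_M - total_patch_cord_m
    return min(MAX_PERMANENT_M, max(allowed, 0.0))
```

For example, with 20 m of patch cords the permanent run would drop to roughly 80 m under this simplified budget.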
Bold are the recommendations per TIA-942
• Racks and cabinets: a single rack loaded with blade servers can draw 30 kW of power.
• Backup generators require 10–30 seconds to start, which can result in overheated electronics.
• Industry experts recommend a maximum of 15–20 kW per rack, which allows backup generators to start up without the electronics overheating.
Common Bonding Network (CBN): the set of metallic components that are intentionally or incidentally interconnected to provide the principal means of effecting bonding and grounding inside a telecommunications building. These components include structural steel or reinforcing rods, metallic plumbing, AC power conduit, cable racks, and bonding conductors. The CBN is connected to the exterior grounding electrode system.
Can use nameplate specifications at approximately 60–75%. Provide multiple physically separate connections to public power grid substations. Intelligent PDUs can give management systems information about power consumption at the rack or even device level, and can provide remote power cycling. Dual A-B cording: in-rack PDUs should make multiple circuits available so that redundant power supplies (designated A and B) can be corded to separate circuits. Some A-B cording strategies call for both circuits to be on UPS, while others call for one power supply to be on house power and the other on UPS; each is a trade-off between resilience and availability.
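A minimal sketch of the A-B sizing arithmetic: with dual cording, each circuit must be able to carry the full load alone if its partner fails. The 80% continuous-load factor is the common breaker derating rule, an assumption not stated in the notes; the function name and example values are illustrative.

```python
def ab_circuit_ok(total_load_w: float, circuit_voltage: float,
                  breaker_amps: float) -> bool:
    """With A-B redundancy, size each circuit so the WHOLE rack load
    fits within 80% of one breaker's capacity; each circuit then runs
    at <= 40% of rating when both feeds are healthy."""
    capacity_w = circuit_voltage * breaker_amps * 0.80
    return total_load_w <= capacity_w
```

For example, a 3 kW load on 208 V / 30 A circuits fits (≈5 kW usable per circuit), while a 6 kW load would not survive the loss of one feed.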
Design implications: the vast majority of existing data center designs do not correctly address the above factors and suffer from unexpected capacity limitations, inadequate redundancy, and poor efficiency.
1. Determine the critical load and heat load
• Equipment plus other loads such as lighting, people, etc.
• Can use nameplate specifications at approximately 60–75%
• As a very general rule of thumb, allow no less than 1 ton (12,000 BTU/hr / 3,516 watts) per 400 square feet of IT equipment floor space
• Factor in x% for growth
2. Determine power requirements on a per-RLU basis
• Use the rack or cabinet footprint area, since all manufacturers produce cabinets of generally the same size
• A rack location is the specific spot on the data center floor where services that accommodate power, cooling, physical space, network connectivity, functional capacity, and rack weight requirements are delivered. Services delivered to the rack location are specified in units of measure, such as watts or BTUs – hence the term rack location unit (RLU).
• In reality, a computer room usually deploys a mix of varying RLU power densities throughout its overall area.
• RLUs also help with site layout: knowing the RLUs for power and cooling enables the data center manager to adjust the physical design, the power and cooling equipment, and rack configurations within the facility to meet the systems' requirements.
3.
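Steps 1 and 2 above can be sketched numerically. The 0.70 derate is an assumed midpoint of the 60–75% nameplate range, and the 20% growth factor stands in for the unspecified "x%"; both are illustrative, as are the function names.

```python
def estimated_heat_load_w(nameplate_w: float, derate: float = 0.70,
                          growth: float = 0.20, other_w: float = 0.0) -> float:
    """Rough critical-heat-load estimate: derate nameplate power to
    ~60-75% of rating, add other loads (lighting, people), then apply
    a growth factor. Illustrative sketch only."""
    return (nameplate_w * derate + other_w) * (1.0 + growth)

def rule_of_thumb_tons(it_floor_sqft: float) -> float:
    """Very general rule of thumb from the notes: no less than one ton
    of cooling (12,000 BTU/hr, ~3,516 W) per 400 sq ft of IT floor."""
    return it_floor_sqft / 400.0
```

For example, 10 kW of nameplate equipment derated to 70% and grown by 20% yields an estimate of 8.4 kW, and a 2,000 sq ft room suggests at least 5 tons of cooling by the rule of thumb.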
Determine CFM – air movement
• Effective cooling requires delivering both the proper temperature and an adequate quantity of air to the load.
• General problems: improper positioning of equipment (one unit's heat exhaust becoming another unit's intake), solid doors on cabinets.
• Be aware of equipment airflow: some equipment uses side-to-side airflow, but most uses front-to-back.
• Blank unused rack positions.
* A large data center would require Computational Fluid Dynamics (CFD) modeling.
• It takes about 160 cfm to remove 1 kW of heat, or roughly 2,500 cfm for 18 kW.
• An average perforated floor tile will disperse 250–300 cfm.
• "Equipment on the upper 2/3 of the rack fails twice as often as equipment on the bottom 1/3 of the rack."
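The airflow figures above translate into a simple sizing sketch. The 160 cfm/kW and 250–300 cfm tile figures come from the notes; the 275 cfm midpoint and the function name are assumptions. (Note that 18 kW × 160 cfm/kW is 2,880 cfm, so the notes' "2,500 cfm for 18 kW" is an approximation.)

```python
import math

CFM_PER_KW = 160   # from the notes: ~160 cfm per 1 kW of heat
TILE_CFM = 275     # assumed midpoint of the 250-300 cfm tile range

def airflow_plan(heat_kw: float) -> tuple[float, int]:
    """Return (required cfm, perforated tiles needed) for a heat load.
    Illustrative sketch only; real layouts need CFD modeling."""
    required_cfm = heat_kw * CFM_PER_KW
    return required_cfm, math.ceil(required_cfm / TILE_CFM)
```

For an 18 kW rack this gives 2,880 cfm, or about 11 average perforated tiles – a hint at why high-density racks overwhelm raised-floor cooling.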
Because of the significant risk of electrical fires in a data center, installing a comprehensive fire detection and suppression system is mission-critical for protecting life and property, as well as for ensuring quick operational recovery. Fire suppression: Halon 1301 (no longer recommended or in production).
8 primary criteria. MEP: mechanical, electrical, and plumbing. What is the initial watts per square foot? What is the ultimate watts per square foot?
Not all data centers have to be Tier 4. Choosing an optimal criticality is a balance between the business's cost of downtime and the data center's total cost of ownership. Choices may be limited depending on whether a new data center is being built or changes are being made to an existing one: for existing data center projects (i.e., retrofits), the choice of criticality is limited by the constraints of the existing structure. Identify the major constraints to see whether they can be addressed or represent an acceptable risk to the business; if a constraint cannot be removed, consider alternate strategies such as an alternate location.
http://www.webopedia.com/TERM/d/data_center_tiers.htm
A four-tier system that provides a simple and effective means of identifying different data center site infrastructure design topologies. The Uptime Institute's tiered classification system is an industry-standard approach to site infrastructure functionality that addresses common benchmarking needs. The four tiers, as classified by the Uptime Institute:
• Tier I: a single path for power and cooling distribution, without redundant components; 99.671% availability
• Tier II: a single path for power and cooling distribution, with redundant components; 99.741% availability
• Tier III: multiple power and cooling distribution paths, but only one path active; redundant components; concurrently maintainable; 99.982% availability
• Tier IV: multiple active power and cooling distribution paths; redundant components; fault tolerant; 99.995% availability
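Those availability percentages map directly to allowable downtime per year; a quick sketch (8,760 hours assumes a non-leap year):

```python
HOURS_PER_YEAR = 8760  # non-leap year

def annual_downtime_hours(availability_pct: float) -> float:
    """Convert a tier availability percentage into allowable
    downtime hours per year."""
    return (1.0 - availability_pct / 100.0) * HOURS_PER_YEAR

for tier, pct in [("I", 99.671), ("II", 99.741),
                  ("III", 99.982), ("IV", 99.995)]:
    print(f"Tier {tier}: {annual_downtime_hours(pct):.1f} h/yr")
```

Tier I's 99.671% works out to roughly 28.8 hours of downtime per year, Tier II to about 22.7 hours, Tier III to about 1.6 hours, and Tier IV's 99.995% to about 0.4 hours (roughly 26 minutes).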
As stated earlier, some of this may not be 'earth-shattering' information, but the standard functions as a collection point for a lot of the common-sense activities related to the data center. It is not difficult to understand – but implementation can be fairly complex.