Raised floor design issues involving CMP

July 1, 2001

I have received a lot of mail regarding my May 2001 column. Thanks to all who commented. The following is typical of the inquiries.

Q: I have a question about your response in the May 2001 column concerning running a 600-pair PE-rated outside-plant cable exposed through a "plenum air return" rated space under a raised floor. Doesn't 800-53(a) answer the question point blank? It reads, "Cables installed in ducts, plenums, and other floor spaces used for environmental air shall be type CMP." I do not know of a 600-pair PE-rated outside-plant cable that is CMP, so rigid or IMC conduit would be required, would it not?
Tom Rause, RCDD
MasTec Enterprise Network Services
Raleigh, NC

A: While outdoor/indoor CMP-rated optical-fiber cables are available, you are certainly correct that no large-count outside-plant copper cable carries a CMP rating. The National Electrical Code 1999 Article 800-30 sees to that.

You are also correct that Article 800-53(a) says, "Plenum. Cables installed in ducts, plenums, and other spaces used for environmental air shall be type CMP." From this statement, it is a logical assumption that if the space under the floor in a "main computer room" were used as a plenum air return, then CMP would be required. But that is not always the case. The Information Technology Equipment Rooms covered in Article 645 are an exception.

First, a description. An Information Technology Equipment Room is an enclosed area, with one or more means of entry, that contains computer-based business and industrial equipment designed to comply with NFPA 75-1999, Standard for the Protection of Electronic Computer/Data Processing Equipment, and NEC Article 645-2.

Article 645-2 lists the following requirements to qualify as an Information Technology Equipment Room:

  • Provide some means to disconnect power to all electronic equipment and all dedicated HVAC systems serving the ITER, and to cause all required fire/smoke dampers to close;
  • Either provide a dedicated HVAC system or smoke/fire dampers at the point of penetration of the room;
  • Install only listed IT equipment;
  • Restrict occupancy to IT maintenance and operations staff;
  • Separate the room from other occupancies by fire-resistant-rated walls, floors, and ceilings with protected openings;
  • Comply with the applicable building code.

Sounds like a data center to me.

Article 645-5(d) allows installation of communications cables under a raised floor, provided that the raised floor is of "suitable construction" (whatever that means to you and your authority having jurisdiction); that the area under the floor is "accessible" (anything other than one solid piece should qualify); that the underfloor ventilation is used only for the ITER (the dampers should cover this one); and that the openings in the raised floor for cables protect against abrasion and minimize the entrance of debris beneath the floor.

Sounds like removable floor panels with grommets, which are also handy for keeping the cold air under the floor.

Article 645-5(d)(5)(c) lists the cable types suitable for use under raised floors. Surprise! For communications, it is CM and MP from Article 800-53. Everyone always assumes that CMP is required.

Article 645-6 says, "Cables extending beyond the information technology equipment room shall be subject to the applicable requirements of this Code." And the Article 645-6 FPN (Fine Print Note) says, "... for communications circuits, refer to Article 800." Hence, we are back where we started with Article 800-50 Exception No. 3.

Bottom line: If a cable originates and terminates within the ITER, it is covered under Article 645. But if a cable only originates or terminates within the ITER, it is covered under Article 800.
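
For those who think in code, here is a minimal sketch of that decision rule. The function name and inputs are mine, for illustration only; they are not the Code's language.

    def governing_article(originates_in_iter, terminates_in_iter):
        # Article 645 governs only cables that both originate and
        # terminate inside the Information Technology Equipment Room.
        if originates_in_iter and terminates_in_iter:
            return "Article 645"
        # A cable that enters or leaves the ITER falls back to Article 800.
        return "Article 800"

    # Example: a cable that starts in the ITER but ends elsewhere in the building
    print(governing_article(True, False))  # -> Article 800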

Thoughts on data-center design

Data centers contain a company's most valuable asset: its data. But does any of this sound familiar?

  • The power goes out, and a generator is ordered.
  • A circuit breaker trips, and the distribution system is redesigned.
  • A lightning strike chars a few power supplies, and a lightning-protection system is installed.

All good things, but proactive rather than reactive implementations would have saved the downtime and repair costs.

If you have ever designed a data center, you know that planning is crucial, with diversification and redundancy being key to a good data-center design. While all the proverbial eggs are in one basket, try to avoid stacking them on top of each other.

To prevent a single failure from affecting service:

  • Obtain power from two separate power grids;
  • Specify a generator to provide backup power;
  • Install uninterruptible power supplies to prevent spikes and provide a seamless supply of electricity during a switchover from main to reserve power;
  • Specify redundant downflow discharge modular cooling units;
  • Use multiple telecom providers for Internet connectivity;
  • Cluster servers logically but not physically. Distribute servers within the same cluster throughout the ITER so that, should a problem affect a rack or section of racks, the cluster continues to operate (see the sketch after this list);
  • Decentralize cabling within the ITER for the same reason.
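
A minimal sketch of that placement idea, assuming a simple round-robin across racks. The data structures and names here are mine, not from any particular management tool.

    from collections import defaultdict

    def place_servers(clusters, racks):
        # Rotate through racks within each cluster so that no single
        # rack ever holds an entire cluster.
        placement = defaultdict(list)
        for cluster, servers in clusters.items():
            for i, server in enumerate(servers):
                placement[racks[i % len(racks)]].append(cluster + "/" + server)
        return dict(placement)

    # Two three-server clusters spread across three racks:
    print(place_servers(
        {"web": ["w1", "w2", "w3"], "db": ["d1", "d2", "d3"]},
        ["rack-A", "rack-B", "rack-C"],
    ))
    # {'rack-A': ['web/w1', 'db/d1'], 'rack-B': ['web/w2', 'db/d2'],
    #  'rack-C': ['web/w3', 'db/d3']}

Losing rack-A takes one server from each cluster, not a whole cluster.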

And to increase system reliability:

  • Keep it simple. Complexity, which tends to cost more initially and is more expensive to maintain, is the enemy of reliability.
  • Reduce the opportunity for human error. Complacent humans can be the data center's worst enemy.

To minimize risk from physical security violations:

  • Specify card access control;
  • Specify CCTV monitoring;
  • Specify physical crash barriers on outside entry doors.

To minimize risk of incidental damage from fire-suppression systems:

  • Specify a dry-pipe preaction sprinkler fire protection system;
  • Specify a very early smoke detection apparatus (VESDA) system.

And if you feel that survivability of your data center is at least as important as the snack bar and restrooms in your building, then you need to specifically require CMP cabling, because the NEC does not.

Did you know that 99.99% system availability means that you are down 52 minutes per year? 24/7 operation of the ITER dictates redundancy not only for the IT equipment, but also for the cooling and power systems, such as switchgear, generators, UPS modules, and batteries, all of which require maintenance.
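
The arithmetic is easy to check with a back-of-the-envelope sketch (assuming a 365.25-day year):

    MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960

    def downtime_minutes(availability):
        # Downtime is simply the unavailable fraction of the year.
        return (1 - availability) * MINUTES_PER_YEAR

    print(round(downtime_minutes(0.9999), 1))   # "four nines" -> 52.6 minutes
    print(round(downtime_minutes(0.99999), 1))  # "five nines" -> 5.3 minutes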

If you plan to use the underfloor space for cabling as well as cooling, consider that IT equipment is getting physically smaller, creating additional space within the equipment racks in the ITER that can accommodate more IT equipment.

Or can it? The new, smaller equipment produces the same amount of heat as the older, larger equipment. Hence, more equipment demands more power and produces more heat load, which requires more cooling, which will require even more power and more space.
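
To see how quickly the heat adds up, here is a rough sizing sketch. Essentially every watt an IT load draws ends up as heat, and 1 watt is about 3.412 BTU/hr; the rack counts and wattages below are made-up illustrative numbers.

    WATTS_TO_BTU_PER_HR = 3.412  # 1 W of IT load ~= 3.412 BTU/hr of heat

    def cooling_load_btu_per_hr(racks, watts_per_rack):
        # Virtually all electrical power drawn becomes heat to be removed.
        return racks * watts_per_rack * WATTS_TO_BTU_PER_HR

    # Doubling the equipment density doubles the cooling load:
    print(cooling_load_btu_per_hr(20, 2000))  # -> 136480.0 BTU/hr
    print(cooling_load_btu_per_hr(20, 4000))  # -> 272960.0 BTU/hr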

And then there is the question of where to put the cabling. If the underfloor area was minimally sized, more cabling would reduce the airflow, just when additional cooling is required.

Just where we want the most protection, the NEC requires the least. Many designers mistakenly assume that the NEC protects equipment. Not true. The NEC protects human life. From this perspective, you can see why CMP or even CMR is not a requirement in a room filled with equipment that, during a fire event, would be completely isolated from the occupied portion of the building by fire-rated walls, floor, and ceiling, and by dampered or fire-stopped penetrations.

Solution: The designer must specify the type of cabling to be installed in the data center. Otherwise, you will likely get the minimum required by code.

Donna Ballast is a communications analyst at The University of Texas at Austin and a BICSI registered communications distribution designer (RCDD). Questions can be sent to her at Cabling Installation & Maintenance or at PO Drawer 7580, The University of Texas, Austin, TX 78713; e-mail: [email protected].
