With blades front and center, efficiency is a common objective

March 1, 2008
Blade servers, including those with marketing slogans of “out with wires,” heavily impact wiring and wire management.

by Patrick McLaughlin

When was the last time you heard about a sprawling data center in which components were well spaced out, heat generation was under control, and management did not have to give a second thought to power-supply issues?

IBM’s BladeCenter includes connectivity components within the chassis that sharply reduce cables in server-to-switch connections, but do not remove cables from the data center entirely.

Facilities of that type probably exist somewhere, but they certainly are not the examples that get discussed in the technology trade media. Publications ranging from this one, which covers the network’s first layer, to those that pay attention to higher-layer network functions, constantly delve into the compact, cramped, heat-generating, power-craving data centers that endlessly challenge managers to gain efficiencies wherever and however possible.

In these data centers that we hear so much about, the blade server appears to be the epicenter of activity. Blades epitomize the density and heat generation that are the real story for many data center managers today, and that density and concentrated heat also set the tone for best practices within the facilities where blades reside.

Understanding how and why blades have such an impact on data center management first requires an understanding of what blades are. Speaking of blade providers in general, because the products offered from different vendors bear many similarities, Scott Tease, marketing manager for IBM’s blade products (www.ibm.com/bladecenter), says, “We’ve taken a 1U server—the pizza-box style—and pulled out everything that is not necessary for a single node’s operation. We pulled out elements such as the power supply, fan, CD/DVD drive, and server management, leaving the server’s basic necessities.”

Those elements that were pulled out of the server, Tease says, are shared in a chassis: “For example, a chassis that hosts up to 14 blades will have four power supplies. The concept is all about sharing, and can be compared to virtualizing hardware. It’s a smarter way to put hardware together that does not involve redundant parts.”

The key derivative of the blade-and-chassis architecture is efficiency. “That includes energy efficiency,” Tease explains. “Everything you don’t have to run saves energy, such as DVD drives, floppy drives, and systems management. While density is nice to have, it is not the main driver of the adoption of blades. One of the main drivers is energy efficiency.”

Using an example of a company that needs 84 servers, Tease continues, “Using blades can reduce power consumption. Blades will consume a minimum of 35% less power than 84 separate 1U servers. You can also fit all 84 blades in a single rack, whereas 1U servers will take up two full racks. So, density is a second benefit of blades.”
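To make that arithmetic concrete, the minimal sketch below works through the 84-server comparison using the figures Tease cites (14 blades per chassis, a 42U rack, and at least 35% power savings); the 400 W draw per 1U server is a hypothetical placeholder, not a vendor figure.

```python
# Rough arithmetic for the 84-server example above.
# Assumptions: 14 blades per chassis, 42 usable rack units per rack,
# and a hypothetical 400 W draw per 1U server (placeholder only).
SERVERS = 84
BLADES_PER_CHASSIS = 14
RACK_UNITS = 42
WATTS_PER_1U = 400        # illustrative assumption, not a vendor figure
BLADE_SAVINGS = 0.35      # "a minimum of 35% less power," per the article

# Rack space: one 1U server per rack unit vs. 84 blades in a single rack.
racks_for_1u = SERVERS / RACK_UNITS               # 2.0 racks
chassis_needed = SERVERS / BLADES_PER_CHASSIS     # 6 chassis

# Power: 84 discrete 1U servers vs. blades drawing at least 35% less.
power_1u = SERVERS * WATTS_PER_1U                 # 33,600 W
power_blades = power_1u * (1 - BLADE_SAVINGS)     # no more than 21,840 W

print(f"1U servers: {racks_for_1u:.0f} racks, {power_1u:,} W")
print(f"Blades:     1 rack ({chassis_needed:.0f} chassis), <= {power_blades:,.0f} W")
```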

Eliminating cables?

A third benefit of blades, and one that could initially strike fear into the hearts of professionals whose livelihoods revolve around the installation of cabling systems, is that these compact servers significantly reduce the number of cables installed during setup.

Not long ago, IBM ran a television advertising campaign that included the slogan “Out with wires,” in which data center managers were haunted by (and able to vanquish, thanks to blades) a snarled ball of cable. Though evidently gone from TV commercials, the snarled ball is alive and well at ibm.com/blades.

“Traditionally, seven cables per server have to be plugged in, routed, and plugged into a switch—two power cables, two Ethernet cables, two storage cables, and a systems-management connection,” Tease notes. “A bladed environment cuts that number down by about 80% while providing the same connectivity and functionality” as the fully wired environment.

The chassis’ midplane is the cable-reducing entity, acting as the point at which the blade’s connections are made to a switch. Blades are loaded into the chassis’ front, at which time they are connected to the midplane. Switches are loaded into the chassis’ rear, also connected to the midplane, and the server-to-switch connections are made without wires. “Think of the midplane as cabling designed onto a PC card,” Tease summarizes.

The chassis is such an anchor in the IBM BladeCenter environment that Tease uses the word “disposable” to comparatively describe the servers. “The way IBM has implemented blades, the chassis is treated as the infrastructure,” he explains. “We don’t consider the chassis and the blade as the same type of purchase. Blades are almost disposable, and the chassis almost becomes part of the rack. Once the installer gets the chassis installed the first time, the user can get a blade and run it for a couple years. When the user is upgrading servers, the installer can literally walk up to the rack, pull out the old blade, put in the new blade, and they’re done. All the power and switching remain the same.”

IBM’s BladeCenter E Chassis just celebrated its fifth year on the market and still accounts for approximately 40% of chassis sales, underscoring the chassis’ vitality in these computing environments. An IBM presentation entitled “Why haven’t you moved to IBM BladeCenter?” includes a slide that boasts the BladeCenter chassis eliminates as many as 112 cables: 42 Ethernet cables, 28 power cables, 28 fiber-optic (storage) cables, and 14 KVM cables.
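The slide’s total is straightforward to verify; the short tally below simply sums the per-chassis figures quoted above, which work out to eight connections per blade position, in the same range as the seven cables per discrete server Tease describes.

```python
# Tally of the "as many as 112 cables" figure from IBM's slide,
# for one fully populated 14-blade chassis.
cables_eliminated = {
    "Ethernet": 42,
    "power": 28,
    "fiber-optic (storage)": 28,
    "KVM": 14,
}

blades = 14
total = sum(cables_eliminated.values())
print(f"Cables eliminated per chassis: {total}")     # 112
print(f"Average per blade: {total / blades:.0f}")    # 8
```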

IBM’s, and other blade providers’, elimination of cables from these server-to-switch connections is trumpeted as a benefit because it allows data center managers to avoid the perils and challenges of cabling management that have been chronicled in this publication and elsewhere. In particular, IBM’s presentation says the cable elimination lets data center managers realize benefits ranging from improved serviceability to speedier installation, better airflow, and reduced points of failure.

The cabling perspective

While IBM’s BladeCenter midplane eliminates the need for cables between blades and switches, data centers are still packed with switch-to-switch connections via copper and fiber cabling—as anyone who has ever lifted a raised-floor tile and found a sea of cabling can attest. Concerning cabling as a whole in the data center, ADC’s (www.adc.com) John Schmidt says, “As a cabling manufacturer, when we look at the data center, we see that virtualization is huge. Blade servers reduce cable, as we all have seen on television commercials. One result is that because you have more processor speed relying on a single connection, it drives the need for higher-performing cable.”

Schmidt adds, “Through virtualization especially, several applications either stay up or go down with a single connection. We find users that spend significant amounts of money on their infrastructure are likely to spend the money for higher-end cabling—Category 6, Category 6A, or fiber.”

Schmidt further explains, “Another interesting impact for structured cabling is that users end up with higher-end processors in servers, and a higher density of them. So, on a square-foot basis, you have more processing than you would in an environment of 1U servers. Therefore, that processing power is producing more heat. Blades are more energy efficient than running 1U servers, but the blade itself is throwing a lot of heat.”

The challenge, Schmidt says, is to create cabinets and other cable-management equipment that allow sufficient airflow. “Promoting better airflow has been one of our key messages to the market,” he notes. “As an industry, we’re trying to reduce the size of cabling and improve cable management. Improper management will restrict airflow. Today, you see a lot of information from manufacturers of cabinets about improved airflow. Formerly, cooling was as simple a matter as blasting more CRACUs [computer-room air-conditioning units] through the raised floor.”

Today, many data center managers find that increasing CRACU output is not only inefficient, it’s sometimes impossible because they simply do not have the power to put more cooling equipment online. Given these facts of life in modern data centers, increasing emphasis is being placed on better management—including cable management.

IBM’s Tease concurs that cable management is a challenge: “If you have four or five cables coming from each server, you have to find homes and paths for them. If a bunch of cables are sitting on the back of a server, that server will have to work harder to get air from the front end to the rear.” Such a situation only increases the server’s heat generation.

He adds, however, that the heat-producing stigma often slapped on blades must be put into perspective. It is true that a blade produces more heat than a 1U server—heat that requires a certain amount of power to dissipate. The blade itself also consumes more power than a 1U server does.

“The power-consumption argument is one we run into commonly,” Tease says. “At a rack level, the blade will consume more power than a 1U. But the blade is double the density.” In other words, on a per-port basis, the blade consumes less power than a 1U server. He re-emphasizes that one of the key reasons users are buying blades today is energy efficiency.
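A quick sketch may help reconcile the rack-level and per-server views, reusing the 35% savings figure quoted earlier and the same hypothetical 400 W per 1U server; the absolute wattages are placeholders, and only the ratios matter.

```python
# Reconciling "more power at the rack level" with "less power per server."
# The wattage is a placeholder assumption; only the ratios matter.
WATTS_PER_1U_SERVER = 400               # hypothetical draw for one 1U server
BLADE_SAVINGS = 0.35                    # blades draw at least 35% less

watts_per_blade = WATTS_PER_1U_SERVER * (1 - BLADE_SAVINGS)   # 260 W

# One rack holds 42 1U servers, or (at double the density) 84 blades.
rack_of_1u = 42 * WATTS_PER_1U_SERVER   # 16,800 W
rack_of_blades = 84 * watts_per_blade   # 21,840 W, higher at the rack level

print(f"Per rack:   1U {rack_of_1u:,} W vs. blades {rack_of_blades:,.0f} W")
print(f"Per server: 1U {WATTS_PER_1U_SERVER} W vs. blade {watts_per_blade:.0f} W")
```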

“Very few clients are limited by space availability,” he says. “Very many have to stop installation when they hit a certain power and cooling level.”

The upshot for data center managers in blade-heavy environments is that the port density also equates to heat density. Blades are more efficient, yes. But the heat they generate is localized, and must be dealt with accordingly.

As this publication reported in May 2007, some believe that blades’ localized heat generation ultimately will be managed by localized cooling. At last year’s AFCOM Data Center World conference, keynoter Christian Belady—then of HP and now of Microsoft—predicted an evolution toward localized, “on-the-hotspot” type cooling in blade environments that he described as “closely coupled cooling.”

Toward greater efficiency

Tease states that IBM has continuously looked for ways to drive energy efficiency and “green” initiatives. “Three years ago, we looked at every aspect of the server that consumed power,” he recalls. “The processor consumes ‘x’ percent; the memory consumes ‘x’ percent. And we’re going to knock down each piece of that pie, one-by-one, until we have driven out every bit of power we can.” Tease cites solid-state disk drives as an example, stating they consume 98% less power than traditional spinning disks.

As every aspect of data center management strives toward greater efficiency, blades such as IBM’s are marketed on their reduction or elimination of cables in server-to-switch connections. Yet cabling remains abundant within data centers, and the way that cabling must be managed is shaped, to a great extent, by the presence of those blade servers.

PATRICK McLAUGHLIN is chief editor of Cabling Installation & Maintenance.
