Structured cabling's role in data center, network efficiency

Sept. 1, 2008
Efforts at energy efficiency, specifically including server virtualization, are unveiling anew the need for high-performance, high-bandwidth structured cabling systems.

The networking industry produces more carbon emissions than the entire country of Argentina. By 2010, it will consume enough power to require 10 new coal-fired power plants in the United States alone. Its power consumption doubled between 2000 and 2006, and will double again by 2011.

The culprit behind all this carbon emission is not one of the usual suspects associated with massive energy consumption; it is the data center: massive arrays of servers, storage area networks (SANs), and Ethernet switches, all of which must be powered and cooled to manage, store, and route the electronic data we produce at an ever-increasing rate.

Unlike many industries, however, ours is getting ahead of the curve in finding innovative solutions to these problems. Google's philanthropic arm recently announced a massive investment in green technology to find more efficient means of powering, cooling, and storing information. Certainly, Google's credo, "Don't be evil," was part of the reason for the funding; but equally important was the bottom line. Being "green" in the data center environment can provide companies with a strategic cost advantage over the competition.

Being green in this industry goes beyond reduce/reuse/recycle; it is about maximizing efficiency through innovative technology. Think of the value to be gained by a consumer-products company that can mine consumer behavior and pricing data more efficiently than its competitors can. Or the financial company that can use data analysis to more accurately predict which loans are likely to default. Or the Internet company that can provide faster downloads, 100% uptime, and better search queries.

Efficiency's practical benefits

In the modern competitive environment, there is tangible value in being able to create, store, and extract data from our networks more efficiently. The pace at which these changes must happen is also accelerating. In 2000, the United States consumed about 25 petabytes of bandwidth per month. Last year, five companies/applications combined to consume more than 25 petabytes: YouTube, MySpace, Yahoo, iTunes, and Xbox Live. These applications require more servers, storage devices, and other networking equipment to remain viable and competitive. Between 2000 and 2010, the number of servers in the United States alone will triple to 15.8 million, even with technologies such as server virtualization that are intended to dramatically reduce the number of servers in use.

At the same time, we are seeing significant increases in the cost to power and cool these devices. These factors have created the perfect storm: Demand for more storage and processing power increases the number of devices as well as the power consumed per device. As processors become more powerful, they generate more heat. To operate reliably, the processors must be cooled, thereby increasing the cooling costs of the data center. More cooling requires even more power, on top of the additional power provided to the servers and devices.
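To see how these costs compound, consider a rough back-of-the-envelope sketch (the load and overhead figures below are illustrative assumptions, not measurements from any particular facility):

# Illustrative sketch of how cooling overhead compounds IT power draw.
# The figures are assumptions chosen for illustration only.
it_load_kw = 500.0        # power drawn by servers, storage, and switches
cooling_per_it_kw = 0.7   # assume 0.7 kW of cooling for every kW of IT load

cooling_kw = it_load_kw * cooling_per_it_kw
facility_kw = it_load_kw + cooling_kw

print(f"IT load:        {it_load_kw:.0f} kW")
print(f"Cooling load:   {cooling_kw:.0f} kW")
print(f"Facility total: {facility_kw:.0f} kW")
# Under these assumptions, every additional kilowatt of server power costs
# roughly 1.7 kW at the facility meter once cooling is included.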

When 40- and 100-Gbit/sec Ethernet systems arrive in data centers, they will be connected via parallel optics using MPO connectors, requiring significant fiber-management planning.

Increased demand for power stresses an already maxed-out power grid, increasing the cost of that energy and creating the need for more power plants, thus producing more carbon emissions. The press reports on the dangers of carbon emissions, and these reports get downloaded, e-mailed, and archived—on more servers. It is a vicious cycle, but the fact remains that data centers are a necessity of modern life, which is why so many are proactively working to improve efficiencies.

Power, power, power

Location is a key determining factor in the efficiency and profitability of a data center. In the past, access to affordable real estate was the key. Today, access to affordable, reliable power, coupled with real estate costs, is far more important. Approximately 70% of the power produced for a data center is lost through the power generation and distribution process. Giving data centers close access to renewable sources of power can help reduce this loss.
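A quick calculation puts that figure in perspective; the sketch below simply applies the cited 70% loss to an arbitrary example load:

# Sketch: upstream power required per unit delivered, applying the ~70%
# generation-and-distribution loss figure cited above.
loss_fraction = 0.70
delivered_mw = 1.0                                   # load the data center actually draws
generated_mw = delivered_mw / (1.0 - loss_fraction)  # power that must be produced upstream

print(f"Delivering {delivered_mw:.1f} MW requires roughly {generated_mw:.2f} MW of generation.")
# Siting the facility near the source (hydroelectric, geothermal) shortens
# the distribution chain and trims part of that loss.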

That is why we have seen so much activity and press concerning more rural and unusual locations for data centers. Access to hydroelectric power on the Columbia River has created a boom of data-center activity in central Washington. Iceland has recently been promoting itself as an ideal place for data-center expansion because of its access to geothermal power, as well as a climate well-suited to environmental cooling (i.e., using the ambient outside temperature to help cool the data center).

At the server level, two main areas of innovation have become mainstream. First is the idea of dramatically increasing processor efficiency. New quad-core technology from Intel and AMD is cited as reducing power consumption by up to 44%. More-efficient processors also produce less heat, which means cooling costs are also reduced. A second method to improve efficiency in the server is by reducing the total number of servers required. It is obvious that a single server is going to draw much less power than several servers, but the challenge has been how to reliably do the work of several servers on just one.

A data center with virtualized servers demands redundant, high-reliability, high-bandwidth cabling such as Augmented Category 6, because each virtualized server affects many more users than a traditional server did.

By using server virtualization, however, one can host several applications on virtual machines within one physical machine. Maximizing the utilization of the physical machine by running multiple virtual machines increases the data center's overall efficiency, reducing the number of idle servers that still consume significant power. Server virtualization also makes the data center more reliable by making it easier to port applications from one server to another within the same data center or offsite at a disaster-recovery site.
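The consolidation arithmetic behind that efficiency gain is straightforward. The sketch below works through it with illustrative numbers (server counts, utilization levels, and wattages are assumptions, not figures from the article):

import math

# Sketch of the consolidation arithmetic behind server virtualization.
# Utilization and wattage figures are illustrative assumptions.
physical_servers = 20
avg_utilization = 0.10          # each legacy server mostly sits idle
legacy_power_w = 250            # draw per legacy server, busy or not

target_host_utilization = 0.60  # how hard each virtualization host is run
host_power_w = 400              # draw per (larger) virtualization host

work = physical_servers * avg_utilization                  # total useful load
hosts_needed = math.ceil(work / target_host_utilization)   # VMs packed onto hosts

before_w = physical_servers * legacy_power_w
after_w = hosts_needed * host_power_w
print(f"{physical_servers} lightly loaded servers -> {hosts_needed} virtualized hosts")
print(f"Power draw: {before_w} W before, {after_w} W after")
# 20 x 250 W = 5,000 W before versus 4 x 400 W = 1,600 W after, under
# these assumptions.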

Most of the work concerning infrastructure-based efficiency has focused on power and cooling. Everything from direct current (DC) powering of equipment to liquid cooling of processors has been explored. Still today, most data centers are running alternating-current (AC) power and computer room air conditioners (CRACs) to cool their equipment. Raised-floor environments allow cool air to be directed to perforated tiles in the data center's cold aisles, through which air can be drawn and then fed to cool active electronics.

This setup, however, has created the opportunity for many bad habits. When placed incorrectly, perforated tiles allow cold air to escape in useless areas, and tend to lower the airflow in areas that need cool air. Poor segregation of hot aisles from cold aisles allows warm air to re-circulate and raise inlet temperatures. Improper cable management around active equipment decreases airflow, requiring active-equipment fans to work harder, and potentially reducing reliability through increased temperatures on the active equipment.

Structured cabling a lynchpin

Structured cabling certainly has its place in contributing to a green data center. Good cable-management practices improve passive airflow, but beyond this it is also important to look at what can be accomplished through higher bandwidth. Virtual servers and blade servers are more efficient per unit of storage than traditional equipment, but with a significant portion of data now stored on a single device, higher-bandwidth cabling to that device becomes critically important.

For server virtualization, in which several applications may be running simultaneously on a single machine, having high-bandwidth and redundant cabling is a must—a virtualized server affects many more users than does a traditional server that spends much of its time idle. That is a major reason why the industry is seeing much higher demand for higher-bandwidth technologies, such as Category 6, Augmented Category 6, laser-optimized fiber, and singlemode fiber.

Work being done by the IEEE 802.3ba working group on 40- and 100-Gbit/sec Ethernet is likely to result in a massive increase in fiber counts for multimode systems because of parallel optics. These topologies are likely to use 12-fiber MPO connectors, with cable assemblies carrying four active transmit and four active receive fibers per assembly for 40-Gbit/sec Ethernet. For 100-Gbit/sec Ethernet, parallel-optics topologies will use 10 active transmit and 10 active receive fibers spread over two MPO connectors.
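Those lane counts translate directly into connector planning. The short sketch below tallies active and unused fibers per link, assuming the 12-fiber MPO connectors and lane assignments described above:

# Sketch: fibers used per parallel-optics Ethernet link, assuming 12-fiber
# MPO connectors and the lane counts described above.
MPO_FIBERS = 12

links = {
    # speed: (transmit fibers, receive fibers, MPO connectors per link end)
    "40-Gbit/sec Ethernet":  (4, 4, 1),
    "100-Gbit/sec Ethernet": (10, 10, 2),
}

for name, (tx, rx, mpo_count) in links.items():
    active = tx + rx
    available = mpo_count * MPO_FIBERS
    print(f"{name}: {active} of {available} fibers active across "
          f"{mpo_count} MPO connector(s); {available - active} unused")

Multiplying these per-link counts by the number of links planned for a row of cabinets is what drives the density, raceway, and connectivity planning discussed below.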

This architecture requires planning for a massive increase in fiber density, which has implications for cable management, fiber raceway, and connectivity. Planning for a greater concentration of fiber also creates challenges concerning how to deploy the systems. Typical fiber panels designed for low-density fiber in a LAN telecommunications room are not suitable for future data-center growth; fiber frames designed specifically for the data center will be required.

Ultimately, innovation will prevail over the challenges facing the data center industry. More-efficient processing, virtualization, and power and cooling, coupled with sustainable energy sources nearer the point of use, will allow data center capacity to continue to grow while reducing its energy impact.

It is critical to remember that structured cabling—copper and fiber—will continue to play a key role via improved cable management and increased bandwidth, allowing users to store, manage, and access information within the data center.

John Schmidt is senior product manager, business development for ADC's TrueNet Structured Cabling Systems (www.adc.com).


Dual-core processors

A dual-core processor is a CPU with two separate cores on the same die, each with its own cache. It is the equivalent of getting two microprocessors in one. In a single-core or traditional processor, the CPU is fed strings of instructions it must order, execute, and then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows performance to the maximum speed the bus, RAM, or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multitasking. In this case, the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.

In a dual-core processor, each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. Now when one is executing, the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD's and Intel's dual-core flagships are 64-bit.

To use a dual-core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT, the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
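To illustrate the idea in miniature, the sketch below splits a CPU-bound job across two worker processes so the operating system can schedule each onto its own core (a Python illustration of the concept only; the SMT-aware applications described above do this natively in their own code):

from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Deliberately CPU-heavy work: count the primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Two workers, so a dual-core machine can run both at once.
    with ProcessPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(count_primes, [50_000, 50_000]))
    print(results)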

A dual-core processor is different from a multi-processor system. In the latter, there are two separate CPUs with their own resources. In the former, resources are shared and the cores reside on the same chip. A multi-processor system is faster than a system with a dual-core processor, while a dual-core system is faster than a single-core system, all else being equal.

An attractive feature of dual-core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user, the difference in performance will be most noticeable in multitasking until more software is SMT-aware. Servers running multiple dual-core processors will see an appreciable increase in performance.

Multi-core processors are the goal, and as technology shrinks, there is more "real estate" available on the die. In the fall of 2004, Bill Siu of Intel predicted that current motherboards would remain in place until four-core CPUs eventually force a changeover to incorporate the new memory controller required for handling four or more cores.
Source: wisegeek.com
