Designing, manufacturing, and installing fiber connectivity solutions for future-readiness

Jan. 29, 2018
Data Center Systems installed fiber-optic cabling for a global energy company’s headquarters relocation and data center consolidation.
How DCS guided a global energy company through a headquarters and data center installation project.

By Kevin Ehringer, Data Center Systems

In 2012 a global energy company announced plans to relocate its United States headquarters and consolidate numerous data centers in the process. The new 30,000-square-foot data center came with a few critical requirements: it needed to support legacy equipment while providing a foundation for increased capacity and future technologies.

The company sought out a partner that could expertly design the data center’s fiber management solution; manufacture all hardware and cable components; and install, test, and certify the end-to-end cable infrastructure—all within a limited time frame in a highly regulated industry.

That’s when they turned to Data Center Systems (DCS), a Dallas-based company that designs, manufactures, and installs fiber-connectivity solutions. DCS designs and manufactures all its products in the U.S.—a detail that the energy company did not overlook. The end-user’s project manager commented, “DCS went up against the big dogs in the business and beat them decisively.”

Custom design and manufacturing process

The project called for creative design, innovative components, flexibility, and speed. DCS spent time educating the client on structured cabling and link-loss budgets, length limitations, and how the infrastructure DCS designed would support the data center’s future needs.
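To make that link-loss conversation concrete, here is a minimal sketch of the kind of budget arithmetic involved. Every value in it is an illustrative assumption, not a figure from the DCS design; real budgets come from the application standard and manufacturer specifications.

```python
# Illustrative link-loss budget check. All values are assumptions for the sake
# of example, not figures from the DCS design.

FIBER_ATTEN_DB_PER_KM = 3.0   # assumed multimode attenuation at 850 nm
CONNECTOR_PAIR_LOSS_DB = 0.5  # assumed loss per mated connector pair
SPLICE_LOSS_DB = 0.3          # assumed loss per splice

def channel_loss_db(length_m, connector_pairs, splices=0):
    """Estimate total insertion loss for one channel, in dB."""
    fiber_loss = FIBER_ATTEN_DB_PER_KM * (length_m / 1000.0)
    return (fiber_loss
            + connector_pairs * CONNECTOR_PAIR_LOSS_DB
            + splices * SPLICE_LOSS_DB)

# Example: a 90 m channel passing through a central patching location,
# giving four mated connector pairs end to end.
loss = channel_loss_db(length_m=90, connector_pairs=4)
budget = 2.6   # assumed channel insertion-loss budget for the target application
status = "within budget" if loss <= budget else "over budget"
print(f"Estimated loss {loss:.2f} dB vs. budget {budget} dB: {status}")
```

The same arithmetic, run in reverse, is what sets the length limitations mentioned above: the more mated pairs a channel passes through, the less attenuation is left for fiber distance.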

Though the client already had a design laid out, DCS stepped in and made recommendations that would prove critical to the success of the data center’s connectivity.

“Our plan was to design for the future,” said Don Mokry, technical account manager for DCS. “Also, manageability—we not only design our products with a logical cable management infrastructure, we train our clients on the best practices in patching, cable routing, and the importance of using the correct-length patch cords to avoid clutter. We always have to keep that on the forefront of our minds when designing.”

A total of 104 trunk cables provide 568 MTP connections, equaling 3,408 individual ports. The trunks support 10-, 40-, and 100-Gbit/sec Ethernet; 8- and 16-Gbit/sec Fibre Channel; and 8- and 16-Gbit/sec FICON.
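The port arithmetic behind those figures works out if each MTP connection is assumed to break out into duplex channels over 12-fiber connectors. The snippet below is a back-of-the-envelope check under that assumption; the breakout factor is not a detail taken from the DCS bill of materials.

```python
# Rough check of the article's port math. The 12-fiber breakout factor is an
# assumption for illustration, not a detail from the DCS bill of materials.

TRUNK_CABLES = 104
MTP_CONNECTIONS = 568
FIBERS_PER_MTP = 12                        # assumed 12-fiber MTP/MPO connectors
DUPLEX_PORTS_PER_MTP = FIBERS_PER_MTP // 2

total_ports = MTP_CONNECTIONS * DUPLEX_PORTS_PER_MTP
print(f"{TRUNK_CABLES} trunks -> {MTP_CONNECTIONS} MTP connections "
      f"-> {total_ports} duplex ports")    # 568 * 6 = 3,408
```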

The design needed to support the following systems: mainframes; application and cloud servers; storage area network (SAN) and other data storage devices with fiber support for 4- and 8-Gbit Fibre Channel and a future move to 16-Gbit FC; and open systems with fiber support for 10- and 40-Gbit Ethernet, and eventually for 100-Gbit Ethernet.

Tying these devices together required three custom-manufactured DCS Multi-Bay Central Patching Locations (CPLs). Rather than the standard off-the-shelf open relay racks with vertical wire managers, DCS customized the central patching location to fit against the client’s wall. Equipped with a series of open-frame racks with vertical cable management, the custom Multi-Bays have lockable accordion doors for security, which can be opened for patching, moves, adds, and changes. The one-sided design proved superior to the usual double-sided design because there are no obstacles between racks, and more bays can be added easily as the data center grows.

A Brocade DCX 8510-8 (8-Gbit/sec Fibre Channel) backbone switch, connected using DCS staggered trunk cables.

Also on the custom-design lineup: a 10U Mimic Panel. For a large director-class switch, most vendors offer a 4U panel that provides 144 channels. But what happens when, as in this case, the customer needs 568 channels represented at its central patch? Other companies would simply supply multiple 4U panels; DCS saw no logic in that and instead custom-manufactured a 10U design whose layout matches the client’s switch. The result: less documentation and fewer cut sheets. Rather than spreading one director-class switch across four panels, then having to determine which of the four to go to for every move, add, or change, the client works from a single 10U panel.

“It’s a much more logical and physical pattern that matches their switch,” Mokry said.

At the CPL, DCS’s own Switch Mimic Enclosures and adapter panels visually replicate the ports on the supported switches, so connections on every switch represented at the CPL can be identified quickly. Moves, adds, and changes made at the Switch Mimic panel control the actual switch with no need to physically touch it.
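Conceptually, a mimic panel is a fixed one-to-one map between panel positions and switch ports, so a patch made at the panel is, in effect, the patch on the switch. The short sketch below models that idea with invented names, blade counts, and port counts; it does not reflect DCS’s actual labeling scheme or hardware.

```python
# Toy model of a switch-mimic panel: each panel position mirrors exactly one
# switch port. Switch name, blade count, and port count are hypothetical.

def build_mimic_map(switch_name, blades, ports_per_blade):
    """Map mimic-panel positions to (switch, blade, port) locations."""
    mapping = {}
    position = 1
    for blade in range(1, blades + 1):
        for port in range(1, ports_per_blade + 1):
            mapping[f"PANEL-{position:03d}"] = (switch_name, blade, port)
            position += 1
    return mapping

mimic = build_mimic_map("SAN-DIR-01", blades=8, ports_per_blade=48)
print(len(mimic), "panel positions")   # 384 positions in this hypothetical layout
print(mimic["PANEL-001"])              # ('SAN-DIR-01', 1, 1)
```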

One-of-a-kind installation

With design and manufacturing completed, DCS moved into the installation phase, which presented its own unique challenges. Standard shipping vendors couldn’t be used to deliver the equipment because the data center was just one unlabeled building among many under construction on the 50-acre campus. To get its hardware and cables there on time, DCS rented trucks and delivered the components to the site itself.

Any new construction in an industry as highly regulated as the energy industry is going to come with requirements that are rarely seen in a data center project. Because the new headquarters campus was still being built, cranes and heavy equipment were scattered in multiple places throughout the site. Every person—technician, salesperson, anyone—who planned to visit the site had to successfully complete an eight-hour safety-training class held offsite.

“Typically we go into data centers that are ready-set, and that type of construction wouldn’t normally be taking place around it,” Mokry said. That didn’t faze the team; it got a firsthand look at how a company’s headquarters is built, including delivering product to a data center that wasn’t quite 100-percent functional. Even though the freight elevator had only just become operational and the air conditioning wasn’t yet running throughout the facility, DCS moved forward with the installation, because a delay in the data center would have held up the entire construction project.

Because the client had no experience with 40-Gbit Ethernet, DCS trained several employees over a three-month period on protocols, maintenance, and other unique aspects of 40G. For starters, MTP connectors have a male (pinned) side and a female (unpinned) side, so DCS taught the client which end of each cable plugs in where, and how to route it.
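That “male and female side” rule boils down to one constraint: every mated MTP pair must join one pinned (male) connector and one unpinned (female) connector. The sketch below is a generic sanity check of that rule, not part of any DCS training material or tooling.

```python
# Generic sanity check for MTP/MPO gender pairing: a mated pair must join one
# pinned (male) connector and one unpinned (female) connector.

from typing import NamedTuple

class MtpEnd(NamedTuple):
    label: str
    pinned: bool   # True = male (has alignment pins), False = female

def check_pair(a, b):
    if a.pinned == b.pinned:
        gender = "male" if a.pinned else "female"
        return f"ERROR: {a.label} and {b.label} are both {gender}"
    return f"OK: {a.label} <-> {b.label}"

# 40GBASE-SR4 transceivers present pinned (male) MPO receptacles, so the cable
# end plugged into the transceiver must be female.
print(check_pair(MtpEnd("QSFP+ transceiver", pinned=True),
                 MtpEnd("equipment cord, end A", pinned=False)))
print(check_pair(MtpEnd("trunk MTP", pinned=False),
                 MtpEnd("cassette MTP", pinned=False)))   # flags a bad pairing
```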

In the end, the installation was completed approximately 45 days ahead of schedule. DCS provided the following:

  • 10-, 40-, and 100-Gbit-capable MTP trunks for Ethernet

  • 8- and 16-Gbit-capable MTP trunks for Fibre Channel

  • 8- and 16-Gbit-capable trunks for FICON

  • Middle-of-row tray-mounted frames with MTP cassettes for zone access points

  • Custom 6U and 10U Switch Mimic Adapter Panels

The client’s first 40-Gbit switch install drew an audience, as other onsite vendors watched the data center’s equipment go live for the first time. The install came up with zero errors, pleasantly surprising those vendors, who had never seen an infrastructure of that scale come online 100-percent clean; a 10- to 20-percent failure rate is typically expected. “They said they had never seen things come together so nicely,” Mokry added.

An ongoing relationship

Though the initial install was completed approximately five years ago, the client is still consolidating data centers to this day as it continues to acquire more companies, and that has led to a continued partnership with DCS.

In 2012, DCS designed an infrastructure that supported 4-Gbit; since then, the client has moved to 8-Gbit with plans to switch to 16-Gbit soon. Mokry noted, “The technology has grown over the years, but the infrastructure we put in place five years ago is still able to handle that growth and speed. That’s because of the link limitations we put in place when we designed the infrastructure for their data center. In the back of our minds, we’re always planning for the future and envisioning what 16-Gbit and 32-Gbit is going to be able to do.”

Another key consideration was dB loss, the amount of optical power lost as the light signal travels over the fiber. “We didn’t arbitrarily just start throwing trunks from one end of the data hall to the other end,” Mokry said. “If we had done that, they wouldn’t have been able to transmit 16-Gbit like they are now.”
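The practical effect of controlling loss up front is that the same installed channel can clear the progressively tighter insertion-loss budgets that come with higher data rates. The comparison below uses placeholder budget values purely to show the shape of that check; the real limits come from the applicable Fibre Channel specifications, not from this article.

```python
# Placeholder comparison: one measured channel loss checked against the tighter
# budgets of faster Fibre Channel rates. Budget values are illustrative
# assumptions; consult the FC-PI specifications for the real limits.

ASSUMED_BUDGETS_DB = {
    "8-Gbit FC": 2.6,
    "16-Gbit FC": 2.0,
    "32-Gbit FC": 1.9,
}

measured_loss_db = 1.8   # hypothetical measured loss for one installed channel

for speed, budget in ASSUMED_BUDGETS_DB.items():
    verdict = "supported" if measured_loss_db <= budget else "over budget"
    print(f"{speed}: {measured_loss_db} dB measured vs. {budget} dB budget -> {verdict}")
```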

As the client continues to grow, it will need more servers, switches, and storage, creating an ongoing need for more connectivity. Its partner, DCS, will continue to plan, design, and manufacture for the future of connectivity, which, as we know, is limitless.

Kevin Ehringer is founder and chief technology officer of Data Center Systems (DCS), which he founded in 2002. He serves on the board of directors of the Fibre Channel Industry Association.
