Top of the class in campus networking

July 1, 2010
George Mason University’s core data center, 10 years in the making, ties together numerous information technology systems.

By Carol Everett Oliver, RCDD, ESS, Berk-Tek, a Nexans company

At George Mason University (“Mason”), construction is on a constant upswing. In the 50 years since its inception, the university has grown into the largest in the Commonwealth of Virginia, and it continues to expand to accommodate a student population that now exceeds 32,000, drawn from all 50 states and 125 countries.

Mason began as a branch of the University of Virginia in 1957; by 1972 it had separated to become its own four-year, degree-granting institution with four distributed campuses. In 2008, U.S. News & World Report named Mason the nation’s number-one up-and-coming university, largely on the strength of its ongoing physical transformation and innovations. The university’s construction investment over the past decade-plus exceeds $500 million.

Central to the university’s plans and advancement policies has been its leadership’s determination to stay ahead of the evolution in data and telecommunications. “Mason recognized back in the early ’80s that networking and technology systems should be its own separate entity and not fall under facilities,” states John Hanks, advisory network engineer, network engineering and technology for the Technology Systems Division (TSD), Information Technology Unit (ITU). Mason jumped on the fast track of network improvement, much to the credit of Hanks, who has been with the ITU since 1981 and has overseen the changes and upgrades.

The culmination of all these networking upgrades is a new, efficient data center to be completed this year. Ten years in the making and housed in a new three-story facility, the data center will become the main core, interconnecting the other four centers to tie in all of the university’s buildings. “The cabling infrastructure, which has evolved with the network through the years and is now formalized in our building standards for all ITU projects, has become the cornerstone of Mason’s technological future,” states David Bellinghoven, manager of network infrastructure for TSD Network Engineering and Technology, ITU.

Network revolution

To understand the importance of this well-planned and well-executed data center on a university campus, it helps to understand that its roots stem from a long and winding road of network revolution. Mason has become a poster child for other universities undergoing similar network transformations.

John Hanks, advisory network engineer for the Technology Systems Division, points out locations of ongoing expansion projects on George Mason University’s Fairfax campus to David Bellinghoven, manager of network infrastructure.

Before the term “network” even existed, Mason’s first computer system in the early 1980s consisted of punch cards that were batched and transported to William & Mary’s IBM system. The first “network” consisted of remote terminals that were carried from building to building and plugged in for data entry, then returned to the mainframe, where the information was downloaded.

One of the major milestones in the university’s telecommunications history was separating the networking group from administration and facilities. The new division was named “Academic Data Computing,” which eventually became the Information Technology Unit (ITU), encompassing both voice and data.

The first campus broadband system, tied together with hard-line coaxial and serial connections, was designed in-house in the mid ’80s. About the same time, Ethernet arrived, and the campus broadband system connected to Ethernet bridges housed in the computer room in Thompson Hall, the campus’s sixth-oldest building; that room became Mason’s first data center. The first implementation of the Internet came in 1987 over SURAnet, the Southeastern Universities Research Association network, which was among the first TCP/IP networks to sell commercial connections and went on to become one of the first and largest Internet service providers (ISPs). At that time, Mason was running 10Base2 and 10Base5 connections from Thompson Hall to the science and technology buildings.

Standards and structured cabling

In the early ’90s, Mason’s ITU division installed a Token Ring-over-fiber system to connect some of the Fairfax campus buildings. At that time, all the horizontal cabling systems for voice, data and video ran over proprietary systems. In 1993, the ITU division presented a proposal to completely revamp the cabling plant, both backbone and horizontal, with a standardized structured cabling system designed and installed according to the newly developed TIA/EIA-568 standards. In 1994, the Commonwealth of Virginia funded a $12 million project to rewire the entire campus.

To alleviate overcrowding in the cabinets and racks caused by the 250 copper connections of Berk-Tek’s LANmark-2000 enhanced Category 6 cable between the LAN/WAN and servers, the cable is run through the overhead pathway rack system from Legrand | Ortronics, which includes overhead patch panels.

“We wanted 100 percent control of our maintenance holes and our communication cores,” notes Hanks. “When we did the rewiring and new infrastructure, part of the agreement was that only IT personnel or contractors could go into the telecom room and that IT would provide all the backbone services, interconnection and a network that would run better than ever… we’ve kept that promise,” he adds.

The duct banks included all new underground conduits encased in concrete. “This was a big change since we were previously running cables in old HTHW [high-temperature hot water] pipes with other wiring and utilities. We then became a resource to other departments with an IT-controlled duct system,” states Bellinghoven.

The structured cabling overhaul included installing multimode and singlemode fiber in east/west rings to every building and setting up two communication cores, where all the copper and fiber backbone cabling collapsed. Eventually this would expand to four communication cores with total redundancy. The new active equipment included switches, PBX systems and routers, along with an upgrade of the Internet connection to quad T1 lines.

Before the overhaul, the horizontal cabling was a mix of Category 3 installed for voice and multiple Category 5 outlets for data (10Base-T and 100Base-T). “The horizontal cable runs were installed when the data bandwidth was rapidly growing from Category 3 to Category 5,” states Hanks. Since then, the standard for horizontal cabling has been three Category 5 outlets, used for both voice and data.

By 2001, the ITU had formulated its own cabling standards, paralleling industry standards, with specifics on cable and connectivity types, all active and passive hardware, telecom-room layouts, and installation best practices and procedures. “Standards help keep things consistent,” states Michael Mauck, senior network engineer, network engineering and technology, TSD/ITU. “[The current] spec, based around the MasterFormat Division 27, is so tight that they can give it to the general contractor and there should be no questions,” explains Renee Skafte, manufacturer’s rep with Network Products, Inc. (www.npiconnect.com).

The specifications and standards were “set in stone” through CAD documents and called for the cabling system to be the warranted NetClear GT2 cabling solution from Berk-Tek, a Nexans Company (www.berktek.com), and Legrand | Ortronics (www.ortronics.com) for the horizontal runs from the TRs to the device outlets. This system incorporates Berk-Tek’s LANmark-1000 enhanced Category 6 cable for data and voice applications and terminates into Ortronics Clarity modular 110 patch panels in the telecom rooms. “The spec is specific down to the most minute detail, including drawings of the workstation outlet layout,” states Greg Luce, network infrastructure engineer, who joined the ITU eight years ago specifically to formalize the standards onto the CAD system.

“The real differentiator is the NetClear warranty, which includes channel performance better than the TIA standards. This sets the bar high and becomes gold to us. With all the construction, moves, adds and changes, the 25-year NetClear warranty on all our copper and fiber not only assures a quality product for years to come, but also guarantees the workmanship, due to the required certification training of the installers,” explains Bellinghoven. “We want to make sure we have quality installation because we have to answer to our clients, who are ultimately the taxpayers and citizens of the Commonwealth of Virginia,” adds Hanks.

Back to the future

Since the standards were set in place, campus construction has exploded. In the last year alone, there were 27 projects, including renovations and additions on the Fairfax campus. In the past six months, this campus has constructed an additional 800,000 square feet in six new buildings.

The university has since upgraded to an even higher grade of Category 6. Berk-Tek’s LANmark-2000 is part of the NetClear GT3 Premium Category 6 solution, which provides total usable bandwidth beyond 400 MHz and guarantees lab-verified horizontal copper channel performance 8 dB better than all Category 6 crosstalk requirements and 6 dB better than all Category 6 return loss requirements.
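For a rough sense of what that headroom means in absolute terms, consider the 100-MHz reference point (the baseline limits below are the TIA-568 Category 6 channel values, cited here for context rather than taken from the article): the Category 6 channel requires at least 39.9 dB of NEXT loss and 12.0 dB of return loss at 100 MHz, so the guaranteed margins would imply approximately

$$\mathrm{NEXT}(100\ \mathrm{MHz}) \ge 39.9 + 8 = 47.9\ \mathrm{dB}, \qquad \mathrm{RL}(100\ \mathrm{MHz}) \ge 12.0 + 6 = 18.0\ \mathrm{dB},$$

where higher dB values indicate better performance for both parameters.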

The focal point of the last decade has been the design and construction of a new 50,000-square-foot building, known as the Aquia Building. The building allocates 25,000 square feet to the ITU, including 8,000 square feet for a new university data center. The original plans called for renovating the existing data center, which had been located in Thompson Hall for over 30 years. It wasn’t the space that forced the move, but rather that they ran out of power. “The focus shifted to a permanent facility that would house a state-of-the-art, modular data center that could take us through the next 30 years,” notes Hanks. So with a budget of $22 million (almost twice that of the entire infrastructure budget from the ’90s), the three-story facility was built to house the data center and to provide an additional 25,000 square feet of swing space.

The data center includes four sections: an enterprise LAN/WAN system, a server support area, dedicated system services specifically for research, and a “co-lo” area to support other departments within the campus. All of this is powered by a 750-kVA uninterruptible power supply (UPS), backed up by a 1,500-kVA generator.

Whiting-Turner, the general contractor, awarded the cabling contract to Vision Technologies (www.visiontechnologiesinc.net), which carries the state contract; Vision in turn delegated the outside-plant (OSP) termination to New Century Services (www.newcenturyservicesinc.com). Bob Hanson of Vision Technologies headed up the structured cabling teams for both inside and outside plant. Rick Castiglia, RCDD, president of New Century, has been installing the campus’s OSP cabling since the infrastructure was first installed in the ’90s and still has all of the original drawings. The outside-plant cable into the data center includes multiple 144-fiber trunk cables. The fiber is terminated into high-density 72-port Ortronics OptiMo FC Series fiber cabinets housed in Mighty Mo cable management racks.
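As a back-of-the-envelope check on those counts (the port configuration is an assumption for illustration, not stated in the article), if each of the 72 ports is a duplex connection using two fibers, then a single 144-fiber trunk fully populates one cabinet:

$$\frac{144\ \text{fibers}}{2\ \text{fibers per duplex port}} = 72\ \text{duplex ports}.$$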

Within the data center, Vision installed a variety of fiber runs between the switches and copper connections between the servers. “In dealing with a mix-and-match situation, we recommended Berk-Tek’s pre-terminated fiber assemblies because they have the flexibility to have LC connectors on one end and MTP on the other without changing the construction of the glass,” explains Skafte. “In addition, these assemblies are pretested, so this reduces the cost of installation and increases the reliability of the cable,” she adds.

To alleviate overcrowding in the cabinets caused by the 250 copper connections, NPI recommended the Mighty Mo overhead pathway rack system from Legrand | Ortronics. “This was a big contributor to efficient space utilization, which was key for this data center,” states Skafte. “These overhead pathway racks were a unique solution because they do not take up valuable rack space; they are mounted above the racks and attached to the tray.”

“This data center is set to propel us forward and to maintain the growth we continue to experience,” states Hanks. “In my 30 years here I’ve seen this campus evolve from a mainframe environment, to a distributed network, back to a mainframe-type with the data center as its core,” he observes. “With our solid NetClear infrastructure and standards in place, we are assured that we’ve done things right and in the process built a foundation for things to come,” he adds.

Carol Everett Oliver, RCDD, ESS, is a marketing analyst with Berk-Tek, a Nexans company (www.berktek.com).
