WiFi woes causing trouble for iPad users
Wireless local area network connectivity issues have been a slight drawback to what has otherwise been the blockbuster debut expected for Apple’s iPad. Within a couple of days of the device’s availability, Apple’s online support forum was flooded with discussion threads about weak WiFi signals.
The manufacturer released two support documents specifically for iPad WiFi access. In the document titled “Troubleshooting WiFi networks and connections,” the company advised users to move closer to the router or hotspot, among other techniques. Some individuals posting to the forum scoffed at that as a solution. One said, “Bear in mind that the all-metal enclosure of the iPad may be what’s blocking the signal. There’s a reason the iPod Touch has a plastic window in the back for the WiFi antenna, and the MBP’s antenna is housed within the plastic portion of the hinge.”
Soon thereafter, news organizations began reporting that incompatibility issues with the iPad’s WiFi connectivity prompted Israeli officials to seize the devices from anyone trying to enter the country with them.
The Christian Science Monitor reported this statement from the Israeli government: “The iPad device sold exclusively today in the United States operates at broadcast power levels [over its WiFi modem] compatible with American standards. As the Israeli regulations in the area of WiFi are similar to European standards, which are different from American standards, which permit broadcasting at lower power, therefore the broadcast levels of the device prevent approving its use in Israel.”
Perhaps that’s why it’s called disruptive technology.
Patch cord lays path for 100-Gig connectivity
The 100G Migration Patch Cord from Corning Cable Systems has a name that pretty well describes its function. The cord is meant to facilitate the conversion from 10- to 100-Gbit/sec Ethernet transmission in cabling systems designed around a 12-fiber MTP connector interface. According to the manufacturer, because 100GBase-SR10 Ethernet multimode fiber electronics use a 24-fiber connector, the 100G Migration Patch Cord eliminates the need for recabling or other major network modifications in 100-Gig-ready 12-fiber systems.
The company further explained that today many data center infrastructures are migrating to 12-fiber cabling systems that use array or MTP connectors, which allow for greater density in the backbone and horizontal than other interface styles do. MTP-to-LC modules break out these 12-fiber MTP connectors into duplex LC connectors that are used for duplex fiber serial transmission, such as 1- and 10-Gbit Ethernet.
With Corning Cable Systems’ LANscape Pretium EDGE and Plug & Play Universal systems, users can migrate from 10- to 40-Gbit/sec Ethernet by replacing the MTP-to-LC module with an MTP adapter and a 12-fiber MTP patch cord. Migration from 40- to 100-Gbit/sec Ethernet can be achieved on the same 12-fiber system by replacing the 12-fiber MTP patch cord with the 100G Migration Patch Cord, which has a dual-12-fiber-MTP to 24-fiber-MTP design.
The cord features a round cable and ClearCurve bend-insensitive multimode fiber.
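The fiber counts behind this migration path can be sketched with some simple arithmetic. The lane counts below come from the IEEE 802.3 physical-layer specifications for each Ethernet generation (they are not stated in the article itself, and the helper names are illustrative): serial 10-Gig uses one lane per direction over duplex fiber, 40GBase-SR4 uses four parallel 10-Gbit lanes per direction, and 100GBase-SR10 uses ten, which is why its electronics need a 24-fiber connector that a dual-12-fiber-MTP cord can feed.

```python
# Illustrative sketch of the fiber-lane arithmetic behind the 10-to-40-to-100-Gig
# migration path described above. Lane counts per IEEE 802.3 multimode PMDs.
ETHERNET_LANES = {
    "10GBase-SR": 1,     # serial duplex transmission (LC connectors)
    "40GBase-SR4": 4,    # 4 x 10-Gbit parallel lanes each direction (12-fiber MTP)
    "100GBase-SR10": 10, # 10 x 10-Gbit parallel lanes each direction (24-fiber MTP)
}

def fibers_required(standard: str) -> int:
    """Active fibers = lanes x 2 (one fiber per direction per lane)."""
    return ETHERNET_LANES[standard] * 2

# 100GBase-SR10 needs 20 active fibers; a 24-fiber MTP carries them with
# 4 fibers spare, and two 12-fiber MTP trunks supply exactly 24 fibers.
assert fibers_required("100GBase-SR10") <= 2 * 12
```

Under these counts, a 40-Gig link occupies 8 of a 12-fiber MTP trunk’s fibers, while a 100-Gig link consumes two such trunks through the dual-12-to-24-fiber cord, which is the substitution the 100G Migration Patch Cord performs without recabling.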
Uptime Institute goes off on TIA-942
As part of an ongoing effort to expose myths and misconceptions about its Data Center Tier Classification System, The Uptime Institute recently took issue with the notion that the TIA-942 Telecommunications Infrastructure Standard for Data Centers is a guideline for tier classifications.
“The similarities between the Uptime Institute Tiers and TIA-942 stop at the surface,” the group said in its latest round of Tier Myths and Misconceptions documents. “Uptime Institute Tiers is functionally disconnected from TIA-942,” it continued. “The core objective of Uptime Institute Tiers is to guide a design topology that will deliver high levels of availability, as dictated by the owner’s business case. Uptime Institute Tiers evaluates data centers by their capability to allow maintenance and to withstand a fault. Uptime Institute Tiers is not available in checklist form.”
Jonathan Jew, co-editor of the upcoming revision to TIA-942 and author of the article that begins on page 7 of this issue, concurs with TUI’s assertion. He explains, “The TIA-942 Tiering scheme was initially developed based on the concept of four tiers originally developed by TUI because we wanted to acknowledge that their scheme was in fact the most widely used for evaluating data center reliability, and they had very useful definitions associated with each tier.
“While TIA has remained with prescriptive definitions for each tier, TUI has decided to move to a functional approach. In the TIA scheme we might recommend a certain design solution, while TUI would be more open to various solutions as long as the result provided the desired level of availability.”
He continues, “The TIA’s scheme is open to evaluate the relative security and availability levels of a data center. However, just selecting the right pieces still does not guarantee the desired level of availability. It still requires competent engineers to design a system that functions properly. Because [TUI’s] system is based on function rather than components, their system can’t be completely put down in table form.”
This TIA-942 commentary is one of five myths and misconceptions The Uptime Institute is trying to squelch with its most recent “mythbusting” effort. The organization says these five most recent myths arose internationally. In addition to the TIA-942 myth, the others are that TUI’s Tier Classification System is U.S.-centric; that the system requires an emergency power off (EPO) button; that the system requires raised floors; and that Tier III and IV data centers require the engine-generator plant to be operational at all times.
“During recent visits in Latin America, Europe, Russia, Africa, and Asia, Uptime Institute encountered particular tier myths and misconceptions,” TUI said in an email in which it also listed the TIA-942 and four other myths. “These myths have taken attention away from the fundamental concepts of the Tier Classification System. The result has been shortfalls in design topology despite adequate budgeting. These shortfalls put the data center’s ongoing uptime at risk.”