If 10-Mbit/sec Ethernet causes congestion, do we have to go to a higher bandwidth like 100 Mbits/sec just to survive?
Molex Premise Networks
People are talking about the need for more data speed to their desk, what with so-called killer applications like multimedia, high-definition graphics, videoconferencing, and massive relational databases. The common belief is that 10-megabit-per-second Ethernet is too slow and causing congestion on our networks and that we all have to go to 100 Mbits/sec or Gigabit Ethernet just to function as a normal office. Just how much data speed do we really need to our workstations?
The first thing we need to look at is bandwidth. The simplest way to find out how much bandwidth you actually use is to hire a network analyzer, drop it on your network for a week, and measure it. Then apply growth models to that to see where you're going. It's not that difficult to do. A few forecasting models detailed below will enable you to determine what your bandwidth needs really are.
First, let's start with where the data is coming from and going to: you and me. In most cases, there is a human being at the end of the horizontal cabling. The cabling plugs into a PC or some other device, but a human is inputting or extracting the data on the network.
What is the bandwidth of a human? You've got five senses: sight, sound, smell, taste, and touch. Your number one sense is sight. If you look at any of the models of the brain, the visual areas of the cortex far outweigh any other sensory area. The human is a device that works very well optically. The visual sensory system is basically a pattern-recognition device that uses edge-detection pattern recognition. You see changes. Have you ever looked at a tree and suddenly seen a bird move, your eyes going right to it? You didn't notice the bird was there until it moved. You sensed the change in the pattern.
Those pattern changes are used in all our compression algorithms. When I started taking a look at sight and how much data was really involved, I tried to analyze it with the rods and cones in the eye and finally gave up and decided to take a look at applications instead. High-definition television (HDTV) is a good place to start.
I remember a while back in 1991 at the Telecom Show in Geneva, I walked into the booth of OKI Semiconductor, which, at that time, was a part owner of mod-tap (now Molex Premise Networks) and a channel partner in Japan. I noticed a couple of fish tanks along one side of its display. One of them didn't look quite right. I went up to it and found that one was a fish tank and the other was a television. Until I viewed them from the right angle, I hadn't realized they were demonstrating HDTV. From a normal viewing position, you couldn't tell the difference between the fish tank with fish in it and the television playing a video of a fish tank--it was that good. HDTV, or digital television (DTV), transmits data at 1.6 gigabits per second, which means that at 1.6 Gbits/sec, I can't tell the difference between it and real life. And my eyes don't respond that well. My visual system can't load that much data.
I've also seen a demo in Japan with two TVs side by side. One showed a raw HDTV signal, and the other showed an HDTV signal compressed 200 to 1, down to 8 Mbits/sec, and I couldn't tell which was which. Now, how do you get that kind of compression? If our eyes could resolve that much signal, we would see the individual pictures coming up, not a moving picture. So I'm throwing more information at the eye than it can absorb.
HDTV is 1400 lines, 1200 columns, a 30-hertz scan rate, and 24 bits of color. Twenty-four bits of color equals 16 million colors. What could you possibly need 16 million colors for? Sixteen million colors are used for surface modeling, as on a CAD/CAM workstation. If I want to show the curved surface of a blue ball, I change the shade of blue, and your eyes see that as a curved surface. By changing the way the gradations run, I can change the curve of the surface.
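As a rough check, multiplying those figures gives a raw rate of about 1.2 Gbits/sec; that the quoted 1.6-Gbit/sec figure includes additional overhead (audio, blanking, error correction) is my own assumption:

```python
# Back-of-envelope raw HDTV bit rate from the figures in the text.
lines, columns = 1400, 1200        # picture geometry
frames_per_sec = 30                # scan rate
bits_per_pixel = 24                # color depth
raw = lines * columns * frames_per_sec * bits_per_pixel
print(f"{raw / 1e9:.2f} Gbits/sec")  # prints 1.21 Gbits/sec
```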
But that's only for still pictures. I can get the same effect at a 30-Hz scan rate by using fewer shades and switching from one shade to another every 30th of a second. So one of the basic compression techniques is to drop the 24 bits of color down to 16 bits of color, cutting the palette by a factor of 256 (though the data per pixel drops only 1.5 to 1), and the picture in motion still looks fine. What that means is that if you freeze-frame your HDTV picture, it may not be as good a quality of picture. It won't work on a CAD/CAM terminal, because those display solid still pictures.
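Note the two different ratios at work here: going from 24-bit to 16-bit color shrinks the number of available colors by 256 to 1, even though the raw data per pixel shrinks only 1.5 to 1:

```python
# Palette reduction vs. data reduction when dropping 24-bit to 16-bit color.
color_ratio = 2**24 / 2**16   # 16.7 million colors down to 65,536
data_ratio = 24 / 16          # bits per pixel
print(color_ratio, data_ratio)  # prints 256.0 1.5
```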
But getting back to my analysis: if I can't tell the difference between an 8-Mbit/sec compressed HDTV signal and a 1.6-Gbit/sec raw HDTV signal, my visual capability is 8 Mbits/sec or less. Let's round it up to 10 Mbits/sec. Since you have two eyes for binocular vision, that means you have an effective capacity of 20 Mbits/sec for your visual sense.
Sound: A compact disc plays 650 megabytes in 74 minutes, or about 70 Mbits/min, which is 1.2 Mbits/sec. That isn't a lot of data, and it covers the full hearing range of 20 Hz to 20 kilohertz. Call it 2 Mbits/sec to make the arithmetic easier.
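A quick check of the CD arithmetic (assuming the full 650 MB is audio):

```python
# Average bit rate of a 650-MB compact disc played over 74 minutes.
cd_bits = 650e6 * 8              # 650 megabytes as bits
per_minute = cd_bits / 74        # Mbits per minute
per_second = per_minute / 60     # Mbits per second
print(f"{per_minute / 1e6:.0f} Mbits/min, {per_second / 1e6:.2f} Mbits/sec")
# prints 70 Mbits/min, 1.17 Mbits/sec
```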
What about the other senses--smell, taste, and touch? The only way I have to model them, at this point, is by looking at the latest technology. Using PET scans, where the brain is wired up and researchers watch what kind of activity appears, they have found that the visual cortex is, by far, larger than any of the other sensory areas.
What I'm going to suggest, then, is that these senses take substantially less bandwidth than vision. Even if I am generous with them, I get at most 50 Mbits/sec for all five senses together; realistically, it's closer to about 30 Mbits/sec. In other words, if I put a little fiber-optic jack right in the side of your neck and dumped 50 Mbits/sec down it, we would have you in full virtual reality: sight, sound, smell, taste, and touch. If, in the end, I'm sending information out for a person to use, and I can use all those sensory channels, my machine is going to be feeding it out at 50 Mbits/sec or less.
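The whole back-of-envelope budget can be laid out in a few lines. The 8-Mbit/sec allowance for smell, taste, and touch is my own assumption, chosen only to land on the roughly-30-Mbit/sec figure above:

```python
# Back-of-envelope human sensory bandwidth, following the article's numbers.
per_eye = 1.6e9 / 200           # compressed HDTV looks real: 8 Mbits/sec
sight = 2 * 10e6                # round up to 10 Mbits/sec, times two eyes
sound = 2e6                     # CD audio (~1.2 Mbits/sec), rounded up
other = 8e6                     # smell + taste + touch: assumed, well below sight
total = sight + sound + other   # about 30 Mbits/sec
generous = 50e6                 # the generous ceiling quoted in the text
print(f"{total / 1e6:.0f} Mbits/sec (generous cap {generous / 1e6:.0f})")
```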
Now, I'm going to want data to come into the machine faster than that, because I want to buffer it--like a CD player in a car when you hit a bump in the road. If you had one a few years ago, you lost the music for a second while the player found its place again. Today, players buffer a few seconds of music, so when you hit the bump, the sound comes out of the buffer while the player finds its place on the disc. It would be awful if you were in virtual reality and lost your connection and had to come back into reality again.
So effectively, if there is a human being at the end, I want something faster than 50 Mbits/sec so I can buffer it. But the output of my device to the person running it is going to be less than 50 Mbits/sec. Anything more than that is supporting the machine, not the person using it. And I can already give you 1 Gbit/sec to your desktop over Category 5 unshielded twisted-pair cable.
Current network bandwidth
In the horizontal-cabling segment, the bandwidth limitation is in the equipment, not the cable. I can give you more bandwidth over the cable currently installed in your office than you can ever physically use. Now, if you put a machine in there, the machine can use more, but machines are patient about receiving data. I can dump the data into the machine over the course of an hour; how fast I want to get it out of the machine to me is the critical issue.
Again, you can do all the analysis in the world, and you'll find that most networks' individual channels are active only a small portion of the time. And when they are active, they burst data through at 100 kilobits per second to 1 Mbit/sec. Effectively, your current network is pretty slow. Put a network analyzer on it and see for yourself. You will be surprised how little bandwidth most desktops are using. Even high-end engineering workstations doing things like software compiling generally operate at less than 1 Mbit/sec average bandwidth. Average, remember: lots of data going through in little tiny pieces of time, with lots of empty space in between.
So, we need to model this network-protocol migration up through all these different networks, including Gigabit Ethernet and 2.4-Gbit/sec Asynchronous Transfer Mode (ATM).
How do we do that? We're going to model it, and the model I still use is Moore's Law. Gordon Moore was one of the founders of a little company called Intel. Back in 1965, he modeled the growth of the semiconductor industry and came up with "Moore's Law," which says that the density of an integrated circuit doubles roughly every 18 months. That means your processor gets twice as powerful and your memory twice as dense. One of the first corollaries is that, 18 months on, you can get the same processor power and the same memory for about half the price. Moore's Law has held for the past 20 years.
Take one example: this is my 19th PC. My first machine was an Apple II in 1980, with 16 kilobytes of memory and a 160-kilobyte drive. This machine carries 160 MB of RAM and a 5-GB hard drive. That's roughly 10,000 times more memory than my first machine, in about 18 years. Run the numbers and the model holds. It also means that in 15 years, I will get 10 doublings, or about a 1000-times increase. So if you design your cabling plant for 15 years, you have to design it with 1000 times the data-carrying capacity you need today. Put a network analyzer on the network to find out what you're doing today, apply Moore's Law, and you have a pretty good model.
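The growth model sketches out in a couple of lines (`moore_factor` is my own name for it, not a standard function):

```python
# Growth factor if capacity doubles every `doubling_months` months.
def moore_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(moore_factor(15))   # 10 doublings: 1024.0, i.e. roughly 1000x
print(moore_factor(18))   # 12 doublings: 4096.0; the 10,000x figure above
                          # is the direct 16-KB-to-160-MB memory ratio
```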
Now, with all models, you want to test them, so I decided to test this one on modem speed. I went to Hayes: in 1981, you could buy a 300-bit/sec modem from Hayes for about $15,000. And in 1981, we were really happy to get those 300-bit/sec modems, because it meant someone could be typing in one office while the computer sat somewhere else (a fast typist generates about 80 bits/sec). I could support terminal service over a dial-up phone line at 300 bits/sec.
Today, we have 56-kbit/sec modems--187 times faster in 16 years. That looks like a lot less than Moore's Law! But the price has dropped from $15,000 to $200. What can you buy for $15,000 today? I can get a 748-kbit/sec modem; 300 bits/sec to 748 kbits/sec is about 2500 to 1 over those 16 years. If 15 years gives 1000 times, then 16-and-a-half years gives about 2000. Guess what? Moore's Law pops out again.
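Checking the modem numbers against the same model:

```python
# Modem speedups over 16 years, versus Moore's Law prediction.
consumer = 56_000 / 300             # 56k vs 300 bits/sec at consumer prices
same_budget = 748_000 / 300         # holding the budget at $15,000
predicted = 2 ** (16.5 * 12 / 18)   # 11 doublings over 16.5 years
print(round(consumer), round(same_budget), round(predicted))
# prints 187 2493 2048
```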
Another model is transmission-performance warranties. In 1975, mod-tap guaranteed 9.6 kbits/sec over twisted-pair cabling. People said that we were crazy. "Everybody knows that you can't run RS-232 signals for more than 15 meters. It won't work," they said. Today, we guarantee 1 Gbit/sec. Not over the same twisted-pair as back then, though--it is an improved twisted-pair, but it costs about the same per meter. Run the numbers: we're talking on the order of 22 years and 100,000 times. Moore's Law works again.
I believe Moore's Law will continue to hold. Intel announced a few months back that it could accelerate the pace to a doubling every nine months. I think Intel can do that in the laboratory, but it can't manufacture at that rate--it can't jump manufacturing technologies that fast, and it is too expensive to build facilities at that pace. A semiconductor fabrication plant now costs about $5 billion and takes five years to build.
U.S. Internet traffic
Let's include one other piece of information, just to put bandwidth in perspective. The U.S. Internet--most people don't realize this--does not run over the telephone-system backbone; it has its own. There are four sites called peering sites. Internet service providers come into these sites, which are linked together by high-speed backbones that bypass the existing phone system. Each of those peering sites and backbones is run by a particular company. The largest is MCI, and the second largest is US Sprint. MCI carries 60% of the Internet backbone--at least at this point in time. In December 1996, it carried 350 terabits of traffic for the month. Divide that out and you get an average of 131 Mbits/sec, meaning I could drop MCI's entire Internet backbone on your desk over a 155-Mbit/sec ATM link. That kind of backbone bandwidth is phenomenal.
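The division behind that 131-Mbit/sec figure:

```python
# Average rate of MCI's Internet backbone traffic for December 1996.
monthly_traffic = 350e12           # 350 terabits carried in the month
seconds_in_dec = 31 * 24 * 3600    # December has 31 days
average_rate = monthly_traffic / seconds_in_dec
print(f"{average_rate / 1e6:.0f} Mbits/sec")  # prints 131 Mbits/sec
```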
Forget about bandwidth as a problem at the desktop. Bandwidth is going to be a problem on the backbone. The real problem we've all got with the backbone is the public switched network. If you can give me a phone line into my home at just 1 Mbit/sec, I'm going to be really happy.
This article is reprinted from the February/March 1999 edition of Cabling Installation & Maintenance Australia/New Zealand.
Paul Andres is managing director of Molex Premise Networks World Corp., a Molex Inc. company.