Actually, it comes from the design of the CSMA/CD protocol used by Ethernet. That stands for "Carrier-Sense Multiple Access with Collision Detection." If you visualize the original Ethernet, which had transceivers tapped off of a coax backbone cable, it works like this: when a controller has a packet to transmit, it listens for a carrier on the coax. If there is no carrier, it transmits immediately. Otherwise, it waits for the other transmission to complete, waits the specified inter-packet gap, and begins transmitting. While transmitting, it listens for anyone else attempting to transmit at the same time. If it detects a collision, it transmits a brief jam signal so that every station on the segment sees the collision, and then aborts the packet.
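
If it helps, here's a rough Python sketch of that transmit procedure. The "hardware" functions at the top are trivial stand-ins of my own so the sketch actually runs; in a real controller they'd be wired to the transceiver, not written in software.

    import time

    # A rough sketch of the transmit side of CSMA/CD, as described above.
    # The functions below are stand-ins for what the controller hardware does.

    INTERFRAME_GAP_US = 9.6              # inter-packet gap at 10 Mb/s

    def carrier_sensed():                # stand-in: pretend the wire is idle
        return False

    def collision_detected():            # stand-in: pretend nobody collided
        return False

    def transmit_bit(bit):               # stand-in for driving the coax
        pass

    def jam():                           # stand-in for the jam pattern
        pass

    def csma_cd_transmit(frame_bits):
        # 1. Defer while someone else's carrier is on the wire.
        while carrier_sensed():
            pass
        time.sleep(INTERFRAME_GAP_US / 1e6)

        # 2. Transmit, listening for a collision the whole time.
        for bit in frame_bits:
            transmit_bit(bit)
            if collision_detected():
                jam()                    # make sure everyone sees the collision
                return False             # caller runs the backoff algorithm
        return True                      # frame went out without a collision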

At that point, it begins to execute the Truncated Binary Exponential Backoff algorithm, which basically means it waits a random number of slot times before attempting to retransmit the packet, with the range of the random wait doubling after each successive collision (up to a cap, which is the "truncated" part).
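
In Python, with the usual 10 Mb/s numbers (51.2 µs slot time, exponent capped at 10, 16 attempts before giving up), it looks roughly like this:

    import random

    SLOT_TIME_US = 51.2     # one slot time at 10 Mb/s (512 bit times)
    BACKOFF_LIMIT = 10      # the "truncated" part: the range stops doubling here
    ATTEMPT_LIMIT = 16      # after 16 collisions the frame is dropped

    def backoff_delay_us(collisions):
        """Delay before the next retry, after `collisions` collisions so far."""
        if collisions > ATTEMPT_LIMIT:
            raise RuntimeError("excessive collisions; give up on this frame")
        k = min(collisions, BACKOFF_LIMIT)
        # Wait a random whole number of slot times in [0, 2**k - 1].
        return random.randint(0, 2**k - 1) * SLOT_TIME_US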

The issue with distance is the collision detection scheme. At the bit rates Ethernet operates at, you've transmitted a lot of bits before the first bit reaches the other end of the cable. If you transmit a rather short packet, you might finish sending it before its first bit reaches the far end of the Ethernet. Someone at the far end could then, seeing a clear ether, begin transmitting. He would collide with your packet, destroying it. But since you had already finished transmitting before he stepped on it, you would never realize that your packet wasn't delivered. Your packet is simply lost.
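
To put rough numbers on that (the 2500 m length and 2e8 m/s signal speed are ballpark assumptions of mine, just to show the scale):

    # Rough numbers for 10 Mb/s Ethernet on a long coax run.

    BIT_RATE       = 10_000_000     # bits per second
    CABLE_LENGTH_M = 2500           # a maximum-ish end-to-end coax run
    SIGNAL_M_PER_S = 2e8            # roughly two-thirds the speed of light

    one_way_delay_us = CABLE_LENGTH_M / SIGNAL_M_PER_S * 1e6   # ~12.5 us
    short_frame_bits = 40
    transmit_time_us = short_frame_bits / BIT_RATE * 1e6       # 4 us

    # 4 us < 12.5 us: the sender is long done before its first bit even
    # reaches the far end, so a collision out there is invisible to it.
    print(one_way_delay_us, transmit_time_us)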

Lost packets are a really bad thing. Higher-level protocols like TCP are designed to recover from lost packets, but they do it by timing out and retransmitting, which wastes a lot of bandwidth and time.

So, Ethernet was designed to prevent undetectable collisions. It does this by defining a "slot time", which is essentially the round-trip delay through a maximum-length Ethernet LAN. The spec then defines a minimum packet size that takes at least a full slot time to transmit. This guarantees that a controller on one end of the network will see a collision caused by a controller at the far end, allowing it to enter the backoff algorithm and ensure that the packet is eventually transmitted successfully.
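
Concretely, at 10 Mb/s the spec pegs the slot time at 512 bit times, and the minimum packet size falls straight out of that:

    # How the slot time turns into a minimum packet size, using the
    # standard 10 Mb/s figures: a 512-bit-time slot, i.e. 64 bytes.

    BIT_RATE       = 10_000_000      # bits per second
    SLOT_TIME_BITS = 512             # round-trip budget, repeaters included

    slot_time_us    = SLOT_TIME_BITS / BIT_RATE * 1e6   # 51.2 us
    min_frame_bytes = SLOT_TIME_BITS // 8               # 64 bytes

    # A sender of a minimum-size packet is still transmitting a full slot
    # time later, so a collision anywhere on a maximum-length LAN gets
    # back to it before it stops listening.
    print(slot_time_us, min_frame_bytes)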

If you exceed the maximum cable length, you can create a situation where collisions go undetected and packets are silently lost, which substantially degrades network performance.

But wait, the situation is even more complicated than that...

A hub essentially emulates the old Ethernet coax, so my foregoing description applies to anything connected to a hub.

A switch, on the other hand, operates on completely different principles. The connection to a switch is an unshared, point-to-point connection between the switch and the device at the other end. Most switches use a full-duplex version of Ethernet that doesn't involve collision detection. So what I wrote doesn't apply.

Unless... the device connected to the switch is a hub. In which case it does apply.



