Why 96 bits of silent time?

I know that in Ethernet, we must wait 96 bit times of silence before starting to send a frame. But why 96 bit times? Is there any specific reason for this number?

Reply to
typingcat

snipped-for-privacy@gmail.com wrote in part:

Sounds like homework! We don't do that around here, but I'll give you a _big_ hint: Ethernet originally was a shared medium [coax, aka yellow garden hose] of carefully limited overall length.

-- Robert

Reply to
Robert Redelmeier

If that is a homework question, for which the instructor expects a reasoned answer based on Ethernet parameters, then the instructor is somewhat misguided. Furthermore, your "hint" implies that the 96 bit interframe gap is somehow related to coaxial cable propagation delays, which is incorrect.

The 96 bit "silent time" was chosen as a reasonable value to allow Ethernet devices to "recover" after receiving a frame. In addition to allowing the physical layer to "settle" (i.e., dissipate stored charge and return to a quiescent state), controllers generally need to perform a variety of housekeeping functions following frame reception, including: initiating DMA transfer of the received frame to host memory, allocating a new buffer for receipt of the next frame, updating management statistics counters, etc. By imposing a minimum "silent time" between frames, we greatly simplified controller design.
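As a rough picture of that housekeeping, here is a minimal sketch in C of a hypothetical receive-completion routine. Every name in it is invented for illustration; it mirrors the three tasks listed above and is not quoted from any real controller.

/* Hypothetical per-frame housekeeping that must fit inside the
   interframe gap. All names are invented for illustration. */
#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 16
#define MAX_FRAME 1518            /* maximum Ethernet frame, in octets */

struct rx_ring {
    void    *buf[RING_SIZE];      /* receive buffers handed to the NIC */
    size_t   len[RING_SIZE];      /* length of each received frame     */
    unsigned head;                /* next descriptor to service        */
};

struct nic_stats {
    uint64_t rx_frames;
    uint64_t rx_bytes;
};

/* Runs once per received frame, before the next frame can arrive. */
static void rx_complete(struct rx_ring *ring, struct nic_stats *st,
                        void *(*alloc_buf)(size_t),
                        void (*dma_to_host)(void *frame, size_t len))
{
    unsigned i = ring->head;

    dma_to_host(ring->buf[i], ring->len[i]); /* 1: start DMA to host memory   */
    st->rx_frames++;                         /* 2: update statistics counters */
    st->rx_bytes += ring->len[i];
    ring->buf[i] = alloc_buf(MAX_FRAME);     /* 3: arm a fresh buffer         */
    ring->head = (i + 1) % RING_SIZE;        /* advance past this descriptor  */
}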

During the development of the 10 Mb/s Ethernet, we (the network architects) simply asked the controller designers how much time they thought they needed to perform all of these tasks. 9.6 usec (96 bit times at 10 Mb/s) was a number that all agreed was adequate, and that lent itself easily to a digital counter with a 100 ns clock. The number was completely unrelated to anything having to do with Ethernet propagation delays, collision timings, frame lengths, etc. It was only related to silicon and firmware performance in the controller.
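The arithmetic itself is easy to check. The following small C program (just an illustration, using exact integer nanoseconds) confirms that 96 bit times at 10 Mb/s comes to 9.6 usec, i.e. exactly 96 ticks of a 100 ns counter:

/* Sanity-check the numbers above using exact integer nanoseconds. */
#include <stdio.h>

int main(void)
{
    const unsigned bit_time_ns = 100;               /* one bit at 10 Mb/s      */
    const unsigned ifg_ns      = 96 * bit_time_ns;  /* 9600 ns = 9.6 usec      */
    const unsigned clock_ns    = 100;               /* controller clock period */
    const unsigned ticks       = ifg_ns / clock_ns; /* counter terminal count  */

    printf("IFG = %u.%u usec = %u ticks of a %u ns clock\n",
           ifg_ns / 1000, (ifg_ns % 1000) / 100, ticks, clock_ns);
    return 0;
}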

And now you know "the rest of the story."

--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

Thank you for your answer. It was not a homework assignment; the instructor just asked us this question during the session and told us to find out the reason ourselves. Mr. Seifert, you seem to be one of the original designers of Ethernet and know very much about it. It is an honor to get a reply from a professional like you. Actually, I too had thought that it was something related to propagation delays, because in the worst case a terminal at one end of the line could start sending a frame while a terminal at the opposite end has already started sending one, but the signal has not yet reached it. But then there could be repeaters, which extend the total length of the line and thus increase propagation delays...

Reply to
typingcat

Correct. The "problem" you allude to (i.e., the requirement that stations be able to detect collisions regardless of the relative time when they start their transmissions) is solved by ensuring that the minimum frame transmission time is always longer than the maximum round-trip propagation delay of the network (including repeaters); thus, the "collision event" always occurs while transmitting, regardless of when individual stations begin sending.
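To put rough numbers on that constraint: the 10 Mb/s minimum frame is 64 octets (512 bits), so the minimum transmission time is 51.2 usec. The little C sketch below checks a made-up worst-case round-trip figure against it; only the 64-octet minimum frame and the 100 ns bit time come from the standard, the rest is illustration.

/* The CSMA/CD constraint: minimum transmission time must exceed the
   worst-case round-trip propagation delay of the whole network. */
#include <stdio.h>

int main(void)
{
    const unsigned min_frame_bits = 64 * 8;  /* 512 bits: 10 Mb/s minimum frame */
    const unsigned bit_time_ns    = 100;     /* one bit time at 10 Mb/s         */
    const unsigned min_tx_ns      = min_frame_bits * bit_time_ns; /* 51200 ns   */

    unsigned round_trip_ns = 46000;          /* hypothetical worst-case RTT     */

    /* If the round trip fits inside the minimum transmission time, the
       sender is still transmitting when the collision comes back, so it
       is guaranteed to detect it. */
    printf("min tx time %u ns, round trip %u ns -> collisions %s\n",
           min_tx_ns, round_trip_ns,
           round_trip_ns < min_tx_ns ? "always detectable" : "can be missed");
    return 0;
}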

--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert
