IFG at 1000 Mb/s

There is a note in section 4.4.2.4 of IEEE 802.3-2002 that says: "The spacing between two noncolliding packets, from the last bit of the FCS field of the first packet to the first bit of the preamble of the second packet, can have a minimum value of 64 BT (bit times), as measured at the GMII receive signals at the DTE. This InterFrameGap shrinkage may be caused by variable network delays, added preamble bits, and clock tolerances."

Does that mean that a receiving port must be able to cope with a minimum IFG of 8 bytes?

Reply to
adrian.uliana

Strictly speaking, yes. In practice, however, Gigabit Ethernet connections are always simple point-to-point links between an end station and a switch/router (or a pair of end stations), and operate in full-duplex mode. In this configuration, there should not be any IFG shrinkage. However, it is (at least theoretically) possible to build a Gigabit Ethernet repeater and operate a link in half-duplex mode, in which case significant IFG shrinkage could occur. A station in such a configuration must be able to deal with a shorter IFG than that generated by the original sending station.

--
Rich Seifert                  Networks and Communications Consulting
21885 Bear Creek Way          (408) 395-5700
Los Gatos, CA 95033           (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

Thanks Rich. I am evaluating some test equipment and there is some variation in the minimum IFG: some use 12, 11, or 8. I also need to know what I can support.

I also have another question on the same issue. Does that mean the maximum allowable number of bytes a device can strip from the IFG at any one time is also 4 bytes? (Assuming a worst-case scenario of 100% BW and an IFG of 12.)
Reply to
adrian.uliana

What do you mean by "strip bytes from the IFG"? Any transmitting station (end station or bridge/switch) must enforce the minimum 96-bit IFG before all transmitted frames. IFG shrinkage occurs through repeaters; I have never seen one of those for Gigabit Ethernet.

--
Rich Seifert                  Networks and Communications Consulting
21885 Bear Creek Way          (408) 395-5700
Los Gatos, CA 95033           (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

By strip, I meant shrink.

So, if you had a switch that was only transferring traffic between 2 ports at 100% BW, wouldn't almost every IFG be 12, but at some point the switch would have to shrink one to 11 to account for the clock differences between the 2 ports and prevent its internal buffers from overflowing?

(I have seen a 10GbE switch shrink the IFG from 12 to 5 bytes, not even at 100% BW, but I realise that it is a whole different mechanism.)
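(The different mechanism at 10GbE is most likely the "deficit idle count": the transmitter may shrink or stretch the IPG by up to 3 bytes to land each frame start on a 4-byte lane boundary, tracking a running deficit so the *average* IPG stays at 12 bytes. Here is a toy Python sketch of the idea, simplified from the standard's alignment rules rather than copied from them:)

```python
# Toy sketch of the "deficit idle count" (DIC) idea used at 10 Gb/s.
# Simplified for illustration: the transmitter deletes or inserts up to
# 3 idle bytes to align each frame start to a 4-byte lane boundary, and
# a running deficit keeps the *average* IPG at 12 bytes.

def next_ipg(frame_len: int, dic: int) -> tuple[int, int]:
    """Return (ipg_bytes, new_dic) for a frame of frame_len bytes."""
    d = frame_len % 4                 # misalignment this frame introduces
    if d == 0:
        return 12, dic                # already aligned: nominal IPG
    if dic + d <= 3:
        return 12 - d, dic + d        # spend credit: delete d idle bytes
    return 12 + (4 - d), dic - (4 - d)  # out of credit: pad to realign

dic = 0
for length in (64, 65, 65, 65, 65, 1518):
    ipg, dic = next_ipg(length, dic)
    print(f"frame {length:4d} B -> IPG {ipg:2d} B (deficit {dic})")
```

(DIC alone only takes the transmitted IPG down to 9 bytes; receive-side clock-rate compensation, which deletes idles, is presumably what accounts for the rest of the shrink down to the 5 bytes mentioned above.)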

btw, thanks for the input.

Reply to
adrian.uliana

Interestingly, in 1000BASE-T, the two devices at the ends of the twisted-pair link use the *same* clock. One is a master, one is a slave; the slave locks its frequency to the master. This is necessary because each device must know the absolute phase relationship between its own signals and those of the other device in order to cancel out its own (echoed) signals and the crosstalk among the pairs. In the most common configuration, the central device (switch or router) will be the master for all of its ports, and all of the devices connected to the switch will be in common frequency lock. As a result, there will not be any "bit-slippage" due to clock frequency differences in this configuration.

Unfortunately, this is a special case. It is possible to have one (or more) of the devices attached to the switch port be the master. In addition, there may be switch-to-switch connections, and both switches cannot be the master for all of their ports. Also, there is no frequency-lock between link partners in 1000BASE-X (fiber). Ultimately a switch will experience "bit-slippage".

Let's assume (as in your hypothetical) that there is sustained, 100% offered load between a pair of ports whose clocks are not exactly identical. (And by definition, two independent oscillators will never be at the *exact* same frequency.) The worst-case tolerance of the clock is +/- 100 ppm, or 0.01%. The switch will have some amount of frame buffering; gigabit switches tend to have lots of buffer (for performance reasons), but let's say the switch allocates 16KB (about 10 maximum frame lengths) per port.

Given two devices with worst-case clock tolerances (one is 100 ppm fast, the other 100 ppm slow), there will be a 0.02% difference. Assume that the fast device is the one sending, the slow device is receiving. (If it's the other way around, there is no problem.)

The bit-slippage over a single maximum-length frame will be ~2.4 bits. If the traffic is sustained, we will overflow the 16KB buffers after ~54,000 frames, or in ~650 ms (at 1000 Mb/s). At that point, unless it takes some other action, the switch will overflow the buffer and drop a frame. This will allow some of the buffer to empty out, and the switch can go back to forwarding that traffic load.
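For anyone who wants to check the arithmetic, here is a quick Python sketch of the numbers above (the 16KB per-port buffer is the assumption from the example, not a property of any particular switch):

```python
# Back-of-the-envelope check: +/-100 ppm clocks on each side (200 ppm
# relative offset), 1518-byte maximum frames, 16KB of per-port buffering.

clock_offset = 200e-6                      # fast TX vs. slow RX
frame_bits = 1518 * 8                      # max frame, FCS included
wire_bits = frame_bits + 64 + 96           # + preamble/SFD + minimum IFG

slip_per_frame = frame_bits * clock_offset            # ~2.4 bits
buffer_bits = 16 * 1024 * 8                            # 16KB buffer
frames_to_overflow = buffer_bits / slip_per_frame      # ~54,000 frames
seconds_to_overflow = frames_to_overflow * wire_bits / 1e9  # at 1000 Mb/s

print(f"slip per frame:     {slip_per_frame:.2f} bits")
print(f"frames to overflow: {frames_to_overflow:,.0f}")
print(f"time to overflow:   {seconds_to_overflow * 1e3:.0f} ms")
```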

Question: How can the switch tell (absent some special test equipment) that the buffer overflow was due to clock speed differences? Buffer overflow can occur for a lot of reasons; the most common is multiple traffic streams converging to a single output port. The switch can't measure the clock tolerance; if it could, that implies it has a handy timing reference that is a whole lot better than +/- 100 ppm!

Answer: It can't. It drops the frame, the same as for any buffer overflow condition. It does NOT "shrink the IFG," in part because it doesn't realize that this is the source of the problem, and in part because the standard doesn't allow it to--a sending station must enforce the 96-bit IFG; it cannot use a shorter time period. Switches are stations; they have a MAC entity for each of their ports. They must obey the IFG requirements.
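To make the enforcement point concrete, here is a minimal sketch of a transmit-side gate in logical time; the class and names are made up for illustration, not taken from any real MAC implementation:

```python
# Minimal sketch: a MAC transmit gate that refuses to start a new frame
# until 96 bit times have elapsed since the last bit of the previous
# one. Logical nanoseconds; at 1000 Mb/s, 1 bit time = 1 ns.

BIT_TIME_NS = 1.0
MIN_IFG_BITS = 96
PREAMBLE_BITS = 64                    # preamble + SFD

class MacTx:                          # hypothetical, for illustration
    def __init__(self) -> None:
        self._earliest_next_start_ns = 0.0

    def send(self, frame_len_bytes: int, now_ns: float) -> float:
        """Schedule a frame; return the time its preamble actually starts."""
        start = max(now_ns, self._earliest_next_start_ns)  # defer if in IFG
        end = start + (PREAMBLE_BITS + frame_len_bytes * 8) * BIT_TIME_NS
        # The next frame may start no sooner than 96 bit times after 'end'.
        self._earliest_next_start_ns = end + MIN_IFG_BITS * BIT_TIME_NS
        return start

mac = MacTx()
t1 = mac.send(1518, now_ns=0.0)       # starts at t=0
t2 = mac.send(64, now_ns=0.0)         # offered back-to-back...
print(t2 - t1)                        # ...but deferred: frame + 96-bit gap
```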

In practice, this problem does not arise. First, it is highly unlikely to see sustained, continuous traffic flow at full wire speed for a period long enough to create this effect. Moreover, the data transfer would have to be between a pair of devices with worst-case (or near-worst-case) clock offsets, and the sending station would have to be the "fast" one.

Even if this situation arose, the fact that the switch dropped an occasional frame (1 in 54,000, in the example above) would invoke end-to-end flow control. The loss of a frame will (ultimately) result in a lack of an acknowledgment for that data. TCP (or some other reliable transport) would reduce its offered load in response to the missing ACK; it presumes that data was discarded due to congestion (which is correct, sort of) and slows down to prevent further loss.

This is exactly what the switch needs; by slowing down, we are no longer in the "continuous stream data" configuration, and the buffers don't overflow. That is, dropping a frame is the "right" thing to do. This is a *control loop*, and dropping a frame provides the feedback to keep the loop stable. How many of us thought, when we studied control theory as an undergraduate, that this stuff was applicable to computer networks as well as radar and servomotors? It's the exact same math, the same problems, and (for the most part) the same classes of solutions. Stability, convergence, over/underdamping, oscillatory modes--all of these things arise in the context of communications protocols.
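As a toy illustration of that control loop (all constants here are arbitrary, chosen only to make the feedback visible):

```python
# Toy AIMD loop: a sender ramps its rate additively, a fixed-size queue
# drains at line rate, and an overflow (dropped frame) halves the rate.
# The rate saw-tooths around the line rate; the occasional drop is the
# feedback that keeps the loop stable.

line_rate = 100.0          # what the output port can drain per tick
q_max = 500.0              # buffer size, in the same arbitrary units
queue, rate = 0.0, 50.0    # current backlog and offered load

for tick in range(300):
    queue = max(0.0, queue + rate - line_rate)
    if queue > q_max:
        queue = q_max
        rate /= 2          # multiplicative decrease on loss
        print(f"tick {tick:3d}: drop, sender backs off to rate {rate:.0f}")
    else:
        rate += 1.0        # additive increase while no loss is seen
```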

--
Rich Seifert                  Networks and Communications Consulting
21885 Bear Creek Way          (408) 395-5700
Los Gatos, CA 95033           (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

Rich Seifert wrote: (snip)

In many cases it takes very little feedback to phase lock two oscillators.

The old story is of Huygens putting pendulum clocks on the same wall and having them lock.

The TTL 74S124 is a dual VCO; it is supposedly impossible to run the two at different frequencies, as they will lock anyway.

Laser gyroscopes, which send beams in opposite directions through fiber optics, will lock and miss counts due to a very small number of impurities in the fiber.

Women living in the same house will phase-lock their menstrual cycles.

There are many more examples that I didn't think of.

The question, then, is whether two oscillators really are independent.

-- glen

Reply to
glen herrmannsfeldt

If they're phase locked, they're not independent.

Reply to
James Knott

As far as the OP's hypothetical is concerned, the question is whether you can guarantee that the two oscillators are *not* independent. If they lock, so much the better; the problem doesn't arise. If they don't, you have to know what to do when the problem does arise.

--
Rich Seifert                  Networks and Communications Consulting
21885 Bear Creek Way          (408) 395-5700
Los Gatos, CA 95033           (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert
