Ethernet, IP, And Caching... A Bit Of History, Background, And Insights

I wrote the subject article a while back and am sharing it here for any members working with Telecom/IT issues for their organizations.

Here's a snippet:

"By design, we have a basic way of dealing with bandwidth hogs both on the local area network and in the wide area network. This goes all the way back to the earliest days of IP, back before the Internet was a common resource. The performance issues are still the same, and in most cases bandwidth is not the culprit thanks to that original work."

God Bless,
Michael Lemm
FreedomFire Communications

Reply to
FreedomFireCom

Not sure about the emphasis placed on priority queueing. Early Ethernet and IP used best effort queueing almost exclusively -- same priority for every frame or packet. Ethernet "priority" wasn't even available until quite a bit later, more like the 1990s, with IEEE 802.1p, eventually folded into 802.1Q.

If there was any mechanism to prevent hogging of bandwidth, my thinking is that it was simply the fact that no single machine was typically fast enough to grab the entire L2 link. As machines became faster, so did the L2 network. I really don't know what intrinsic mechanism existed at L2, especially, to prevent bandwidth hogging.

Matter of fact, before the time of L2 "switches," it was possible to hog the bandwidth, by violating the backoff requirements of the CSMA/CD protocol ("Ethernet capture," it was called). And there was another shared Ethernet protocol proposed, the name escapes me, to solve this Ethernet capture vulnerability. But of course, switched Ethernet appeared on the scene and all these worries were obsoleted.

All I'm saying is that I think we all "muddled through" a lot of this. I don't think it had all been figured out ahead of time with uncanny forethought.

Bert

Reply to
Albert Manfredi

Well, the original method was to use CSMA/CD, which allocates bandwidth fairly among all contending stations. (Yes, I know that "capture effect" can create short-term "bandwidth hogging," but even with capture effect, the *long-term* bandwidth is distributed evenly, since all stations are capable of "capturing" the shared channel at any given time.) Of course, the advent of switching and full-duplex eliminated the CSMA/CD congestion-control algorithm, and that was part of the original impetus for 802.3x flow control.

Ethernet capture is not a *violation* of the CSMA/CD algorithm; it is an artifact of the *proper implementation* of that algorithm when stations are capable of saturating the available channel capacity. The fact that capture was not "discovered" until later stems from the fact that, for a long time, conventional network devices were unable to saturate the capacity of an Ethernet. Of course, capture problems are reduced by higher Ethernet capacity, and eliminated by full-duplex operation.
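
To make the capture mechanism concrete, here's a toy sketch (my own simplification, not anything from the standard) of two always-backlogged stations running truncated binary exponential backoff. The station that wins a collision resets its attempt counter, so it tends to draw shorter backoffs on the next collision and win again; the output shows long runs of the same station holding the channel.

import random

MAX_BACKOFF_EXP = 10   # 802.3 stops growing the window after 10 collisions

def backoff_slots(attempts):
    # Truncated binary exponential backoff: uniform in [0, 2^k - 1].
    k = min(attempts, MAX_BACKOFF_EXP)
    return random.randint(0, 2 ** k - 1)

def contend(frames_a=20, frames_b=20, seed=1):
    random.seed(seed)
    attempts_a = attempts_b = 0
    order = []
    while frames_a and frames_b:
        # Both stations collide and pick backoffs; the smaller draw "wins"
        # the channel (ties crudely given to B -- this is only a toy).
        attempts_a += 1
        attempts_b += 1
        if backoff_slots(attempts_a) < backoff_slots(attempts_b):
            order.append("A"); frames_a -= 1; attempts_a = 0
        else:
            order.append("B"); frames_b -= 1; attempts_b = 0
    return "".join(order)

print(contend())   # runs of the same letter are the short-term "capture"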

BLAM: Binary Logarithmic Arbitration Method

I agree with your conclusion.

--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

Yes, I figured after posting that I had confused two effects.

One is the Ethernet capture, as you describe.

Another was that some NICs, IIRC, would not wait the required interframe spacing time after carrier sense (9.6 usec), so these NICs would get preferential treatment. As quickly as possible after the line got quiet, they'd butt in and then capture the Ethernet until they sent out all the frames in their buffer.
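
For reference, that 9.6 usec figure is just the 96-bit-time interframe gap expressed at 10 Mb/s:

IFG_BIT_TIMES = 96
BIT_RATE_BPS = 10_000_000             # 10 Mb/s Ethernet
print(IFG_BIT_TIMES / BIT_RATE_BPS)   # 9.6e-06 seconds, i.e. 9.6 usec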

These hogging effects, along with the absence of any L2 concept of priority with the original Ethernet, made me question the notion that hogging prevention had ever been assured from the start.

That's the one. Thanks.

Bert

Reply to
Albert Manfredi

There wasn't. IMHO, the OP was trolling for hits to his article advertising caching proxies (instead, just look up `squid`).

Unfortunately, this does nothing for bandwidth hogs (but does help tame stampeding herds of students, etc).

Solving bandwidth hogs (individual abusers) is a more difficult problem, perhaps addressable by delaying ACK packets and letting the sender's TCP throttling do the work. Of course this won't work for UDP, but most firewalls drop inbound UDP, and probably should drop it outbound as well.
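
The reason ACK delay works is that a TCP sender can never run faster than its window divided by the round-trip time. A back-of-the-envelope sketch, using an assumed 64 KB window and 5 ms base LAN RTT (placeholders, not measurements):

WINDOW_BYTES = 64 * 1024         # assumed receive window
BASE_RTT_MS = 5                  # assumed LAN round-trip time
for added_ack_delay_ms in (0, 20, 100):
    rtt_s = (BASE_RTT_MS + added_ack_delay_ms) / 1000
    ceiling_mbps = WINDOW_BYTES * 8 / rtt_s / 1e6
    print(f"+{added_ack_delay_ms:3d} ms ACK delay -> "
          f"throughput ceiling ~{ceiling_mbps:6.1f} Mb/s")

Stretching the hog's effective RTT by even a few tens of milliseconds cuts its ceiling dramatically, without dropping a single packet.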

-- Robert

Reply to
Robert Redelmeier

Rich Seifert wrote: (snip)

Well, what does happen with full duplex and without flow control?

I think we did this before, but it will depend on whether the switch does input queuing or output queuing. Whoever transmits more will have more data in the queue. I guess it is more fair, but not so obvious.

-- glen

Reply to
glen herrmannsfeldt

The purpose of a MAC algorithm (e.g., CSMA/CD) is to allocate shared channel capacity among all stations wishing to send frames at a given time. CSMA/CD is intended to be "fair," in the sense that all stations are considered equal with regard to their right to transmit (i.e., no priority or preferred access is provided). Capture effect is an artifact of CSMA/CD wherein a single station (or a set of stations) can effectively prevent other stations from transmitting for a period of time, in violation of the above-stated fairness doctrine.

With full-duplex, no station or set of stations can, by their actions, prevent any other station from transmitting, since each full-duplex station has its own, private Ethernet; there is no shared channel to allocate. Thus, capture effect is eliminated; the ability of a station to saturate its channel does not prevent other stations from sending frames in their queues.

Whether all of these frames get to their intended destinations will depend on a variety of factors, including switch and backbone network congestion (to which you allude). However, this was true even in the case of shared Ethernet with capture effect, although I grant there may be greater switch congestion in certain full-duplex configurations.

802.3x flow control was designed to prevent buffer overflow in input-queued switches; i.e., it protects the resources of the *switch*, as opposed to the "transmission rights" of the attached stations. As we learned through experience, output queueing makes more sense for switch design, and link-level flow control is inadequate as a general mechanism to prevent network congestion. Thus, we generally disable 802.3x flow control and allow end-to-end mechanisms (e.g., TCP flow control) to provide feedback to data-sourcing applications in the event of network and/or switch congestion.
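
For anyone who hasn't run into the input-queueing problem, here is a toy head-of-line-blocking sketch (my own illustration, not from any standard): with one FIFO per input port, a frame stuck behind a head frame bound for a busy output cannot move, even though its own output is idle.

from collections import deque

# Each input queue holds (frame_id, destination_output) in arrival order.
input_queues = {
    "in0": deque([("C", "out1")]),
    "in1": deque([("A", "out1"), ("B", "out2")]),
}

for tick in range(3):
    busy_outputs = set()
    for name, q in input_queues.items():
        if not q:
            continue
        frame, dest = q[0]                 # only the queue head is eligible
        if dest in busy_outputs:
            print(f"tick {tick}: {name} blocked; {frame} (and anything queued "
                  f"behind it) must wait, even if its own output is idle")
            continue
        busy_outputs.add(dest)
        q.popleft()
        print(f"tick {tick}: {name} forwards {frame} to {dest}")

Output queueing avoids this, because frames are sorted into the queue of their destination port as soon as they arrive, so a busy output only delays frames actually headed for it.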

If an application does not use flow control (e.g., UDP "transport" with no application flow control), then it gets what it deserves (or needs, if the application can survive periods of significant message loss).

--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

Rich Seifert wrote in part:

I can see the need for something like TCP flow control to discover and co-operate under mid-stream restrictions.

But couldn't local switch congestion be handled by a retasked JAM signal to invoke the vestigial backoff, even when the switched channel was clear but the destination port was known not to be? Or would this interfere with cascades?

-- Robert

Reply to
Robert Redelmeier

Rich Seifert wrote: (snip regarding fair access and full duplex ethernet)

As I was writing the previous reply, it seemed that input queued switches would be more fair, in that each port would have an equal amount of buffer space. For an output queued switch, whoever gets more data out gets more of the buffer space.

Well, NFS has been going toward TCP, but NFS/UDP is maybe still pretty popular. Other than NFS there aren't so many high data rate UDP based protocols.

-- glen

Reply to
glen herrmannsfeldt

I suppose that one could step back and view this as a more general network asset allocation problem. So if one views the L2 network as a cloud, the general problem is how to allocate L2 assets "fairly" among hosts plugged into this cloud.

If the cloud internally consists of just one comm channel, the L2 protocol used with Ethernet is CSMA/CD. The protocol could have been something more complicated, like a rotating token (e.g. 802.4).

If the L2 cloud consists of multiple channels instead of just one, the flows through these multiple channels are allowed to merge. So the problem of allocating L2 assets has not really gone away. For example, at edge routers or at servers, flows will merge. The problem that CSMA/CD was dispatched to solve may have become less critical, just because there are more channels overall, but it continues to exist.

I agree, but perhaps this is just an implementation detail. If switches or shared buses are considered to be nothing more than the inner workings of the L2 cloud, maybe the L2 problem to be solved can be seen as remaining the same; only the techniques that can be applied are different.

I'd say that in a full-duplex, switched Ethernet, on average the congestion problem becomes distributed among multiple channels. Because of this, an adequate mechanism to allocate L2 assets fairly may be to allocate buffer space in the switches, rather than mandating use of the flow control protocol. I see the difference between switched and shared Ethernet access control or flow control as more a matter of degree than something fundamentally different.

Bert

Reply to
Albert Manfredi

If there is no flow control in the UDP application, and nothing in the lower layers, then I think it is actually the case that the UDP application gets whatever it wants, not necessarily whatever it "deserves," since it will just blast as much as it can.

Or did I miss some context?

rick jones

Reply to
Rick Jones

I think Rich meant that the UDP application will witness dropped packets, which is "what it deserves" because it didn't care enough to incorporate some sort of flow control (at L2 or at L7).

UDP "gets whatever it wants" in the sense that, without flow control, it will force any TCP traffic to back off and give UDP the room. But ultimately, the TCP will not lose data, while UDP eventually will. Sometimes, that's okay too.

Bert

Reply to
Albert Manfredi

Very few applications *want* their messages to be discarded without notification. However, this is what the application *deserves* if it chooses to use UDP without any flow control within the application itself. Yes, the device gets to *send* as much as it wants, as fast as it wants, but that doesn't mean that the messages get through, or that the application "works" in the sense that it accomplishes its desired function.
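
To make "without notification" concrete, here is a minimal Python sketch (the address and port are placeholders from the documentation range; nothing needs to be listening for this to "succeed" at the sender):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(1000):
    # Returns as soon as the datagram leaves the local stack, whether or
    # not anything on the far end ever receives it.
    sock.sendto(f"message {i}".encode(), ("192.0.2.10", 9999))
print("every send 'succeeded'; the sender learns nothing about losses")
sock.close()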

That said, some applications are robust enough to survive occasional (or even severe) message discard without notification; those applications get exactly what they *need* by using UDP without flow control.

I hope that makes my comment clearer.

--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

Exactly; this reiterates what I said in another post in this thread (which I wrote before looking at Bert's always-on-the-point response).

--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

Absent some explicit flow control protocol (such as 802.3x PAUSE), there is no way to "Jam" a full-duplex channel. Contrary to common belief, there is no particular "Jam" signal, even in half-duplex Ethernet. "Jam" simply means continuing to send data for a period of time. Since full-duplex devices *expect* incoming data while they are transmitting, you can't use data to backpressure a full-duplex device.
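
Since 802.3x PAUSE is the one explicit mechanism available, here is a rough sketch of the frame layout as I understand it (the source MAC below is a made-up placeholder; verify the field values against the standard before relying on them):

import struct

dst = bytes.fromhex("0180c2000001")     # reserved multicast address for PAUSE
src = bytes.fromhex("020000000001")     # made-up locally administered MAC
frame = (dst + src
         + struct.pack("!H", 0x8808)    # MAC Control EtherType
         + struct.pack("!H", 0x0001)    # opcode: PAUSE
         + struct.pack("!H", 0xFFFF))   # pause time, in 512-bit-time quanta
frame += bytes(60 - len(frame))         # zero-pad to minimum size (sans FCS)
print(frame.hex())

As I recall, the pause time is counted in quanta of 512 bit times, and a PAUSE with a pause time of zero cancels an outstanding pause.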

--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

NFS/UDP with 8K UDP packets is fairly sensitive to random packet loss. It depends on fragment reassembly, which fails if even one fragment is lost.
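
Rough arithmetic on why that hurts (the per-fragment loss rates below are assumed purely for illustration): an 8K UDP datagram rides in roughly six Ethernet-sized IP fragments, and the whole datagram is lost, and must be retransmitted, if any one of them is.

FRAGMENTS = 6                    # ~8 KB datagram split into ~1500-byte frames
for p in (0.001, 0.01, 0.05):    # assumed per-fragment loss probabilities
    datagram_loss = 1 - (1 - p) ** FRAGMENTS
    print(f"per-fragment loss {p:.3f} -> whole-datagram loss {datagram_loss:.3f}")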

Switch buffer overflow would seem less random, though.

As far as I understand it, NFS/UDP uses a fixed timeout and retransmission, usually tuned for LANs. It doesn't work so well for WANs.

-- glen

Reply to
glen herrmannsfeldt

(snip)

I would have to look it up to be sure, but I don't believe this is true, not counting the previously discussed capture effect. As far as I know, after the IFG, any station ready to transmit does, even if others try to sneak in early.

Note that there is IFG loss specified for repeaters. If there was an advantage to shorter IFG, repeaters would have it.

After a collision and the appropriate backoff time each host ready to send in the appropriate slot does so, without additional carrier sense. This is necessary to avoid the advantage to one host having a slightly faster clock. I believe the same is true after a successful transmission for the same reason.

A host could cheat by using the wrong exponential backoff formula, for example. It might take a while before anyone found out.

-- glen

Reply to
glen herrmannsfeldt
