Ethernet under high load

I was trying to understand the general behavior of a LAN whose MAC has collision detection (CD) capability, i.e. Ethernet. One of the papers I looked at is: Jia Wang and Srinivasan Keshav, "Efficient and accurate Ethernet simulation," in LCN '99: Proceedings of the 24th Annual IEEE Conference on Local Computer Networks, page 182, Washington, DC, USA, 1999. IEEE Computer Society.

It mentions that even under high load the throughput does not decay, i.e. it remains stable. Is that correct for all Ethernet? It seems counterintuitive: with more contenders, even if nodes can detect collisions, wouldn't it take longer for any one node to get through?

A naive simulation (in Matlab) showed that the throughput of the network does remain stable, but with a few nodes hogging the channel completely, i.e. packet starvation. I am not sure whether this is a correct understanding of Ethernet's performance. Can someone please comment?

Thank you.

Reply to
Affan

This is a shared Ethernet, where CSMA/CD is used to gain access. So I think the Matlab simulation is good.

The short version of what happens is that in a heavily loaded shared Ethernet, a few nodes will gain access and send frames. Other nodes will sense a carrier or collisions, and back off and try again. If a node has been unsuccessful at sending a frame, it backs off a little longer (on average) for each failed attempt, until the 10th attempt; it then keeps the long maximum wait of the 10th attempt for the remaining attempts. (Nodes try up to 16 times, then give up and discard that frame, and something at a higher OSI layer can worry about what to do next.)
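The truncated binary exponential backoff described above can be sketched as follows (a minimal illustration assuming the IEEE 802.3 limits of 10 window doublings and 16 total attempts; `backoff_slots` is a hypothetical helper, not a real driver API):

```python
import random

BACKOFF_LIMIT = 10   # the contention window stops doubling after the 10th attempt
ATTEMPT_LIMIT = 16   # the frame is discarded after the 16th failed attempt

def backoff_slots(attempt, rng=random):
    """Random wait, in slot times, before retry number `attempt` (1-based)."""
    if attempt > ATTEMPT_LIMIT:
        raise ValueError("frame discarded; a higher layer must recover")
    k = min(attempt, BACKOFF_LIMIT)
    # choose uniformly from 0 .. 2**k - 1 slot times
    return rng.randrange(2 ** k)
```

After the first collision a node waits 0 or 1 slot times; by the 10th collision it waits anywhere from 0 to 1023 slots, and that window stays fixed for the remaining attempts.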

But the thing is, the nodes that were successful the first time will use short retry periods on their next attempts. They think everything is okay. So because they retry so quickly, they will in effect get preferential treatment compared to the other nodes waiting patiently with their (on average) longer retry intervals.
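A toy calculation makes this concrete (a sketch under assumed window sizes, not a full CSMA/CD model): pit a node on its first retry, with a 2-slot window, against a node on its fifth retry, with a 32-slot window, and count how often the fresh node draws the strictly smaller backoff and so transmits first.

```python
import random

def fresh_wins(trials=100_000, seed=1):
    """Fraction of contests won outright by the recently-successful node."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        fresh = rng.randrange(2)         # 1st collision: slots 0..1
        starved = rng.randrange(32)      # 5th collision: slots 0..31
        if fresh < starved:              # strictly smaller slot transmits first
            wins += 1
    return wins / trials
```

The exact probability is (31 + 30) / 64, about 0.95, so the fresh node wins roughly 19 contests out of 20: the capture effect in miniature.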

So the total load on the Ethernet itself will be right up there, but many of the hosts connected to that Ethernet will see very poor service or even dropped frames.

Aside from the situation of some nodes having a lot of data to send, there is also the possibility of a large number of nodes on the network, each having some data to transmit. The effect is similar. The larger the number of nodes, the more likely it is that two or more will make attempts at exactly the same time, or at least will make attempts while another node is sending a frame. Either will result in either a carrier sense or a collision detect, and will cause a longer backoff wait (on average). The closer you get to 1024 nodes per Ethernet, the more likely it will be for nodes to have the same retry wait, and to experience collisions. At 1024 nodes, you will be guaranteed that collisions will always occur for some nodes.

Still, even in this case, there will be some nodes that get frames through with relatively short waits, so the Ethernet itself will experience a lot of throughput.

Clause 4.2.3 of IEEE 802.3-2005 explains the mechanism of half and full duplex frame transmission.

Bert

Reply to
Albert Manfredi

(snip)

The result, though, is that such nodes will get a burst of data out, but overall, assuming all hosts have the same amount to send, it is fair. Also, it is likely more efficient. Consider a traffic light that let only one car from each direction each green. (Not counting traffic meters.) It is more efficient to let a group of cars through on each green.

In the more usual case of a small number of hosts with a lot of data to send, it works pretty well.

(snip)

I have seen this before, and I don't believe it. First, even with 1024 nodes it would be unusual for all to have data to send at the same time. Even more, though, nothing special happens at 1024 nodes.

Consider that 1024 hosts do want to send at the same time. At maximum backoff each chooses a random slot out of 1024, so each has probability 1/1024 of choosing any given value. The probability that exactly one host chooses slot one is 1024*(1/1024)*(1023/1024)**1023 = (1023/1024)**1023, about 0.3680; that host wins outright. The probability that no host chooses slot one is (1023/1024)**1024, about 0.3677, in which case the contest moves to slot two: the probability that exactly one host chooses slot two and wins is then about 0.3680*0.3677, or 0.135, and so on. Summing the geometric series, the probability that exactly one host chooses the lowest value picked is about 0.3680*(1 + 0.3677 + 0.3677**2 + ...) = 0.3680/(1 - 0.3677), or about 0.582. That is (1023/1024)**1023/(1-(1023/1024)**1024).

For 1025 hosts, then, the probability that only one host has the winning backoff count is slightly worse, at (1023/1024)**1024/(1-(1023/1024)**1025), about 0.581.

With each additional host it gets a little worse, but there is no discontinuity at 1024 hosts trying to send at the same time.
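These figures can be checked numerically (a direct transcription of the geometric-series approximation in the post, which treats n/1024 as approximately 1; not an exact order-statistics computation):

```python
def unique_winner_prob(n, slots=1024):
    """Approximate probability that exactly one of n hosts holds the unique
    minimum among uniform draws over `slots` backoff slots, following the
    geometric-series argument from the post."""
    p_none = ((slots - 1) / slots) ** n        # no host picks a given slot
    p_one = ((slots - 1) / slots) ** (n - 1)   # exactly one host picks it
    return p_one / (1 - p_none)

# The probability degrades smoothly with n; nothing special at n = 1024.
for n in (1023, 1024, 1025, 1026):
    print(n, round(unique_winner_prob(n), 4))
```

Running this shows about 0.582 at 1024 hosts and 0.581 at 1025, matching the figures above, with no discontinuity anywhere near 1024.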

-- glen

Reply to
glen herrmannsfeldt

Affan wrote in part:

This is the fairly well-known "Ethernet capture effect" [qv].

I think it's helpful to understand ethernet in a historical context: it was developed as a simple competitor to complex, directive networks like Token-Ring. The basic idea was to provide _lots_ of bandwidth (more than any one station could saturate) so channel arbitration could be simplified.

This historical model has largely been violated as CPU speed increases have vastly outpaced network (and other peripheral) speeds. Fortunately, most ethernets have gone "switched" and no longer experience collisions (although various analogous unpleasant things can happen inside the switch electronics).

In particular, a fast station might have more data to send before the waiting stations' back-off timers expire. Software beating hardware. Adjusting the ethernet parameters would have various nasty side-effects (smaller allowable diameter), so hasn't been done.

-- Robert

Reply to
Robert Redelmeier

I'm fairly certain that TR came after Ethernet. Wikipedia seems to back me up but I don't consider this a fully authoritative source. :)

Reply to
DLR

DLR wrote in part:

Commercially, both were launched & competing in the early 1980s. Concepts date back further. ARCNET was also around.

-- Robert

Reply to
Robert Redelmeier

the main thing to remember about a simulation is that it is exactly that - even where it simulates the network characteristics accurately, the offered load tends to be more complex in real life.

a long time ago there were a lot of big sprawling Ethernets (mainly built by people starting with a clean design and then just too much growth).

up to 100s of devices in 1 big collision domain were common, but you usually needed bridges once you got to 500+ - just to cope with the physical limits of Ethernet (pre twisted pair - all that thin Ethernet co-ax....)

30 to 40% working load was also common - although application timeouts could hurt if the load got higher, printers would usually misbehave above a threshold and so on.

the application set often conspired to spread the load more evenly - especially as individual machines / servers couldn't keep up with the Ethernet.

Reply to
stephen

Thanks to everyone replying. I understand the Ethernet capture effect, but was more concerned with the validity of stable network throughput at high load, i.e. that the overall number of packets transferred on the network remains constant once saturation is reached. What I am understanding is that this is correct behavior. To my mind, this is entirely due to the CD capability of Ethernet, as collision-avoidance MACs do not provide any such stability.
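One way to see why saturation throughput stays stable is the classic textbook contention-slot analysis of CSMA/CD (a sketch only; the 20-slot frame length is an assumption for illustration): with n stations contending and each transmitting in a contention slot with probability 1/n, the chance that exactly one station acquires the channel is A = (1 - 1/n)**(n-1), which tends to 1/e rather than to zero as n grows, so the wasted contention time per frame stays bounded.

```python
def csma_cd_efficiency(n, frame_slots=20.0):
    """Channel efficiency under the contention-slot model: each frame
    (frame_slots slot times long) is preceded on average by (1 - A) / A
    wasted contention slots, where A is the probability that exactly one
    of n stations transmits in a given slot."""
    A = (1 - 1 / n) ** (n - 1)      # P(successful channel acquisition)
    wasted = (1 - A) / A            # mean contention slots per frame sent
    return frame_slots / (frame_slots + wasted)

# Efficiency flattens out instead of collapsing as contenders increase.
for n in (2, 10, 100, 1000):
    print(n, round(csma_cd_efficiency(n), 3))
```

The printed efficiencies settle near a constant as n grows, which is the "stable throughput under saturation" behavior the paper describes; without collision detection, each collision would waste a whole frame time instead of a short contention slot, and the same analysis degrades badly.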

Do you agree?

Thanks.

-A

Reply to
Affan
