gigabit switch that supports jumbo frames?

I'm looking for an 8-port gigabit switch (home LAN, under $100). I looked up the two cheapest ones (Netgear, TRENDnet) and found someone complaining about jumbo frames not working.

Is there a review, or some way to find a gigabit switch that is verified to work with jumbo frames?

Reply to
peter

Ah the joys of the IEEE refusing to produce a standard for a larger frame size for "ethernet" :)

Support for _which_ jumbo frame? Since there is no de jure standard, different vendors have different definitions as to what a "jumbo frame" is. Some hew to the line initiated by Alteon and define it as an MTU of 9000 bytes. Others only 8XXX. Some up to 16XXX. And some may claim support for jumbo frames with something as low as 2XXX. (Although that one is a rather vague recollection.)

IMO, a "jumbo frame" is at least a 9000-byte MTU. Anything less is wimping out.
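One practical way to verify that a switch (and every hop in between) really passes jumbo frames is a don't-fragment ping sized to the MTU you care about. A small sketch, assuming standard IPv4/ICMP header sizes; the function name is mine, not from any library:

```python
# Sketch: compute the ping payload that exercises a given MTU end to end.
# Since vendors disagree on what "jumbo frame" means, test the actual size
# you need. Header sizes are the standard IPv4/ICMP echo values.
IP_HEADER = 20    # bytes, IPv4 header without options
ICMP_HEADER = 8   # bytes, ICMP echo header

def ping_payload_for_mtu(mtu):
    """Largest ICMP echo payload that fits in one unfragmented frame."""
    return mtu - IP_HEADER - ICMP_HEADER

for mtu in (1500, 4000, 9000):
    print(f"MTU {mtu}: ping -M do -s {ping_payload_for_mtu(mtu)} <host>")
```

On Linux, `ping -M do` sets the don't-fragment bit, so `ping -M do -s 8972 host` fails if any device in the path cannot carry a 9000-byte MTU frame.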

rick jones

Reply to
Rick Jones

As a matter of interest why do you want to use jumbo frames?

Using jumbo frames can actually slow transmission down because they need more buffering.

Reply to
Marris

Hmm, I'm not sure what that refers to. I know that using jumbo frames can have applications interact with the Nagle algorithm when they didn't before, and I suppose it might affect how many segments fit per window. Is that what you meant?

Otherwise, using JumboFrames significantly reduces the CPU overhead of transferring data.
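The CPU-overhead point comes down to per-packet costs: with a fixed cost per packet, fewer and larger packets mean less total work. A rough illustrative model (the function name and the 40-byte TCP/IP header assumption are mine):

```python
# Rough model of per-packet work for a bulk transfer: a 9000-byte MTU
# needs roughly 1/6 the packets of a 1500-byte MTU, so per-packet CPU
# costs (interrupts, protocol processing) shrink accordingly.
FILE_BYTES = 1_000_000_000  # a 1 GB transfer, purely illustrative

def packets_needed(mtu, tcp_ip_headers=40):
    payload = mtu - tcp_ip_headers    # TCP payload carried per frame
    return -(-FILE_BYTES // payload)  # ceiling division

p1500 = packets_needed(1500)  # ~685k packets
p9000 = packets_needed(9000)  # ~112k packets
print(p1500, p9000, round(p1500 / p9000, 2))  # ratio ≈ 6.14
```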

rick jones

Reply to
Rick Jones

Your switch will store frames before forwarding them so the longer the frame the greater the transmission latency.

Admittedly at gigabit speeds the amount of time taken to store a 9K frame is very small. If your link is multi-hop this becomes more of an issue.

However supporting jumbo frames does significantly increase the amount of buffering a switch needs which has a cost.

Anyway the majority of frames transmitted on a network are not max length and your network adapters should be designed to cope with transmitting and receiving 64 byte minimum length frames continuously.
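The two quantities in this post are easy to put numbers on: the per-hop store-and-forward delay for a given frame size, and the worst-case minimum-size frame rate an adapter must sustain at gigabit speed. A sketch (function names are mine; the 8-byte preamble and 12-byte inter-frame gap are the standard Ethernet on-wire overheads):

```python
# Store-and-forward: a switch must receive a whole frame before forwarding
# it, so per-hop latency scales with frame size. Gigabit Ethernet figures.
LINK_BPS = 1_000_000_000

def store_forward_us(frame_bytes):
    return frame_bytes * 8 / LINK_BPS * 1e6  # microseconds per hop

print(store_forward_us(1500))  # ≈12 us per hop
print(store_forward_us(9000))  # ≈72 us per hop

# Worst-case minimum-size frame rate: 64-byte frame + 8-byte preamble
# + 12-byte inter-frame gap = 84 bytes on the wire per frame.
min_frame_rate = LINK_BPS / (84 * 8)
print(round(min_frame_rate))   # ≈1.49 million frames/sec
```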


Reply to
Marris

That depends whether you're talking about one individual frame, in which case the larger frame will take slightly longer to get through. Or whether you're talking about a file transfer, in which case the jumbo frames will help get the file through faster overall.

That's not obvious to me. In one case, you store fewer jumbo frames. In the other case, you store more small frames. If congestion occurs, it's not clear that you need more buffer space in the first case than in the second. The fundamental problem is that small frames mean many more of them.

Every time I see data on this, there are three frame sizes that prevail in real-world networks: the small ACK frames, around 40 bytes; the smallish 576-byte frames, which is the default MTU for IPv4; and the 1500-byte Ethernet MTU frames. The 576-byte variety are probably in decline, given the greater usage now of broadband IP access. They were common with dialup modems.

Mostly, I guess, but that's also when jumbo frames are used. If you're doing anything that requires low latency, as opposed to max throughput, your application will probably transmit small frames (either by using UDP with small packet sizes, or by using the PUSH bit in TCP). The network usage will be less than optimal in terms of capacity, but you'll get low latency.

If you're doing non-real-time streaming media, for example, I would expect that large frames would be a good thing. If you're doing interactive VoIP, e.g. telephone, obviously you'll use smaller frames.

It's very hard to achieve effective gigabit+ speeds with 1500-byte frames. It takes a very high packet rate.
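"Very high packet rate" can be made concrete: counting the standard on-wire overhead per frame (8-byte preamble plus 12-byte inter-frame gap), here is the frame rate needed to fill a gigabit link at each frame size. The function name is mine:

```python
# Frame rate needed to saturate a gigabit link, including per-frame
# on-wire overhead (8-byte preamble + 12-byte inter-frame gap).
LINK_BPS = 1_000_000_000
WIRE_OVERHEAD = 8 + 12  # bytes per frame on the wire

def frames_per_sec(frame_bytes):
    return LINK_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)

print(round(frames_per_sec(1518)))  # ≈81,274 fps at the 1500-byte MTU
print(round(frames_per_sec(9018)))  # ≈13,830 fps with 9000-byte jumbo frames
```

(1518 and 9018 are the full Ethernet frames: MTU plus 14-byte header and 4-byte FCS.)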

Bert

Reply to
Albert Manfredi

Only if one is trying to do compute clustering. Otherwise, even with a few hops, for frames of that size it is epsilon.

Which, for a switch which already has support for JumboFrames is already in the price one pays to buy it.

More to the point, the stack in the host needs to be able to deal with a high rate of minimum-sized frames. Unless it is a really crappy card, I'd expect the _card_ to be able to do it anyway.

Well, that's not completely true. DB accesses can often have sections of large frames - perhaps not to full jumbo frame size, but easily at 2KB or perhaps 4KB. The proper analysis to do to see if JF would have benefit would be to take a packet trace and see how often on any given connection one sees a full-sized frame followed by additional data without awaiting a response from the remote application. The analysis would be similar to that one would do to see if "large send" (aka TSO in Linux speak) would be of value.

Some really old data, based on the SPECweb96 workload, which is not entirely large file transfers (for some definition of large file :)

ftp://ftp.cup.hp.com/dist/networking/briefs/WebMTU.html

The description of the distribution of URL sizes can probably be found in the docs on formatting link, and keep in mind that this also includes all the connection setup and tear-down, as SPECweb96 was HTTP 1.0 without any connection persistence. Connection persistence in SPECweb99/SPECweb99_SSL and now SPECweb2005 would probably show even greater benefit (if the URL distribution remained the same, which isn't a given), but then for those benchmarks, someone :) made sure that the MSS had to be no larger than 1460 bytes. That then would devolve into a discussion of the benefit of large-send/TSO I guess :)

rick jones

Reply to
Rick Jones

I don't think you can assume that. Bigger is not necessarily better. For a crude analogy, doing your grocery shopping once a week is more efficient than doing it every day. However, nobody would argue that doing your grocery shopping once a year is more efficient than doing it once per week.

I think you are saying a 9K MTU is better than 1.5K. However, I am sure you would agree there has to be a limit on max frame size. For example, I think you would agree that a max frame size of 1 MB would cause problems. Ethernet has picked 1500 bytes, and that is what most equipment has been optimised to handle.

Regardless of frame size, the protocol stack has to move data about and calculate checksums. The only thing a large max frame size helps with is segmentation and reassembly. However, there is a genuine cost in buffering large frames. Also, the protocol stack should be designed to handle streaming frames of less than max frame size, and so should be able to cope with streaming 1500-byte max-length frames.

Smaller frames get through the switch faster which simplifies things.

Reply to
Marris

Please could you define 'epsilon'?

True, but I think the original poster was complaining that he was having difficulty finding a switch that reliably supports jumbo frames.

Yes, but doesn't it have to do this anyway?

Reply to
Marris

As long as we're using qualitative arguments, the 1500 byte payload limit was chosen at a time in which the bit rate of the medium was set to 10 Mb/s. Surely, had the bit rate been 100 Gb/s, where it is now headed, you'll agree that a 1500 byte limit would not have been chosen.

The grocery shopping analogy should be this. If you're shopping for a single family, it probably makes sense to use a standard sedan and a few shopping bags of groceries. But you would not limit your grocery-carrying capacity to the trunk of a family sedan if you were shopping for the groceries for a restaurant. It might be possible to do, but it would require a silly number of trips to the supermarket. And possibly, depending how big your restaurant is and how far the supermarket is, you could not achieve your goal even in the best of circumstances, even if you run a lot of red lights to make the trip faster.

Here are some numbers to consider.

If you can transmit 30,000 frames/sec and a 32-bit longword in 0.1 usec, then you will reach a maximum of 320 Mb/s with a frame size of 1500 bytes, at 26,667 frames/sec. Assuming zero interframe time, increasing the payload of the frame will not increase that overall capacity, because it just takes that time to get the bits across. In other words, with a 0.1 usec word transmission time, you are sending frames back-to-back, with absolutely no gap between them regardless of frame size. Makes no difference if you increase the frame size, if you can handle 30,000 frames/sec.

Now you decrease the longword transmission time to 0.01 usec but keep the frame rate max set to 30,000 frames/sec. What happens? The 1500 byte frame limit allows for a capacity of 360 Mb/s, which is a slight improvement. However, a 4500 byte frame size permits 1.08 Gb/s, and 9000 bytes permits 2.16 Gb/s. These are big improvements. The reason being, now the frame rate is limiting you, so bigger frames will provide more carrying capacity. Like using a truck instead of a car, to supply the restaurant.

Now increase the frame rate limit to 50,000 frames/sec. If the longword transmission time is 0.1 usec as before, your link is still limited to 320 Mb/s, no matter whether the frame size is 1500 or 9000 bytes. But if you can manage to reduce that longword transmission time to 0.01 usec, now you will see a net 600 Mb/s with a 1500 byte limit, which is a decent improvement, but you will see a net capacity of 3.2 Gb/s with a frame size of 9000 bytes. Big improvement.

Routers and switches do have packet switching limits. The higher the capacity, the higher those have to go, but there are still limits. So as you decrease the time required to transmit a longword, you will eventually come up to a hard ceiling caused by the packet switching rate limit. The only way to get beyond that is to increase the size of each frame.

When you're dealing with link capacities of 10, 40, and 100 Gb/s, it becomes close to impossible to achieve those rates with the 1500 byte limit.
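The scenarios above all reduce to one formula: throughput is the minimum of the link bit rate (32 bits per longword time) and the frame rate cap times the frame size. A sketch reproducing the numbers; the function name is mine:

```python
# Bert's model: throughput = min(link bit rate, max frame rate x frame size).
# A longword is 32 bits, so a longword time of t seconds means 32/t bits/s.
def throughput_bps(longword_usec, max_fps, frame_bytes):
    link_bps = 32 / (longword_usec * 1e-6)
    return min(link_bps, max_fps * frame_bytes * 8)

# 0.1 us longword, 30,000 fps cap: link-limited at 320 Mb/s for any size
print(throughput_bps(0.1, 30_000, 9000) / 1e6)   # ≈320 Mb/s
# 0.01 us longword, same cap: now frame-rate limited, so size matters
print(throughput_bps(0.01, 30_000, 1500) / 1e6)  # 360 Mb/s
print(throughput_bps(0.01, 30_000, 9000) / 1e9)  # 2.16 Gb/s
```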

Bert

Reply to
Albert Manfredi

"Has to" is overstating it. Several stacks have at least some form of zero-copy in them, at least for transmit (e.g. sendfile variants), and many NICs offer checksum offload. CKO is quite prevalent these days.

If the host CPU were indeed doing all the copies and/or checksum, the law of diminishing returns would indeed apply once the per-packet costs went to epsilon. With zero-copy/CKO in place it is the per-byte cost that goes to epsilon instead :)

IIRC, since Ethernet was first deployed, the size of disc sectors has increased, the size of virtual pages has increased, the size of cache lines has increased, etc. Officially sanctioned Ethernet frame sizes have remained the same.

Were 1500 byte frames getting through 10 Mbit/s switches too slowly?

rick jones

Reply to
Rick Jones

Small enough as to be insignificant.

_Have to_? Not really. At the risk of correction by "Those Who Really Know" :) the standards don't say that a card _has_ to be able to support the maximum rate of minimum sized frames.

There are examples from NICs past where they couldn't even handle back-to-back full-sized frames. One which was particularly popular among PCs could only deal (IIRC) with two back-to-back full-sized frames. I suspect that if the designers knew of the limitation, they probably handwaved it by saying "yeah, but PCs cannot transmit 1500-byte frames that fast anyway, so it doesn't matter." Of course, PCs and the systems feeding them got faster and it did become possible. At that point, that NIC began to very quickly fall out of favor.

There was even a Gigabit Ethernet NIC that had an aggregate netperf TCP_RR limit for minimum sized frames of ~40K transactions per second (80,000 PPS). It was still a "gigabit ethernet" card and, for its day, it was a decent card.

rick jones

Reply to
Rick Jones

Bert, I think this is where we disagree.

I think gigabit Ethernet equipment should be able to transfer greater than 1.5 million frames/sec so it can stream minimum-size frames. If the equipment can handle this, then whether the max frame size is 1.5K or 9K is irrelevant.

I guess you and Rick are saying that this is unrealistic in everyday end point LAN adaptors.

The IEEE is not going to change the 1500-byte max payload size. It has, though, recently increased the max frame size to 2000 bytes to allow extra encapsulation for things like encryption and tagging.

Arthur.

Reply to
Marris

With any luck that will be the camel's nose in the tent.

rick jones

Reply to
Rick Jones

It's possible that the IEEE 1500 byte payload size is going to become irrelevant at some point. An example from my own field (I work for a service provider): We require all new equipment to have support for jumbo frames (specifically: at least 9000 bytes). Similarly, Ethernet circuits that we lease (typically some form of Ethernet over SDH) must support jumbo frames. Whether IEEE says 1500 bytes or something else is not relevant - jumbo frames exist, and we will only use circuits and equipment that support it simply because we find it so useful.

Steinar Haug, Nethelp consulting, snipped-for-privacy@nethelp.no

Reply to
Steinar Haug

Yes, this is the disagreement. Frame rate limits will of course continue to rise, but bit transmission times will also continue to decrease, as clock rates go up or people use parallel links. So there will continue to be pressure to increase frame size to achieve the highest possible throughput, IMO.

Taking your example, if 1.5M frames per second is the max frame rate, and time to transmit a longword is 10 nsec, then you're right. At 1500 byte or 9000 byte frame lengths, assuming no gap, you'll achieve the same 3.2 Gb/s. But what happens when you decrease the longword transmission time to 1 nsec? Now the 1500 byte frame gives you a total of 18 Gb/s, but you need at least 2700 byte frames to achieve the maximum 32 Gb/s with that longword transmission time. And the need for larger frames will be more pronounced as the word time is decreased more. So people will want larger frame sizes to maximize that link's throughput.
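This example fits the same min(link rate, frame rate x frame size) model as before; a sketch checking the numbers (function name mine, longword = 32 bits):

```python
# Same model: throughput = min(link bit rate, max frame rate x frame size),
# here with a 1.5M frames/sec cap and the longword time given in nanoseconds.
def throughput_bps(longword_nsec, max_fps, frame_bytes):
    link_bps = 32 / (longword_nsec * 1e-9)
    return min(link_bps, max_fps * frame_bytes * 8)

print(throughput_bps(10, 1_500_000, 1500) / 1e9)  # ≈3.2 Gb/s (link-limited)
print(throughput_bps(1, 1_500_000, 1500) / 1e9)   # 18 Gb/s (frame-rate-limited)
# Smallest frame that saturates a 32 Gb/s link at 1.5M frames/sec:
print(32e9 / (1_500_000 * 8))                     # ≈2667 bytes, i.e. the ~2700 above
```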

When you mentioned segmentation and reassembly, it reminded me of ATM. ATM's original premise was that its constant-size 53-byte cells would be switched entirely in hardware, i.e. very fast cell rates would be possible, and that the lowest link speeds would be as low as 64 Kb/s. It's this low link speed of DS0 lines, and the desire to avoid echo cancelling in voice circuits in some countries, that drove down the size of the cell. Had the last drop to homes been something more like today's broadband links, 100s of Kb/s instead of 10s of Kb/s, even ATM with its high-resolution QoS offerings would have chosen a much larger cell size.

We're probably in a similar place now with Ethernet as we were with ATM. The smaller packet sizes work well when trying to provide QoS differentiation in the slower links. E.g. to do traffic shaping for broadband users. And they make life more difficult on the fast trunks.

Bert

Reply to
Albert Manfredi
