If you have 10 users connected to an AP, 9 of them are close and can get speeds of up to 54Mb/s, and 1 of them is right at the edge of range and can only connect at 1Mb/s, is it true that everyone will be limited to 1Mb/s?
i.e. Is it true that all users are dragged down to the same speed as the slowest connected user?
I've been told today that this is the case; however, I don't believe what I'm being told, but I've had a hard time finding something specific, in plain English, that answers this.
If you know the answer to this, I'd also love a link to a technical document if you have one. If I go back and say to the person "It's not true, according to an Internet forum post", I'm not really going to have a leg to stand on; however, if I say that -and- have some hard documentation, I will!
You'll need:
  IEEE-802.11-1999
  IEEE-802.11b-1999
  IEEE-802.11b/COR1-2001
  IEEE-802.11g-2003
Warning: reading these has been known to turn one's brain to mush.
The question is a bit muddled because two things are happening at the same time. Most packets run at individual connection speeds, but not in all modes or with all types of packets. The details:
Only one user can move traffic to/from an access point at a time. The system is essentially half-duplex: the AP and each client can transmit or receive, but only one station can use the air at a time.
802.11g access points usually have at least two modes. The AP can be in 802.11g mode, where the speed is 6-54Mbits/sec and the modulation is OFDM, or it can be in 802.11b compatibility mode, where the speeds are 1-11Mbits/sec using various modulation schemes. There are also non-standard modes used to obtain speeds greater than 54Mbits/sec. The problem is that when the AP is in one mode, it cannot decode transmissions sent in the other modes.
Depending upon the algorithms used, switching between 802.11g, 802.11b, and Turbo-Whatever modes can be quite sluggish. This is what I believe your sources were discussing. In the dim and disgusting past, when a single 802.11b packet was heard, other traffic throughput would slow down drastically because the AP was multiplexing between 802.11b mode and 802.11g mode. Typically about 25-50% of the airtime went to 802.11b. During this time, no 802.11g traffic could move.
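To see why that fixed time slicing hurt so much, here's a back-of-the-envelope sketch (my own illustration, not from any spec): if a fixed fraction of airtime is reserved for 802.11b compatibility, the 802.11g side only gets the remainder.

```python
# Back-of-the-envelope: if a fixed 25-50% slice of airtime goes to
# 802.11b compatibility, 802.11g traffic can only use what's left.

def g_capacity(g_rate_mbps, b_airtime_fraction):
    """Effective 802.11g capacity when a fixed airtime slice goes to .11b."""
    return g_rate_mbps * (1.0 - b_airtime_fraction)

for frac in (0.25, 0.50):
    print(f"{frac:.0%} to 802.11b -> {g_capacity(54, frac):.1f} Mbit/s left for 802.11g")
# 25% to 802.11b -> 40.5 Mbit/s left for 802.11g
# 50% to 802.11b -> 27.0 Mbit/s left for 802.11g
```

And that's before counting the actual (low-rate) 802.11b traffic filling the reserved slice.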
802.11b also takes much more airtime than the same amount of data sent using 802.11g, so it can really impact the speed of the whole system. There's a speed table, which uses this scheme, in the FAQ at:
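The airtime difference is easy to work out yourself. This is a deliberately simplified calculation (payload bits divided by PHY rate, ignoring preambles, ACKs, interframe spacing, and contention, all of which make the slow rates look even worse):

```python
# Rough airtime for one 1500-byte data frame at various PHY rates.
# Simplified: ignores preambles, ACKs, interframe gaps, and contention,
# so real numbers are worse -- but the ratio makes the point.

FRAME_BYTES = 1500

def airtime_ms(rate_mbps):
    """Milliseconds of airtime to send the frame payload at a given rate."""
    return (FRAME_BYTES * 8) / (rate_mbps * 1e6) * 1e3

for rate in (1, 11, 54):
    print(f"{rate:>2} Mbit/s: {airtime_ms(rate):6.2f} ms per frame")
# 1 Mbit/s: 12.00 ms,  11 Mbit/s: 1.09 ms,  54 Mbit/s: 0.22 ms
```

One frame at 1Mbit/sec occupies the air as long as roughly 54 frames at 54Mbits/sec, and nobody else can transmit during that time.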
More modern chipsets and algorithms have eliminated the fixed time slicing and replaced it with a more efficient approach. I'm too lazy to dig out the patents, but basically it prevents the 802.11b mode from hogging too much AP airtime. The same applies to the turbo speeds, which can also hog the air. To maintain high airtime efficiency, it's fairly common for capacity-limited systems (such as WISP systems) to just disable 802.11b compatibility and not support turbo modes.
The other part of the question is whether each 802.11g client can connect at its own favorite speed, or whether the access point gives everyone the same speed as the slowest connection (assuming 802.11b compatibility mode is disabled). It would be really dumb to run the system at the speed of the slowest client. Each connection speed is negotiated independently. The proof is somewhere in the IEEE 802.11 specs. If you can't find the reference, I'll dig for it when I have time.
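A toy model (my own sketch, with made-up function names, assuming an airtime-fairness scheme like the one described above and ignoring all protocol overhead) shows what independent rates mean in practice: each client gets an equal slice of airtime and transmits at its own negotiated rate during that slice, so the fast clients are not dragged down to the slow client's rate.

```python
# Toy model of airtime fairness: each client gets an equal share of
# airtime and transmits at its own negotiated PHY rate during its share.
# Effective throughput = own rate * airtime share. (Hypothetical sketch;
# PHY rate is not TCP throughput, and all overhead is ignored.)

def throughput_with_airtime_fairness(rates_mbps):
    """Per-client effective throughput (Mbit/s) with equal airtime shares."""
    share = 1.0 / len(rates_mbps)
    return [r * share for r in rates_mbps]

rates = [54] * 9 + [1]   # the question's scenario: nine fast, one slow
for r, t in zip(rates, throughput_with_airtime_fairness(rates)):
    print(f"{r:>2} Mbit/s client -> {t:.2f} Mbit/s effective")
```

The nine fast clients each end up around 5.4Mbits/sec and the slow client around 0.1Mbits/sec: everyone pays for sharing the air, but nobody is forced down to 1Mbit/sec.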
Also, there's the not-so-minor problem of communicating 802.11 management and control packets (such as beacon, probe, auth, assoc, flow control, etc.). In 802.11b, they are all transmitted at the slowest speed of 1Mbit/sec. I think it's the same with 802.11g at 6Mbits/sec, but I'm not sure and need to read the specs.