I noticed that (in a wireless adapter's specs) its B tx power is
3 dBm higher than its G tx power. Is this another clue that connecting under 802.11b will give more range?
Good question. 802.11b and g powers are measured at the maximum power for any frequency or mode. For 802.11b, that's at 1 or 2 Mbits/sec, which operates as pure FM (frequency modulation). However, the 802.11g modes, which run from 9 to 54 Mbits/sec, are all combinations of both AM (amplitude modulation) and FM. The AM component reduces the average power relative to the peak when measured according to ANSI C63.4:2003 (Measurement of Intentional Radiators). Therefore,
802.11b will read somewhat higher power levels than 802.11g. Also, note that the tx power will vary over about 1 dB from the lowest to the highest channel.
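The dB arithmetic is easy to sanity-check: a 3 dB spec difference is very nearly a 2:1 power ratio. A minimal sketch (the 18 dBm / 15 dBm figures are illustrative, not from any particular adapter's datasheet):

```python
import math

def dbm_to_mw(dbm):
    """Convert power in dBm to milliwatts."""
    return 10 ** (dbm / 10.0)

def db_ratio(p1_mw, p2_mw):
    """Power ratio expressed in dB."""
    return 10 * math.log10(p1_mw / p2_mw)

b_power = dbm_to_mw(18)   # illustrative B rating of 18 dBm
g_power = dbm_to_mw(15)   # illustrative G rating of 15 dBm

print(round(b_power / g_power, 2))          # 3 dB is almost exactly a 2:1 ratio
print(round(db_ratio(b_power, g_power), 1))
```

So the "extra" 3 dBm is about twice the measured average power, even though the peak envelope power of the two modes may be roughly the same.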
Sorry, you gotta be a paid member to download the procedures and specs.
OK. So... if I understand correctly, it's not so much that 802.11b is effectively more powerful than G, but that the different forms of modulation simply yield different power measurements?
A bit of the old "apples to oranges"?
In any case, is it true that operating in B can allow decent connections (albeit slow) at increased distances?
1) Given a required connection speed of say, 9-11Mbps?
2) Simply comparing lowest fallback in each; 1-2 Mbps in B to ...9 Mbps in G?
On 30 Jan 2007 09:02:59 -0800, "seaweedsteve" wrote in :
G falls back to B modulation at low speed, so there's no advantage to B over G. In addition, although 1 Mbps BPSK has a small advantage over 6 Mbps OFDM in theory, in practice I've always gotten better results from OFDM.
Yep, that's about it. AM modulation causes the average power to decrease somewhat.
Nope. Just specmanship. The FCC specs are for average power. If they were for peak power, the two power ratings would be roughly equal. There's also RMS power (heating power), which is there just to confuse everyone.
Yes. But think of what you're doing. It takes roughly 13 times longer to send a packet at 1Mbit/sec than at 11Mbits/sec. You're occupying 13 times the air time (transmission time) sending this one packet. If I assume a nearby microwave oven raising havoc, I suspect that there's a 100% chance that the 1Mbit/sec transmission will get clobbered and need to be repeated. It might never arrive. However, the 11Mbit/sec will get clobbered less often, and some of the data will arrive, even though the error rate is horrendous. This also applies to multipath and co-channel interference. Going slow is NOT a general cure for reliability issues.
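The airtime penalty can be estimated with a short calculation. This is a simplified sketch using the 802.11b long preamble (192 µs of preamble plus PLCP header, always sent at 1 Mbit/sec) and a 1500-byte payload, ignoring ACKs, interframe gaps, and retries, which is why it comes out nearer 10x than the rough 13x figure above:

```python
PREAMBLE_US = 192.0   # long preamble + PLCP header, always at 1 Mbit/sec

def airtime_us(payload_bytes, rate_mbps):
    """Rough on-air time for one 802.11b data frame, in microseconds."""
    return PREAMBLE_US + (payload_bytes * 8) / rate_mbps

slow = airtime_us(1500, 1.0)    # ~12.2 ms on the air
fast = airtime_us(1500, 11.0)   # ~1.3 ms on the air
print(round(slow / fast, 1))    # roughly an order of magnitude more airtime
```

Every extra millisecond on the air is another millisecond in which a microwave oven burst or a hidden transmitter can clobber the frame.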
I suggest you read:
which is a report on the MIT Roofnet mesh mess. Mesh is the worst case topology, where interference is epidemic. Note the relationship between speed and probability of delivering packets. It's quite illuminating.
I tend to lock the speed of my access points at the slower OFDM speeds. This has the side effect of rejecting all 802.11b connections on SOME (not all) access points. It's the most reliable compromise I've found between range and retransmissions. However, it does not work for all conditions, and I've had to go back to the more common "automatic" speed settings on two hot spots.
Sorry. I don't understand the question. In a perfect world, faster is better up to the limit of the fade margin. In a real world, interference seems to be the prime motivator, and that requires a much more complex model. For example, fragmentation threshold and flow control work nicely for reducing the effects of interference and hidden transmitters. In other words, there's no simple answer to determining the optimum speed setting.
On 31 Jan 2007 06:55:31 -0800, "seaweedsteve" wrote in :
Those folks don't know how to format a proper webpage, so you have to wonder about the quality of their advice. And you don't have to read far to see the advice isn't any better than the webpage formatting; e.g.,
B doesn't conserve battery life as compared to G. In fact newer G chipsets tend to have better power management than older B chipsets.
Speed is managed by the access point, not the wireless client.
OFDM (G) tends to be more robust than BPSK (B).
G doesn't have different channel requirements than B.
G doesn't cost more than B.
Although B does interfere with G, it doesn't "nullify" it.
Pentium processors don't operate in the 2.4 GHz band.
It's totally wrong. There isn't one single correct statement anywhere on that page. My favorite is: "G requires use of three different channels simultaneously, and the network implementation may have a constraint to not lock up three channels" Amazing. B and G both use approximately 22MHz or about 5 channels. The bandwidth requirements for both are intentionally identical. Only at speeds above 54Mbits do *SOME* systems require more bandwidth.
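The "about 5 channels" arithmetic checks out: 2.4 GHz channels are spaced 5 MHz apart, so a 22 MHz-wide signal spills across roughly two channel numbers on either side of its center, which is also why only channels 1, 6, and 11 are treated as non-overlapping in the US. A quick sketch:

```python
SPACING_MHZ = 5    # 2.4 GHz channel-to-channel spacing
WIDTH_MHZ = 22     # occupied bandwidth of a b or g signal

def channels_occupied(center_channel):
    """US channels (1-11) whose center frequency falls inside the signal."""
    half = WIDTH_MHZ / 2
    return [ch for ch in range(1, 12)
            if abs(ch - center_channel) * SPACING_MHZ < half]

print(channels_occupied(6))   # a transmitter on channel 6 covers 4 through 8
```

Identical for b and g; neither mode "locks up three channels."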
Yep. Battery life for B is much worse than G because:
For the same amount of data moved, the slower B is on the air longer than G. That applies to both transmit and receive. It takes longer to synchronize B (long preamble) and longer to receive the data. With B, all management packets are sent at the slowest rate,
1 Mbit/sec (for compatibility), which also requires more airtime. If one simply compared the current drain required to move an XX MByte file using B versus G, the higher-speed G would be far more efficient.
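Putting numbers on it: battery drain is current draw times airtime, so even if the G radio drew somewhat more transmit current, the shorter on-air time wins. A toy model with assumed current draws (not from any specific chipset datasheet):

```python
def transfer_energy_mah(megabytes, rate_mbps, tx_current_ma):
    """Battery charge (mAh) spent transmitting, ignoring protocol overhead."""
    seconds_on_air = (megabytes * 8e6) / (rate_mbps * 1e6)  # MB = 10**6 bytes
    return tx_current_ma * seconds_on_air / 3600.0

# Illustrative current draws -- assumptions for the sake of the comparison.
b_mah = transfer_energy_mah(100, 11, 300)   # 802.11b at 11 Mbit/sec
g_mah = transfer_energy_mah(100, 54, 350)   # 802.11g at 54 Mbit/sec
print(round(b_mah, 2), round(g_mah, 2))     # G uses a fraction of the charge
```

Even with a 17% higher assumed transmit current, G spends so much less time on the air that it comes out roughly 4x more efficient per file moved.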
Yep. The only exception is that in an ad-hoc network, the initiating client can set the speed. However, we're talking infrastructure here, not ad-hoc.
Yep. OFDM has a really big advantage over B in that it is much more immune to reflections and multipath. OFDM consists of 52 sub-carriers, all of which carry parts of the data. If one disappears, the others still work. Frequency-selective fading (as caused by multipath) is the major enemy of wireless. Each of the 52 carriers is on a slightly different frequency; where frequency-selective fading might ruin one or two carriers, the rest will get through. That's far more robust than 802.11b, where the loss of part of the frequency spectrum results in total loss of data.
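A deliberately crude toy model of that difference: a narrow frequency notch takes out a couple of OFDM sub-carriers but leaves the rest delivering data, while the same notch on a single wideband carrier costs the whole frame (real OFDM also adds coding across sub-carriers, which this sketch ignores):

```python
SUBCARRIERS = 52   # data + pilot sub-carriers in an 802.11g OFDM symbol

def surviving_fraction(faded_carriers):
    """Fraction of OFDM sub-carriers that still get through a selective fade."""
    return (SUBCARRIERS - faded_carriers) / SUBCARRIERS

print(round(surviving_fraction(2), 3))       # a 2-carrier notch: ~96% survives
print(surviving_fraction(SUBCARRIERS))       # wipe out everything: 0.0, like
                                             # a notch on a single-carrier signal
```

The point is qualitative, not quantitative: spreading the data across many narrow carriers turns a total loss into a partial, correctable one.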
Also, don't forget my previous comments about interference. Big slow packets as found in B are a much bigger target for interference than the smaller G packets (although fragmentation can help).
Actually, G costs less because the chipsets tend to be more tightly integrated than B.
Yep. However, the author might be referring to the 802.11b compatibility mode found on all access points. The presence of an
802.11b connection does slow things down considerably. However, better algorithms have largely reduced the damage to the point where it is easily tolerated.
That might be in reference to 2.4GHz being a common CPU clock speed. The RFI from the processor is fairly well scattered all over the frequency spectrum. At one time, I was seriously worried that there might be some interference from the processor. So far, I haven't seen any. However, the digital circuitry and clock junk from the processor on an access point or client radio creates far more interference than the main CPU. Although lower power, the clock crud is physically far closer to the receiver than the CPU is. Think inverse square law. Also, a good rule is that wires radiate, while components generally do not.
There's lots more wrong with that web page, but I don't want to burn any more time correcting them.
It's fairly easy to tell if someone is clueless about wireless. They tend to leave off the necessary qualifiers. For example, if someone said "802.11b has more range than 802.11g", that would be baloney because they didn't specify at what speed(s) and at what BER (bit error rate). BER is the reference level used to measure receiver sensitivity. More specifically:
Wrong: 1. 802.11b has more range than 802.11g. 2. 802.11g goes faster than 802.11b. 3. 802.11g is "better" than 802.11b.
Right: 1. For equal tx power levels, speeds, and bit error rates, 802.11g goes farther than 802.11b, mostly because the receiver sensitivity for a given speed is better.
2. For equal tx power levels, speeds, and test conditions, 802.11g is "better" than 802.11b because it is more immune to reflections and interference.
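The "goes farther" claim drops straight out of a link budget. A minimal free-space sketch, assuming equal 15 dBm tx power and illustrative receiver sensitivities (real datasheet numbers vary by several dB):

```python
import math

FREQ_MHZ = 2437.0   # channel 6

def free_space_range_m(tx_dbm, sensitivity_dbm):
    """Distance at which free-space path loss eats the whole link budget."""
    path_loss_db = tx_dbm - sensitivity_dbm
    # FSPL (dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
    return 10 ** ((path_loss_db - 20 * math.log10(FREQ_MHZ) + 27.55) / 20)

# Illustrative sensitivities -- assumed, not from any datasheet.
b_range = free_space_range_m(15, -82)   # 802.11b @ 11 Mbit/sec
g_range = free_space_range_m(15, -85)   # 802.11g @ 9 Mbit/sec, better rx sens.
print(round(b_range), round(g_range))   # the 3 dB sensitivity edge buys range
```

Every 6 dB of extra link budget doubles the free-space range, so a few dB of receiver sensitivity matters more than a fraction of a dB of "specmanship" on the tx power rating.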
Incidentally, there seems to be a rather odd misuse of the "b" letter suffix. 802.11 (no suffix) is strictly 1 and 2 Mbit/sec. 802.11b added 5.5 and 11 Mbits/sec. 802.11g added 6, 9, 12 ... 54 Mbits/sec. When someone is mumbling about the slower 1 and 2 Mbit/sec connections, it really should be 802.11, not 802.11b.
 Older Breezecom FHSS radios would also do 3Mbit/sec.