Choosing a Channel?

I have a simple 802.11g home network, but every home surrounding mine has one too. Choosing an "open" 2.4 GHz channel for me is an exercise in logic versus bandwidth. Every reference, including the manufacturer of my 2.4 GHz cordless phones, says choosing channel 1, 6 or 11 is optimum. Of course those 3 are heavily used, with most units staying on their default channel of 6 or 9. So I suspect one of the in-between channels may be superior, since it would give me a unique center frequency whose only significant overlap is with the weakest neighbor transmitters.

However, I'm wondering how much concern I should have for users who are 1 vs. 2 channels removed. Most users near me use channel 6. Logic tells me to avoid channels 5 and 7. Channel 2 removes me the farthest from the melee on channel 6, but it also places me closer to a single user on channel 1. In striking a balance, am I better off distancing myself from the busy channel even if that puts me next to a single user, or splitting the difference between occupied center frequencies? IOW, is it better to share 3/4 of your channel with a single signal than 1/2 of your channel with a bunch?

Am I focusing on the wrong criteria? The weakest signal received from my own network is 65% in the kitchen. Should I ignore every signal below that level and concern myself only with the one or two signals that may actually get that strong in my house?

-Bill Radio

Reply to
Bill Radio

In article , snipped-for-privacy@MountainWirelessNOSPAN.com (known to some as Bill Radio) scribed...

You can avoid the whole mess, and get the same potential speed, by using 802.11a (5GHz) equipment. It's fairly easy to find multi-standard cards, and access points for the 11a standard seem to turn up on Greed-bay pretty regularly.

Happy hunting.

Reply to
Dr. Anton T. Squeegee

The 1-14 b/g channels can be confusing as they overlap - 1, 6 and 11 (or, in Japan only, 14) are the non-overlapping channels. Check what your neighbors are using.

Or, as Dr. Anton says.

Reply to
Neuromancer

Researchers at Cisco found that, because of the way data is transmitted in 802.11b/g, it is actually better to use one of the non-overlapping channels (1, 6 or 11), even if it's in use by another network. See .

I suggest trying the non-overlapping channel (1, 6, or 11) with the weakest signal from other networks before you try any of the other channels.

Reply to
Neill Massello

Move to a new location. Switch to 802.11a (5.8GHz).

Ummm.... It's not that easy.

Correct. Those are the non-overlapping channels. Channel numbers are spaced 5MHz apart. However, the 802.11b/g signal is 22MHz wide. If you grind the numbers, that leaves you 1, 6, and 11 as the only non-overlapping channels. Incidentally, I've noticed that most 2.4GHz cordless phones seem to prefer the lower end of the band (i.e. Channel 1) and move up the band depending on interference. I can see them clearly with my spectrum analyzer, all cluttering the lower end of the band. Not all cordless phones work this way, just the ones I can see.
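To make that grinding concrete, here's a quick Python sketch of the overlap arithmetic (my own illustration; the 5MHz spacing and 22MHz signal width are the figures above):

    # 2.4GHz channel centers are 5MHz apart, starting at 2412MHz (Ch 1).
    # An 802.11b/g signal occupies roughly 22MHz around its center.
    SIGNAL_WIDTH_MHZ = 22

    def center_mhz(ch):
        return 2407 + 5 * ch  # Ch 1 = 2412, Ch 6 = 2437, Ch 11 = 2462

    def overlaps(a, b):
        # Two signals overlap unless their centers are >= 22MHz apart.
        return abs(center_mhz(a) - center_mhz(b)) < SIGNAL_WIDTH_MHZ

    for ch in range(1, 12):
        print("Ch %2d overlaps Ch 1: %s" % (ch, overlaps(ch, 1)))
    # Channels must be 5 numbers (25MHz) apart to clear a 22MHz signal,
    # which is how you end up with only 1, 6, and 11.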

Wrong(tm). The problem has to do with how your wireless receiver perceives these other channels. If you land on a channel with an existing user, your receiver will decode their data as valid data and your xmitter will wait the requisite time before attempting to xmit. The idea is to reduce collisions (or partial collisions). Things slow down, but do not stop.

However, if you have someone adjacent to your channel, their off-frequency transmissions will be decoded as noise rather than valid data. Your xmitter will not wait and will simply transmit on top of them. The net result is a continuous series of collisions where nobody moves data.

In addition, when you wedge yourself between two heavily used channels, you get the interference from BOTH of these channels, instead of just the one.
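Here's a toy slot-by-slot simulation of that difference in Python. It's a deliberately crude model with made-up numbers (two busy networks each wanting 80% of the airtime), not a real 802.11 simulator, but it shows the shape of the effect:

    import random

    SLOTS = 100_000
    P_TX = 0.8  # both networks are busy

    ours_cochannel = ours_adjacent = 0
    for _ in range(SLOTS):
        we_want = random.random() < P_TX
        they_want = random.random() < P_TX
        # Co-channel: carrier sense works, so contended slots are not
        # destroyed; backoff gives each side roughly half of them.
        if we_want and (not they_want or random.random() < 0.5):
            ours_cochannel += 1
        # Adjacent channel: their signal is only noise to us, so we
        # transmit on top of them and every contended slot is lost.
        if we_want and not they_want:
            ours_adjacent += 1

    print("co-channel:  %2.0f%% of slots carry our data"
          % (100.0 * ours_cochannel / SLOTS))
    print("adjacent ch: %2.0f%% of slots carry our data"
          % (100.0 * ours_adjacent / SLOTS))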

It's better to stay on 1, 6, and 11 than the others.

See my previous explanation. Stay on 1, 6, and 11.

I once troubleshot a system in a high-rise office building. Netstumbler showed some huge number of wireless access points, mostly on channel 6, but others scattered on other channels. Someone had installed their access points on channels 3 and 8 using your logic. It worked fairly well late at night, but was effectively useless during the day. I moved the access points to channels 1 and 11, and things started to work much better. However, I wasn't satisfied with the result, so I spent the rest of the day installing panel antennas and relocating the access points away from windows with a view of the city. I had some problems with users that had a window office, but the users in the core of the building did quite well. My guess(tm) is that the antennas and repositioning had the biggest effect, but the channel change was also somewhat of a help.

That reminds me.... I should get back to doing my (late) billing.

Reply to
Jeff Liebermann

On Mon, 4 Dec 2006 16:45:24 -0700, snipped-for-privacy@newsguy.com (Neill Massello) wrote in :

9???

There's no way to know without actual extensive testing.

There's no way to know without actual extensive testing. Radio issues often seem illogical simply because they are so complex.

You should just test, particularly to find out if your 2.4 GHz phone is enough of a problem to warrant replacement (e.g., with 900 MHz).

"All generalizations are false," and that's not exactly what that article says -- it's mostly responding to suggestions to use 4 channels (e.g., 1, 4, 8, 11) instead of 3 channels (1, 6, 11) in a single multi-access point network. That study doesn't necessarily extend to separate interfering networks.

Sure, but I have found some cases where one of the other channels (e.g., 4) does work better -- so it's worth trying 3, 4, 8, and 9 if you can't get good operation on 1, 6, or 11. (Note that even 1 and 6, and 6 and 11, overlap to some degree.)
Reply to
John Navas

On Tue, 05 Dec 2006 00:32:48 GMT, Jeff Liebermann wrote in :

Actually, minimally overlapping channels -- there's no sharp cutoff at 22 MHz -- significant energy goes beyond those boundaries. Worse, many wireless products now use various forms of multiple-channel transmission that pollute much more than a normal channel.

I've seen that behavior with Panasonic Gigarange phones. Others were all over the place.

With all due respect, I think that's a bit simplistic, exaggerated, and misleading.

True, Wi-Fi devices are designed to avoid each other, but there's no free lunch, and the total can be much less than the sum of the parts, sometimes much less. A common problem is where remote units on network A cannot clearly hear remote units on network B, and vice versa, so they merrily belch away at the same time, wreaking havoc for other units on both networks that can hear both of them.

On the other hand, just as Wi-Fi is designed to share a channel, it's also designed to deal with interference, principally by falling back in speed. In at least some cases this will work better on overlapping channels than having both networks on the same channel, particularly where the interference is less severe and/or when units are throttled back to lower speed. I'll often throttle "g" networks to as low as 11 Mbps when I know the client has no need for higher speed (a tip I picked up, as I recall, from you:).

I'd say try 1, 6, and 11 first, but if results aren't satisfactory, also try 3, 4, 8, and 9.

My speculation(sm) is that the antenna change and repositioning would have been enough without a channel change.

Reply to
John Navas

John Navas hath wroth:

See the Cisco article:

at Fig 7, which shows the spectral mask for 802.11g. Note that the signal level is -20dB down at about 10MHz (2 channels) away. If I assume that the garbage is a constant -20dB down at any channel that's more than 10MHz away, I can assume 20dB of isolation.

So, how much path isolation is needed before that 20dB of rejection stops being enough? Let's make it really easy (because I'm lazy) and say the antennas have +2dB gain and are perfectly aligned with each other, the xmitter belches +15dBm, and the receiver has a sensitivity of about -85dBm. The required path isolation for the interfering signal to be equal to a minimal receive signal is:

  15dBm + 2dB + 2dB - (-85dBm) = 104dB isolation

The garbage is -20dB below the signal, so we only need:

  104 - 20 = 84dB of path isolation

Plugging into:

I find that for 84dB of free space loss, the antennas are 0.1 mile or 520ft apart.

So, if your neighbor's and your access points are staring at each other (i.e. line of sight), and you're using the stock rubber ducky antennas, and the neighbor is on Ch 3 while you're on Ch 6, you can be 528ft apart before there will be any significant interference. Actually, it might be considerably closer because the receive reference level did not include any fade margin.

Just for fun, the numbers for Ch 1 to Ch 6 show about -30dB isolation. Using the same assumptions as before, the required path isolation is 74dB, which works out to 0.03 miles or 158ft. Again, if you are further than 158ft from the source, there won't be much interference.
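For anyone who wants to grind the same numbers with different assumptions, here's the arithmetic as a small Python script (free-space loss only and no fade margin, exactly as above; it lands within rounding of the 528ft and 158ft figures):

    import math

    TX_DBM = 15        # transmitter power
    ANT_DB = 2         # each antenna, perfectly aligned
    RX_SENS_DBM = -85  # receiver sensitivity

    def interference_range_ft(rejection_db, freq_mhz=2437):
        # Path loss needed to drop the neighbor's splatter down to the
        # receiver's sensitivity floor.
        isolation_db = TX_DBM + 2 * ANT_DB - RX_SENS_DBM  # = 104dB
        path_loss_db = isolation_db - rejection_db
        # Invert free-space path loss:
        #   FSPL(dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55
        d_m = 10 ** ((path_loss_db + 147.55
                      - 20 * math.log10(freq_mhz * 1e6)) / 20)
        return d_m * 3.281

    print("Ch 3 vs Ch 6 (20dB rejection): %.0f ft" % interference_range_ft(20))
    print("Ch 1 vs Ch 6 (30dB rejection): %.0f ft" % interference_range_ft(30))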

I took my Wi-Spy spectrum analyzer to Office Max and tested a few phones (before they threw me out). You're right. They do vary. A few even hogged the entire 2.4GHz band. Unless there's a predominance of Panasonic phones in the neighborhood, everything I can see from my house seems to favor the bottom of the band.

Sure, try other channels.

Possibly. However, I could have left the customer with a working system simply by changing channels. Frankly, I was rather surprised at how well it mostly fixed the problem. However, my ping tests showed that there was still some interference, so I decided to pump up my exorbitant charges and try to eliminate the interference completely. Besides, I didn't want to return the antennas and pigtails I had bought. So, the new antennas and new location finished the job. I probably could have put the system back on the original channels, but didn't see any reason to bother.

Reply to
Jeff Liebermann

I appreciate the detailed answers! I decided to drive around the neighborhood to see what was being used, and found the default channels of 6 and 9 are used even more than I thought. With so many on those 2 channels, both channels 6 and 11 would be subject to interference. If I agree with the Cisco figures, it leaves me just one choice: Channel 1. Unfortunately, one of the strongest signals received at the client is on Channel 1, although not that strong.

Using the same channel as a neighbor, and avoiding channels that are separated by as much as 10 MHz, is counter-intuitive to those of us in the RF world. Most of the wireless AP's are from the local DSL supplier and are probably running speeds as slow as 1.5 Mb and maybe slower, which would reduce the likelihood of full bandwidth use of the "g" spectrum. However, the tests showed the lower-bandwidth "b" systems also benefitted from the greater channel separation.

Every report I can find on the 'net claims the channel used is normally not of great importance. As Jeff points out, if a system has a problem, changing channels could do a lot to rectify the situation. But if I am getting the same throughput on the wireless network as I am on the ethernet-connected computer, I don't have a problem.

I am a bit surprised at how few homeowners consider mounting their AP in the basement, which would isolate it almost entirely from the neighbors. The client would see the same interference, but the AP would see little, if any. Also, on my walkabout I found some houses with a much greater problem with neighboring systems than mine. But some of us will always try to optimize our system...it's what we do.

-.-. --.- -.-. .... .----

Bill, NAqNA

Reply to
Bill Radio

"Bill Radio" hath wroth:

Well, if you're using the Windoze "show available networks" or Netstumbler, you're only seeing those access points set to broadcast their SSID. You'll see more if you use a passive sniffer, such as Kismet under Linux. No need to reformat your hard disk to use Kismet. Boot a LiveCD with Linux and you have all the tools available.

First, make sure your wireless card will work:

Are you sure about channel 9? It's my understanding that 9 is NOT a default channel. Ch 6 is the most common.

Incidentally, I did a very crude site survey of a local small town this morning. Looking at the results from WiFiFoFum (active probe similar to Netstumbler) on my cell phone, I saw 18 access points:

  Ch  Number
   1       1
   6      14
  10       1
  11       2

However, when I fired up Kismet (passive sniffer) on my laptop, and let it run while we were at lunch, I found 25 access points:

  Ch  Number
   1       2
   3       1
   6      16
   7       1
  10       1
  11       4

This is fairly typical of what I see in predominantly residential areas, where access points tend to be installed with the defaults largely intact. The fact that the overwhelming majority of access points tend to be on Ch 6 and appear to coexist with each other seems to indicate that either:

  1. a large number of access points can peacefully coexist on the same channel.
  2. or most users can't tell when they're getting interference and have simply gotten used to the crappy and unreliable performance.
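Tallying a scan like that takes a few lines of Python if your sniffer can export its results (the sample data here is made up for illustration):

    from collections import Counter

    # Hypothetical (ssid, channel) pairs as a scanner might export them.
    scan = [("linksys", 6), ("2WIRE123", 6), ("default", 6),
            ("NETGEAR", 11), ("home", 1), ("belkin54g", 3)]

    for ch, count in sorted(Counter(ch for _, ch in scan).items()):
        print("Ch %2d: %d access point(s)" % (ch, count))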

Also, I've seen some of the SSID's change channel over time. This is a feature of some access points that search for an unoccupied channel. It seems like a good idea, but I've seen nothing but problems when it is used.

Also, you might not be seeing interference from the new MIMO systems that monopolize more than 22MHz of bandwidth. They usually show up on Ch 6 but are much wider than the typical 802.11b/g signals. There are multiple types of MIMO. Some are "good neighbors". Others are no better than jammers. The only way to identify these (at this time) is with a spectrum analyzer.

It will take a considerable number of weak signals to equal the effect of one strong signal. What weak signals do is just raise the overall baseline noise level. It's more difficult to work reliably at long range and with weak signals, but these weak signals do not materially affect the comparatively strong signals used by your local WLAN. However, a strong signal on the same channel will require sharing the available airtime with the neighbor and will slow you down. As I previously mentioned, and as is confirmed in the Cisco article, the collision avoidance mechanism is more effective with a co-channel interfering user than with an off-channel noise source. However, note that the Cisco article implies that the comparison is based on equal signal strengths between the two systems.

Not really. Think of the off-channel neighbor as breaking the CSMA/CA collision avoidance mechanism of 802.11; the mechanism works with an on-channel jammer.

Sorta. The problem is that 802.11b and 802.11g are really quite incompatible. The only reason they coexist is that 802.11g optionally includes an "802.11b compatible" feature which really means it time slices and listens for 802.11b clients. When it hears one, it switches temporarily to 802.11b mode. That's why 802.11g benchmark speeds are much higher when 802.11b compatibility mode is turned off.

The difference also shows up in how they share airtime. It's more efficient to run at a much higher speed than the DSL backhaul. For example, the typical 1.5Mbit/sec DSL line would theoretically not benefit from any wireless speeds faster than perhaps 5.5Mbits/sec (because the transfer speed is about half the connection speed). However, the airtime used to move the same 1.5Mbits/sec of data at 5.5Mbits/sec is much more than the same system running at 54Mbits/sec. This leaves more airtime for other users. This is why 802.11g tends to constantly try to run at the fastest possible wireless speed.
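The airtime arithmetic is simple enough to eyeball in Python (protocol overhead ignored, which only makes the slow rates look worse):

    DSL_MBPS = 1.5  # the backhaul the WLAN has to carry

    for rate_mbps in (5.5, 11.0, 24.0, 54.0):
        busy_pct = 100.0 * DSL_MBPS / rate_mbps
        print("%4.1f Mbit/s over the air: channel busy %4.1f%% of the time"
              % (rate_mbps, busy_pct))
    # 5.5Mbit/s keeps the channel busy ~27% of the time; 54Mbit/s
    # moves the same data in under 3% of the airtime.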

It really isn't too important. For example, let's pretend you have a neighboring system that trashes every other packet for a 50% reduction in thruput. That's fairly bad, but you won't notice it if your wireless is running at perhaps 24Mbits/sec while your DSL is at 1.5Mbit/sec. It will only show up in connection reliability and local (LAN to WLAN) benchmarks.

Sorta. If the wireless reliability were stable and didn't change, you could say you don't have a problem. WISP and wireless bridge systems might do this because both ends of the link are fixed in a fairly stable environment. However, your indoor performance is infested with reflections, multipath, and a changing environment. Line of sight is usually a dream. So, indoor WLAN's are just not stable. What works fine today, may not work tomorrow. Add some interference into this mix, and you will see highly variable performance and reliability.

I've done something like that in office buildings, but not in residential installs. The problem with home routers is that they tend to be all-in-one conglomerations where the location is largely dictated by where all the CAT5 wires, phone lines, CATV, and such come together. It might end up in a closet, dungeon, or sometimes an attic. The order of priorities is usually wiring first, wireless a poor second. This is why I like separate routers and wireless access points.

Are you sure they have a problem? Like I mumbled, if all those users can co-infest Ch 6, having everyone on one channel is either workable, or the users' standards of performance are minimal.

Actually, there may be another explanation. The SSID's that look like "2WIRExxx" (where xxx is the last few digits of the MAC address) are "home networking" systems sold locally by PBI/SBC/at&t. These come with wireless enabled (but encrypted with a WEP key by default). Many of these systems don't have any wireless users or devices other than the access point. It may look like a crowded Ch 6, but many of those systems show no traffic other than broadcasts.

C Q C H 1

Huh?

Reply to
Jeff Liebermann

Jeff, You're sharpening the pencil to just as sharp a point as I.

No, I'm using the Netgear wizard which is wonderfully sensitive, including systems with no SSID.

Yes, Qwest is currently delivering all of their "approved" Actiontec routers defaulted to Channel 9. Their older units (b) were all supplied on Channel 6, but there are only a couple of those here.

That explains one Linksys signal that jumps all over the band, but it mostly uses 1, 3, 6 & 9, the ones that are used most in this area.

In one trip around the block I found:

  Ch  Number
   1       3
   3       2
   5       1
   6       8
   7       1
   9      10
  10       1
  11       4

The hard part is determining at what threshold the weak off-channel signals will actually cause a problem.

That means if I choose a channel with a neighboring "b" user, my network will slow down to their level. Then it makes a case for using the busier but more compatible channel 6 where there are no "b" users.

Under that scenario, it can kill 75% of my packets and still get better than 5Mb, which is as good as a 1.5Mb DSL connection needs.

I decided to leave the router in the upstairs office so that one computer can be connected by ethernet cable...one less wireless point to be concerned about.

Yes, they may not have a "problem", but I would if I lived over there. But these home systems just aren't that active, so problems may be minor.

There's only one. Yes, for a time, Qwest ran out of regular DSL routers, so they were sending out wireless routers to customers who did not know they had wireless. I'm sorta glad to see they come WEP-enabled seeing as these neighbors would be unknowing wi-fi points.

Yes, it appears as though I will be sending a CQ to other CHannel 1 users after I have determined that it isn't affected by our Uniden cordless phones. Uniden claims that they use channels above, below & between channels 1, 6 & 11. I bet they just scan for inactive channels starting at the bottom of the band and would be happy to land wherever.

-Bill

Reply to
Bill Radio

"Bill Radio" hath wroth:

4:30AM and I can't sleep. Might as well do something useful.

Amazing. I never thought to check if it will detect AP's with SSID broadcast turned off. I just happen to have a WG511v2 here I can try.

Nifty. It works. It shows a blank SSID but does display the MAC address, channel number, encryption level, and mode. Thanks. That's going to be handy.

Hmmmm.... very odd. I have a Qwest Actiontec GT701-WG wireless router in the office collecting dust. I could swear it was on Ch 1. I'll check when I can. The URL says the default is Ch 1:

I'm not sure, but I don't think that any of the Linksys routers have that feature. I know Dlink and Buffalo have it. Checking:

Y'er right. WRT54GX4 has automagic channel selection. Looks like it's also on by default. Aaaagh. Time for another minor crusade (along with my "secure by default" crusade).

Time to enable the technobabble option flag. The basic problem is what the receiver declares to be decodable data and what it considers to be noise. If the receiver happens to be listening for 802.11g data, anything that arrives smelling like 802.11b is going to be treated as noise. Same the other way around. Same with everything that's off frequency. Only when the "802.11b compatibility" sampling window arrives is it considered data. Same with Afterburner, SpeedBooster, SuperSpeed, TurboG 125mbps, HSP125, G+, SuperG 108mbps, and so on. I can never remember which ones are 802.11g compatible. Most of the time, I just turn them all off. Anyway, the ones that are not are treated as noise.

One of the really nice things about spread spectrum is processing gain. For FCC Part 15 DSSS, that's 10dB required minimum. That gives 802.11 a 10dB S/N ratio advantage over any form of inband noise or CW carrier, which is what makes SS so immune to most forms of jamming. In theory, it should be possible to have the jamming signal 10dB *STRONGER* than the desired signal, and still have a functional system. Unfortunately, the typical demodulators are not quite that good. I settled for a 0dB signal to jamming ratio because I was lazy, but also because it's a fairly close approximation. I do have some signal to jamming ratio test results for some old 802.11 (1 and 2Mbits/sec only) cards. I'll see if I can dig them out.

Bingo:

Section 4 is on 802.11 jamming. Ugh, I gotta chew on this one. Looks like a C/J ratio of +2dB at 11Mbits/sec, and the author uses the same 10dB processing gain assumption that I mentioned.

Pretty much that's what happens. You don't even need to have an 802.11b neighbor to be slowed down. Just turning on the 802.11b compatibility mode in the AP will slow down the *maximum* 802.11g thruput from about 25Mbits/sec to about 15Mbits/sec.

Realizing that this is a problem, the chipset manufacturers have added some intelligence. The sampling period is shortened if it doesn't hear anything on 802.11b to the point where it's barely noticeable when running 802.11g thruput benchmarks. However, when the AP does hear something that smells like 802.11b, it reverts to about a 25-30% sampling window resulting in slothish thruput.

Also, note that 802.11b sends *ALL* its management packets at 1Mbit/sec. That's a huge airtime burn. 802.11g packets will simply have to wait until the 802.11b access points shut up.

Well, sorta. If the 75% lost packets were randomly distributed, you would still get 1.5Mbit/sec thruput. However, interference effects tend to be synchronous with whatever both systems use for timing, resulting in long periods of continuous collisions, followed by long periods of passable transmissions. Since there are two retransmission mechanisms running (802.11 and IP layer), this creates considerable excess traffic, which actually increases the probability of collisions. The relatively long outages are also what cause the disconnects that are common in the presence of wireless interference. I vaguely recall seeing a Matlab 802.11 simulation that was able to predict the packet loss that could be tolerated to obtain a given thruput. I'll see what I can excavate.
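Here's a small Python sketch of why the distribution of the losses matters as much as the average. Both loss processes below average about 75%, but the bursty one (a made-up two-state model) produces the long outages that trigger the retransmission pile-ups and disconnects:

    import random

    N = 100_000  # packets

    def longest_outage(losses):
        worst = run = 0
        for lost in losses:
            run = run + 1 if lost else 0
            worst = max(worst, run)
        return worst

    # Random (Bernoulli) loss at 75%.
    random_loss = [random.random() < 0.75 for _ in range(N)]

    # Bursty two-state loss: long bad stretches, same ~75% average.
    bursty_loss, bad = [], False
    for _ in range(N):
        bad = random.random() < (0.99 if bad else 0.03)
        bursty_loss.append(bad)

    for name, losses in (("random", random_loss), ("bursty", bursty_loss)):
        print("%s: %2.0f%% lost, longest outage %d packets in a row"
              % (name, 100.0 * sum(losses) / N, longest_outage(losses)))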

Well, until you mentioned it, I've never considered intentionally burying the access point in order to reduce interference. The usual problem is lack of coverage in the house, which I usually solve by positioning of the wireless access point or with custom antennas. Somehow, burying the AP in the basement isn't very compatible with solving the coverage problem. Also, we don't have many basements on the left coast.

Well, 2wire is one of the few manufacturers that ships their routers secure by default. Unfortunately, even if their router has WPA and WPA2 encryption available, it's shipped with only WEP enabled.

You don't wanna hear what I think of CW operation.

Well, the other downside of using Ch 1 is that it overlaps the satellite portion of the 13cm ham band. No clue on the Uniden phone. Watch out for those that use 2.4GHz in one direction, and 900MHz or 5.8GHz in the other. Crossband duplex is cheaper to implement than inband.
Reply to
Jeff Liebermann

snipped-for-privacy@nowhere.invalid (Peter Boosten) hath wroth:

Hell no. It drove me nuts. The most common problem was random disconnects of the client. The AP is smart, but the clients are really stupid. Let's say the AP decides to change channel for some reason. The clients are supposed to be smart and follow the change. Nope. All they know is that the access point has "disappeared". There's no 802.11 command available from the AP that tells all the clients to simultaneously change channel and continue doing whatever they were doing. So, the client eventually times out, scans for a new connection, and reconnects. If the client is REALLY stupid and is set to connect to any available access point, it may even connect to some other AP. I've seen it happen. The manufacturers' answer to the problem is to simply raise the requirements for having the AP change channel. In other words, it switches less often, which effectively defeats its purpose. When I played with a DLink DI-624 to see how autochannel select worked, it was so conservative that I literally couldn't make it switch. My guess is that it requires substantial packet loss before it will decide to switch channels. I guess this is a suitable compromise.

Anyway, unless you have absolute control over the client, how it operates, and how it's configured, automagic channel juggling is an invitation to have the phone ring at odd hours.

Hmmm.... 5:30AM. Maybe I should get some sleep tonite.

Reply to
Jeff Liebermann

I wonder: many WAPs can choose the channel automatically. My previous 3COM OfficeConnect had that feature, and my current HP Procurve has that option.

Does it work in choosing the right (best) channel for me?

Peter

Reply to
Peter Boosten

On Wed, 06 Dec 2006 23:21:39 -0800, Jeff Liebermann wrote in :

Some perhaps, but not all, as shown by data I've previously posted here. Better g devices seem to be unaffected unless at least one b device is actually active.

The drawback, of course, can be excessive hunting and retries. Some products are truly horrible in this regard, much too aggressive in cranking speed back up.

And on VoIP.

But that's still a problem, and the bigger issue is the gain pattern of typical antennas, which assume horizontal paths, and thus suck in the upward direction.

Reply to
John Navas

I think the "counter-intuitive" part applies to those who grew up in the pre-digital world of plain old radio, in which a constant carrier was amplitude or frequency modulated by an analog signal and the phrase "collision avoidance" was mainly found in Mercedes Benz advertisements.

Reply to
Neill Massello

Thanks for your answer, Jeff. I'll keep it fixed, then. My WAP makes it kinda easy for me with the built-in 'Rogue AP detection' feature (it detects even the ones without an SSID).

Regards,

Peter

Reply to
Peter Boosten

snipped-for-privacy@nowhere.invalid (Peter Boosten) hath wroth:

I left out one more reason why automagic channel selection doesn't fly. Not every type of client can scan channels to follow a channel changing access point.

I have several "wireless bridge" or "ethernet client" (or whatever) devices on my neighborhood WLAN/LAN. They're mostly DLink DWL-900AP+ boxes I picked up cheap over the years. Good enough for the tightwad neighbors so I don't have to rip into their computahs, game machines, or install USB drivers. These devices require that the connection details of the wireless access point be set up in advance and stored in their NVRAM. That includes SSID, MAC address of the AP, encryption details, and (insert drum roll) the channel number. None of these can change automagically. More specifically, the only time such wireless client bridge devices scan for a connection is when one manually selects the "site survey" feature. If the AP changes to a different channel, the connection is simply lost.

Reply to
Jeff Liebermann

John Navas hath wroth:

Warning. Technobabble flag is still on.

Yep. I keep wanting to dig into the MIT Roofnet (mesh network) summary which covers the tradeoff between speed and probability of packet delivery.

There was one report that had some really interesting things to say about the tradeoff between speed and reliability.

The issue is simple but the calculations are complex. Slowing the network down to relatively slow speeds improves the S/N margin, which improves the probability of delivering a packet. However, it also reduces the available bandwidth by the same amount. Overly simplified, a 1Mbit/sec data packet might have the same probability of delivery as a 54Mbits/sec connection that fails to deliver the same packet 53 times, but succeeds on the 54th try. (This isn't numerically true because the 54Mbit/sec connection has a higher overhead of management packets and inter-symbol timing.)

In the presence of noise and interference, things change again. A long 1Mbit/sec packet makes a very big target and has a high probability that the noise and junk are going to trash the packet. At 54Mbits/sec, the packets are much shorter (in time) and therefore have a lower probability of getting trashed. However, the required S/N ratio at 54Mbits/sec is MUCH higher than at 1Mbit/sec, making it easier to trash packets at 54Mbits/sec than at 1Mbit/sec. I think you can see where I'm heading. Everything affects everything and the math is messy.
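The "big target" half of that tradeoff is easy to put numbers on. A rough Python illustration (the 50 interference bursts per second is an arbitrary assumption, and the real modulation differences are ignored):

    import math

    FRAME_BITS = 1500 * 8  # a full-size data frame
    BURST_RATE = 50.0      # assumed interference bursts per second

    for mbps in (1.0, 11.0, 54.0):
        airtime_s = FRAME_BITS / (mbps * 1e6)
        # Poisson arrivals: chance at least one burst lands on the frame.
        p_hit = 1.0 - math.exp(-BURST_RATE * airtime_s)
        print("%4.0f Mbit/s: %5.2f ms on the air, %4.1f%% chance of a hit"
              % (mbps, airtime_s * 1e3, 100.0 * p_hit))
    # The slower rate's bigger target is offset by its much lower
    # required S/N -- which is exactly why the math gets messy.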

Just to make life complicated, there are two retransmission mechanisms working. One is at the wireless MAC layer, where the 802.11 protocol detects delivery failures and retries sending the packet. The other is at the IP layer, where TCP detects errors and asks for an instant replay. If you ever see packet loss at the IP layer (using netstat), you can be sure that the packet loss at the MAC layer is truly horrible.

The wireless chip manufacturers have made their decision with automatic speed selection that favors the higher speeds. This is a vote for "damn the retries, full speed ahead". In theory, there's a highly intelligent processor running a patented algorithm for selecting the correct speed. From my observations, it's a crap shoot. So, we see lots of retransmissions and the sale of lots of add-on antennas to improve the S/N ratio. Wanna greatly improve the range of your wireless system? Easy, just slow it down.

Yep. Any streaming media hates packet loss because there are no retransmissions. The current technology is optimized for TCP, where packet loss can be tolerated to some degree by retransmissions. With UDP, if a packet is lost, it's lost forever. So, the access points now have a "wireless multi-media mode". This is a form of QoS that roughly gives priority to UDP packets over TCP. I couldn't find anything that connects WMM to wireless speed control, but I suspect that there may be some connection in the actual implementation. It makes sense to slow things down to improve reliability.

Reply to
Jeff Liebermann
