How many Google wireless access points for San Francisco?

His skepticism about the whole idea is not much different from my own. Like Seybold, I think interference in the 2.4 GHz band is going to be a monstrous issue, among other things.

Which is why I was thinking (as posted earlier) that they should probably consider something like proprietary, neighborhood-based mesh radios which local users can tap off of and distribute either via wires or their own WAPs. Just run a "backbone" and let the end-users install the local radios so you can wash your hands of the local interference issues.

Then the challenge would be how to interconnect the backbone to the local segments. Sigh.

I've got it -- Bluetooth! Hehehehe...

Reply to
Philip J. Koenig

What's the point of putting radios in 5 GHz spectrum when almost no users are going to be able to receive those signals directly?

Telling everyone to go buy 802.11a hardware is going to be a non-starter.

As I posted a few days ago, for the SF bid Google is partnering with a San Diego-based wireless engineering company which, among other things, will be involved in building a muni wifi system in Madison, Wisconsin.

This is the partner:

formatting link

Reply to
Philip J. Koenig

A bit of math on counting the number of wireless access points required to cover San Francisco.

The original numbers:

formatting link
The revised version:
formatting link

Reply to
Jeff Liebermann

The eventual result:

formatting link

Reply to
Phil Nelson

Possibly, but a lot of technology is driven that way. 5 GHz hardware is uncommon and expensive because everyone piled into the 2.4 GHz band. If they chose the 5 GHz band for SFO, they would free themselves from the current issues, and 5 GHz equipment would become just as common and inexpensive.

Reply to
George

Lessee. If the inter-access point spacing is D, then the density is 1/D^2 per unit area. If they are optimally arrayed in a honeycomb fashion, the maximum distance to a client is D / (2 * cos(pi/6)), or 0.57 * D.

If you want 300 feet to be the maximum distance to a client, then D = 526 feet, which is pretty close to exactly 0.1 miles, so yeah. 100 AP's per square mile. The revised calculation is correct.

But (a) 300 feet is too far for this all to work and (b) you won't be able to optimally place them so you'll need more. I'd say 200 per square mile if you want it not to choke, or 10,000 access points in San Francisco.
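
For anyone who wants to replay that arithmetic, here is a minimal Python sketch of the same back-of-the-envelope geometry. The 300 ft client range and the 2x margin for non-ideal placement are Steve's assumptions from the post above, and the ~50 square mile city area is only what the 200-per-square-mile and 10,000-AP figures imply.

import math

MAX_CLIENT_DISTANCE_FT = 300      # assumed maximum AP-to-client range
FT_PER_MILE = 5280.0
CITY_AREA_SQ_MI = 50              # implied by 10,000 APs at 200 per square mile

# Honeycomb layout: maximum distance to a client is D / (2 * cos(pi/6)),
# so solve for the inter-access-point spacing D.
spacing_ft = MAX_CLIENT_DISTANCE_FT * 2 * math.cos(math.pi / 6)   # ~520 ft
spacing_mi = spacing_ft / FT_PER_MILE                              # ~0.1 mi

# Density approximation used in the post: one AP per D^2 of area.
ideal_aps_per_sq_mi = 1 / spacing_mi ** 2                          # ~100

# Steve's 2x fudge factor for real-world (non-optimal) placement.
real_aps_per_sq_mi = 2 * ideal_aps_per_sq_mi                       # ~200
total_aps = real_aps_per_sq_mi * CITY_AREA_SQ_MI                   # ~10,000

print(f"spacing: {spacing_ft:.0f} ft ({spacing_mi:.2f} mi)")
print(f"ideal: {ideal_aps_per_sq_mi:.0f} APs/sq mi, "
      f"realistic: {real_aps_per_sq_mi:.0f} APs/sq mi, "
      f"citywide: {total_aps:.0f} APs")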

Steve

Reply to
Steve Pope

Not common? 5.7GHz is where all the wireless backhauls are currently located.

formatting link
If SF decided to go with 5.7GHz 802.11a clients, then I'm sure the market would gladly support replacing everything you own with the latest and greatest. (Evolution. Upgrade or die.) However, relocating the backhauls to 22GHz may be a bit of a challenge. Considering the mesh network rule of thumb of the week of no more than 3-4 hops from client to backhaul, and 100 access points per square mile, that means 25-33 backhaul radios or wired connections per square mile.
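
As a sanity check, here is that rule-of-thumb arithmetic in a few lines of Python. The hop budget and the 100-AP density are the figures from this post; the simple APs-divided-by-hops model is exactly what the follow-ups below take issue with.

# Rule-of-thumb arithmetic from the post above: 100 APs per square mile and
# one backhaul (wired or radio) for every 3-4 mesh hops worth of APs.
APS_PER_SQ_MI = 100

def backhauls_per_sq_mi(max_hops, aps_per_sq_mi=APS_PER_SQ_MI):
    # One backhaul per max_hops APs, as in the rough model above.
    return aps_per_sq_mi / max_hops

for hops in (3, 4):
    print(f"{hops} hops -> ~{backhauls_per_sq_mi(hops):.0f} backhauls per sq mi")
# 3 hops -> ~33, 4 hops -> ~25; the replies argue this model is off because
# each extra hop stretches the coverage radius rather than just splitting APs.
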
Reply to
Jeff Liebermann

There are 25 unlicensed channels in the 5 GHz band. I assume they will deploy there -- otherwise, it is completely hopeless.

What I don't know is whether Google has an actual engineering team on this or whether it's still just in marketing.

Steve

Reply to
Steve Pope

Y'er correct. It's 24GHz. Equipment is available but it's not cheap. Dragonwave Air Pair 50:

formatting link
$22,000 each.

I've worked on wasted projects that proposed to solve technical issues by throwing money at the problem. I've also learned that if it doesn't work on paper, it's not going to work in the field. I find the Philadelphia system entertaining to follow, as the technical details and the entire topology seem to change every month. I suspect that the SF network will be much the same as Google discovers RF reality. I'm sure the stockholders will eventually have something to say when this project starts resembling a financial black hole.

I also find it entertaining to read about how such networks will be used. The common description is "untethered access", where users with PDAs, laptops, and VoIP phones roam around the city while browsing the internet. I learned a few lessons watching Metricom and found that what most users *REALLY* want is a cheap full-time connection in the home. My guess(tm) is that the bulk of the use will be from fixed locations and solely as a cost reduction (or elimination) expedient.

Reply to
Jeff Liebermann

Adoption of 5 GHz is a lot slower than many in the industry predicted, but it will soon be pretty standard. Those with older 2.4 GHz-only devices will just have less coverage -- much as a single-band cellphone (if there is such a thing now) has less coverage.

Steve

Reply to
Steve Pope

I haven't really followed this discussion, and I don't know any of the technicalities, but surely that's more like 9 - if you've got your 100 APs lined up on a grid 0.1 miles apart, then you would only need your backhauls to be every 0.3 to 0.4 miles, so placing them 1/3 mile apart should do it.
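
A tiny sketch of that grid argument, assuming the 0.1-mile AP grid and the 3-4 hop budget from the earlier posts:

# Derek's grid argument: APs every 0.1 mi (100 per square mile), and a
# backhaul only needed every 0.3-0.4 mi, so put backhauls on a coarser
# grid roughly 1/3 mile apart.
BACKHAUL_SPACING_MI = 1 / 3

backhauls_per_mile_side = 1 / BACKHAUL_SPACING_MI        # 3 per linear mile
backhauls_per_sq_mi = backhauls_per_mile_side ** 2       # 3 x 3 = 9 per sq mi
print(round(backhauls_per_sq_mi))                        # 9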

Reply to
Derek Broughton

You mean 24 GHz I think.

Despite their fumbling, Google definitely has the money to deploy the hardware this project requires. Google's only limits are the laws of physics.

S.

Reply to
Steve Pope

Don't be so quick to assume that using 802.11a will magically solve all the congestion issues. Certainly in the beginning there will be fewer 802.11a APs to contend with (and also fewer cordless phones, microwaves, and baby monitors).

But it is still unlicensed spectrum, free for anyone to put up their own 802.11a WAP whenever they want and point it wherever they want, so no matter how well it seems to work on rollout day, things could very well turn pretty cruddy as more and more devices come online (both part of the municipal system and independent of it).

Not to mention that making end-users buy 802.11a radios to connect to the system would fly in the face of the whole philosophy of the project, which is to "bridge the digital divide". People with bucks already have fancy broadband; it's the people without the bucks that they're really targeting. Those people aren't likely to go out and spend $75 just so they can use that "free" internet.

Reply to
Philip J. Koenig

At $96 billion in market valuation, they'd have to spend an awful lot on hotspots to disappoint investors.
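
For scale, here is a rough sketch using only figures that appear elsewhere in the thread: the $22,000 Dragonwave backhaul price, the 9-33 backhauls-per-square-mile range argued above, and the ~50 square miles implied by the 10,000-AP estimate. Client APs, installation, and operations are left out, so treat it purely as an illustration of the point.

BACKHAUL_UNIT_COST = 22_000       # Dragonwave Air Pair 50 price quoted above
CITY_AREA_SQ_MI = 50              # implied by 10,000 APs at 200 per square mile
BACKHAULS_PER_SQ_MI = (9, 33)     # low/high estimates from the thread

for per_sq_mi in BACKHAULS_PER_SQ_MI:
    total = per_sq_mi * CITY_AREA_SQ_MI * BACKHAUL_UNIT_COST
    print(f"{per_sq_mi}/sq mi -> ${total / 1e6:.1f}M in backhaul radios alone")
# Roughly $10M-$36M: a rounding error against a ~$96B market cap, which is
# the point being made above.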

Steve

Reply to
Steve Pope

Well, I screwed up by a factor of two, by mixing radius and diameter.

The 3-4 hop assumption I made was that a mesh network client (user) radio should not need to endure more than 3-4 store and forward hops before hitting a backhaul to the internet. To keep the hop count down to 3 hops, the effective radius of coverage simply increases by a factor of 3. Therefore, the original 300ft radius becomes 900ft.

Now, visualize an array of 1800ft diameter circles (cells) fitting into a 1 mile square (5280ft on a side). That would be about 3 circles on a side or 9 circles. Inside each cell would need to be a wired access point or the hop count will exceed 3.

That's still not quite right because the enlarged 1800ft diameter cells have considerable dead area between diagonal circles. So, they have to be a bit smaller with some overlap to compensate for this dead area. My guess(tm) is about 700ft radius or 1400ft diameter. That's about 4 cells on a side or a total of 16 wired access points.

However, that's still not correct. I'm assuming a Cartesian 3x3 array with a not very tight packing factor. If the circles were arranged for optimum packing factor (similar to the hexagonal cellular arrangement), the square mile can be covered with about 12 of the 1800ft circles (including combined parts of cells).
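
A short Python sketch of the corrected geometry in the three paragraphs above, under the same assumptions (a 300 ft AP radius stretched to 900 ft by the 3-hop budget, and a 5280 ft square mile). The hexagonal figure divides the square mile by the hexagon inscribed in each 900 ft circle, which lands in the same ballpark as the "about 12" estimate.

import math

SQ_MI_SIDE_FT = 5280.0
EFFECTIVE_RADIUS_FT = 300 * 3     # 300 ft AP radius x 3-hop budget = 900 ft

# Naive square grid of 1800 ft circles: about 3 per side -> 9 wired APs,
# but with dead area between diagonally adjacent circles.
naive_per_side = round(SQ_MI_SIDE_FT / (2 * EFFECTIVE_RADIUS_FT))       # ~3
print(f"square grid, 1800 ft cells: {naive_per_side ** 2} wired APs")

# Shrink the cells to ~700 ft radius so the overlap covers the dead spots:
# about 4 per side -> 16 wired APs.
shrunk_per_side = math.ceil(SQ_MI_SIDE_FT / (2 * 700))                  # ~4
print(f"square grid, 1400 ft cells: {shrunk_per_side ** 2} wired APs")

# Hexagonal packing: each 900 ft circle fully covers an inscribed hexagon
# of area (3 * sqrt(3) / 2) * r^2, so divide the square mile by that.
hex_cell_area = 1.5 * math.sqrt(3) * EFFECTIVE_RADIUS_FT ** 2
print(f"hexagonal packing: ~{SQ_MI_SIDE_FT ** 2 / hex_cell_area:.0f} cells")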

Thanks for the correction. (Grumble...Gone sulking.)

Reply to
Jeff Liebermann
[POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

What makes you think that Google is dumb enough to have not studied the RF issues? A lot of very, very smart folks are at Google, which has a pretty sterling track record.

Reply to
John Navas

I didn't even suggest that Google hadn't "studied" the RF issues. They can study something until they're buried in reports and paperwork. What makes the distinction between studies and reality is actually building a system and trying to make it play. That doesn't mean an in-house system inside the Googleplex, but a real live outdoor WISP (wireless ISP) system complete with interference issues, irate customers, tech support, maintenance nightmares, physical access issues, and local politics. The RF issues are just part of the puzzle.

Perhaps I'm not giving the very smart folks at Google sufficient credit for predicting the real and potential problems with deploying such a wireless network. Perhaps Google's math isn't much better than mine[1]. Perhaps I'm being just my normal cynical self. Perhaps I'm biased by previous experiences with very smart people offering science fiction business plans in the dot-com era. I just look at the numbers, the history, and my limited experience, and offer my judgment. I don't subscribe to appeals to authority.

[1]
formatting link
also made the same calculation error plus some additional math screwups. Oops.

Reply to
Jeff Liebermann

Flash! -- Homeowner Foiled Again -- Film at 11.

Reply to
kashe
