Sharing a connection

It really depends on what you and your partner are doing on the internet. If both of you are just surfing the web, which uses most of the bandwidth in the downstream direction (incoming), the bandwidth will divide roughly equally. However, if your partner is doing peer-to-peer networking or file sharing, the bulk of that traffic is upstream (outgoing). On a DSL line, the available upstream bandwidth is only about 1/5th of the downstream, so it doesn't take much to saturate it. A saturated upstream will block or delay the ACKs (acknowledgements) sent by your computer to the various servers on the internet. The result is very slow download traffic for your computer even though you have lots of downstream (incoming) bandwidth available.

The only way around it is to purchase a replacement router that supports QoS (quality of service) and configure it to prevent saturating the upstream. ACKs take very little bandwidth, so reserving perhaps 20% of the upstream bandwidth will suffice.
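As a rough back-of-envelope check on how little bandwidth the ACKs actually need, here's a small Python sketch. The 1500/384 line rates, 1500-byte packets, 40-byte ACKs, and one delayed ACK per two segments are typical assumed values, not measurements from any particular line:

# Estimate the upstream bandwidth consumed by ACKs while the downstream
# runs flat out on an assumed 1500/384 kbit/sec ADSL line.

down_kbps = 1500          # downstream line rate, kbit/sec
up_kbps = 384             # upstream line rate, kbit/sec
segment_bytes = 1500      # full-size downstream data packet
ack_bytes = 40            # TCP ACK with no payload (IP + TCP headers)
segments_per_ack = 2      # delayed ACK: one ACK per two data segments

# Downstream packets per second at full line rate
segments_per_sec = (down_kbps * 1000 / 8) / segment_bytes

# Upstream bandwidth needed just for the ACKs
ack_kbps = segments_per_sec / segments_per_ack * ack_bytes * 8 / 1000

print(f"ACK traffic: ~{ack_kbps:.0f} kbit/sec "
      f"({ack_kbps / up_kbps:.0%} of the {up_kbps} kbit/sec upstream)")

That works out to roughly 20 kbit/sec, or about 5% of a 384 kbit/sec upstream, which is why a 20% reservation is plenty.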

Also, the upstream can be saturated by trojan horse programs, worms, DDoS attacks, spambots, and viruses that are using your computers to launch attacks. You might want to verify that there is no unusual traffic or vermin installed on your machines.

Reply to
Jeff Liebermann

Hi. We have a Belkin Wireless 54G router which is hard-wired (ethernet) to my machine and wirelessly connected to my partner's machine in another room. The problem we have is that the bandwidth seems to go to the other machine by default, i.e. when it's using a lot of bandwidth, mine can't connect to the internet or is VERY slow. Any ideas?

Thanks in advance

Reply to
new identity

Good enough. It's clean.

It would be helpful if you disclosed the model number of your Belkin router so I can check whether it has any usable features.

Configuring a port "just for that particular program" probably means that you redirected the IP port numbers to your partner's machine for incoming traffic. That does not control the amount of traffic; it only assigns which machine gets the file sharing junk. That's not QoS.

Most (not all) file sharing programs have built-in bandwidth limiters, where the user can control the bandwidth of the outgoing traffic. If you set it to about half (or less) of your outgoing (upstream) broadband bandwidth, it should prevent saturation.

I don't think you'll need a new router if the file sharing program has its own built-in bandwidth limiting or throttling. I kinda prefer the Linksys WRT54G with DD-WRT firmware using Wonder Shaper.

formatting link
It's complex, but works well. Heavy reading:
formatting link
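If you go the client-side route instead, the only arithmetic is converting your upstream rate into the KB/sec figure most file sharing clients expect for their upload limit. A minimal sketch in Python, assuming a 384 kbit/sec upstream (substitute your own line rate):

# Convert an upstream line rate into an upload cap for a file sharing client.
# Setting the cap to about half the upstream leaves headroom for ACKs.

upstream_kbps = 384        # assumed ADSL upstream, kbit/sec
cap_fraction = 0.5         # use half (or less) of the upstream

cap_kbps = upstream_kbps * cap_fraction
cap_kBps = cap_kbps / 8    # most clients take the limit in KBytes/sec

print(f"set the client's upload limit to about {cap_kBps:.0f} KB/sec "
      f"({cap_kbps:.0f} kbit/sec)")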

Reply to
Jeff Liebermann

Thank you for the information... Both machines are clean (Norton/AVG/Adaware/Outpost, etc.). Yes, he is doing some file sharing, and I have configured a port just for that particular program, but there is very little improvement. Which router do you recommend that has QoS?

Thanks again in advance

Reply to
new identity

F5D7230-4

formatting link
I don't see any QoS or bandwidth management features in the router. However, methinks you'll have better luck throttling the unspecified file sharing program by using its configuration features. For example, for BitTorrent, read:
formatting link

Reply to
Jeff Liebermann

Thanks again for the information. Our router is a Belkin F5D7230-4.

Reply to
new identity

That's a rather simplistic explanation, and not likely the case. In such a simple "network", the bottleneck is more likely due to the number of connections that each user has open. Each session can have one full window of traffic outstanding. Assuming downloading is the bottleneck, today's applications all use stupidly large windows (Linux defaults to 64K, Windows to 32K or more in later versions). The reason file sharing is such a parasite is that the apps open tons of connections. With a browser, you might have 4 or 5 connections to download a page. With p2p you might have 100. That means that 100 windows of data (100 x 32K bytes) could potentially be outstanding in the upstream router's queue, which creates huge delays.

This is the reason the shapers that do window shaping (Packeteer, etinc.com) are superior in performance. Limiting bandwidth doesn't reduce the delay in the upstream router unless it also shapes the window downward to reduce the amount of outstanding data. You'll get some relief, but if all of your traffic is unidirectional you'll still have huge queue delays in the upstream router, particularly on lower speed lines.

Dennis

formatting link

Reply to
dennis
[POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

It's actually quite correct.

Only if that overloads the router, which would manifest itself in failures to connect, not just slow performance.

True, but that won't affect performance.

That's possible, but the more likely culprit with common file sharing on *asymmetric* connections is saturation of the uplink.

That actually makes sense, because it improves throughput when latency is high.

Again, saturation of the uplink is the most common problem on *asymmetric* connections.

A low-end router doesn't "queue" window data -- it just has relatively small FIFO buffers.

As Jeff wrote, what's more important on *asymmetric* connections is preventing saturation of the uplink. See RFC 3449, "TCP Performance Implications of Network Asymmetry".

With all due respect, that's just not true.

Reply to
John Navas

Please use your head. Try to absorb some simple 3rd-grade math here. And learn to pay attention to people who have developed real solutions to problems rather than those who have written some ridiculous white paper that gets them no more than a pat on the back from a bunch of bearded video-game experts.

Take the case where a single user on a 1Mb/s connection opens 100 connections and a GET results in one window of data. The 100 remote servers can send 100 x 32K of data, or 3.2MB of data. Suppose all of that data arrives at your ISP's upstream router at once, or close to it. The router can only send data out at 1Mb/s, because that's the line rate. So sending the 3.2MB of data (about 25 megabits of traffic) will take about 25 seconds at 1Mb/s. Any request made after the 100 connections would experience a 25-second delay in this case, because the queue in the upstream router is filled with 25 seconds of data (if it doesn't get dropped). There's nothing you can do locally to change that. You can't tell the upstream router to do anything other than FIFO, so any subsequent requests will have to wait for the upstream queue to clear before even the smallest packet can get through.

With a window reduction to 1K, those 100 connections can now only produce 100KB of data, or 800 kilobits, which is less than 1 second of delay.
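For anyone who wants to plug in their own numbers, here's a minimal Python sketch of the arithmetic above. The line rate, connection count, and window sizes are the figures from the example (decimal K, as in the text), not measurements:

# Worst-case queue delay at the ISP's router when many connections each
# deliver one full TCP window at roughly the same time.

def queue_delay_seconds(connections, window_bytes, line_rate_bps):
    # Time for a FIFO link to drain one full window from every connection.
    outstanding_bits = connections * window_bytes * 8
    return outstanding_bits / line_rate_bps

line_rate_bps = 1_000_000        # 1 Mb/s line
connections = 100                # p2p client with 100 open connections

for window_bytes in (32_000, 1_000):   # 32K default window vs. 1K shaped window
    delay = queue_delay_seconds(connections, window_bytes, line_rate_bps)
    print(f"{window_bytes // 1000:>2}K windows: ~{delay:.1f} seconds of queued data")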

If you don't window shape, there is NO WAY to stop the remote servers from sending one full window of data. "Asymmetry" is a fact of life. You can't control the behavior of your ISP's router. You can't do it by manipulating ACKs, because there is too much data in flight before you even get to ACK.

RFCs such as the one you've cited are simplistic and useless on a large network. With thousands of connections and relatively low bandwidth, you need to pace your traffic to eliminate bottlenecks. There's just no way around it.

Dennis

formatting link
Bandwidth Management Solutions

Reply to
dennis

If it really was a 25 second delay, then TCP would have timed out long ago and requested a retransmission. The retransmissions would pile up endlessly and bottleneck the connection even further, resulting in almost zero thruput. I've only looked at BitTorrent to see how it works. I'm sure if their users incurred 25 second chronic response delays, the protocol would do something about it at the source. The general effect is that the connection just stops and drops. Users are complaining about the problem, and solutions are being offered that limit the bandwidth and number of simultaneous connections:

formatting link
The number of simultaneous connections can easily be limited by the user with something like:
"btdownloadgui.exe" --minport 10000 --maxport 10030 --responsefile "%1"
which limits the available ports to 30, and thus also limits the number of simultaneous connections.

Also:

formatting link
At the bottom of the page, it mumbles something about "Choking is done for several reasons. TCP congestion control behaves very poorly when sending over many connections at once. Also, choking lets each peer use a tit-for-tat-ish algorithm to ensure that they get a consistent download rate." Apparently, the protocol throttles multiple users in rotation every 30 seconds, allowing some of the streams to have short delays while others probably time out. I'm not sure I'm reading this part correctly. It also appears to delay opening new streams.

Sorta. The ISP can play QoS and MPLS games and reduce the priority of any packets deemed to be file sharing. That would allow time-critical packets to arrive ahead of the file sharing junk.

That's possibly workable for asymmetrical network connections but is far too low for symmetrical fast connections. The SWIN (sending window) buffer has to be sufficiently large for the sliding window to buffer everything sent until the sending server requires an ACK. More simply: buffer = bandwidth * latency. I'll use about 30 msec for typical DSL latency across the internet. For my 1.5 Mbit/sec SBC ADSL connection, that's: buffer = 1.5 Mbits/sec * 30 msec = 45 Kbits = about 5.6 KBytes

If the buffer were set to 1 KByte, the maximum bandwidth that could be shovelled through a 30 msec DSL line would be: 8 Kbits / 30 msec = 267 Kbits/sec, which is a bit low for an outgoing connection on a typical DSL line. (Mine is 1500/384.) However, it would sorta solve the problem by causing the server to sit and wait for ACKs, which would leave some available bandwidth of about: 384 Kbits/sec - 267 Kbits/sec = 117 Kbits/sec. That should be sufficient for ACKs for anyone sharing the connection to do conventional web surfing. Therefore, methinks that lowering the SWIN buffer to 1 KByte has the desired throttling effect, but only because the TCP window size limits the thruput.
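Here's a minimal Python sketch of the same arithmetic, using the figures above (1500/384 ADSL, ~30 msec round trip, 1 KByte window treated as 8 Kbits); the numbers are illustrative assumptions, not measurements:

# Bandwidth-delay product and window-limited thruput for the
# 1500/384 kbit/sec ADSL example above, assuming a ~30 msec round trip.

down_bps = 1_500_000      # downstream line rate, bits/sec
up_bps = 384_000          # upstream line rate, bits/sec
rtt_sec = 0.030           # assumed round-trip time

# Send-window buffer needed to keep the pipe full: bandwidth * latency
bdp_bits = down_bps * rtt_sec
print(f"buffer needed: {bdp_bits / 1000:.0f} Kbits (~{bdp_bits / 8 / 1000:.1f} KBytes)")

# Thruput with a 1 KByte (8 Kbit) window: one window per round trip
window_bits = 8_000
throttled_bps = window_bits / rtt_sec
print(f"1 KByte window limits thruput to ~{throttled_bps / 1000:.0f} Kbits/sec")
print(f"upstream headroom left: ~{(up_bps - throttled_bps) / 1000:.0f} Kbits/sec")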

Reply to
Jeff Liebermann

You know, if you start a message like that _and_ you don't quote properly, you manage to alienate _all_ your potential readers.

Reply to
Derek Broughton
[POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

My experience says otherwise, and is consistent with the IETF and the RFC, so we'll just have to agree to disagree.

Reply to
John Navas
[POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

That's not how it works.

Correct. This potential issue is discussed in RFC 1323, "TCP Extensions for High Performance" (May 1992, Standards Track); see "RTTM -- Round-Trip Time Measurement". And as noted in my earlier response to this thread, see RFC 3449, "TCP Performance Implications of Network Asymmetry".

Reply to
John Navas

Yes, it is how it works. Get yourself a LAN monitor and use empirical data instead of relying on theory and RFCs to formulate your ideas. Slow start is a fantasy. A realistic implementation of slow start would cause benchmarks to suck rocks, and it would cause servers to be slow as crap and need 5 times the amount of memory.

It was just an example to make the math easy for you. Sorry if I confused you, but the concept should be easy enough to extrapolate out to a real network that has 50+ TCP requests per second.

No, you're wrong. The RFCs do not address the dynamics of hundreds of connections from different devices. The first RFC was written when cavemen walked the earth (and TCP windows were 256) and really has no relevance to modern networking dynamics. The second one is just plain wrong. Well, it's not wrong, but it doesn't propose a practical solution. It misses the entire point, which is that the problem is today's large default windows.

Here's the deal. We now have broadband, and the clowns running the Microsoft and Linux camps want their benchmarks to show really high throughput, so windows are typically 32K to 64K, and they are advertised immediately. This changes everything. Because when you do a typical GET, whether it's a graphic or a web page or a directory, the entire result fits in one window. So the connection is:

GET
send the answer...
...wait for ACK
FIN

The connections are too short-lived for any protocol adjustments. So the example that I cited, although perhaps not likely, is EXACTLY how it works. And it's exactly why TCP ACK management is irrelevant most of the time: the vast majority of TCP connections are not lengthy enough to be affected by it.

TCP ACK management is not in play when you are talking about network management. You're dealing with hundreds or thousands of short-lived connections that cumulatively can request a lot more data than the size of the pipe, in a very greedy manner. The only practical solution is to reduce the amount of data that can be requested.

DB

formatting link

Reply to
dennis
[POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

Been there; done that, a great many times.

TCP/IP throttling actually works quite well. If it didn't, high-speed servers and backbones simply wouldn't work.

We'll just have to agree to disagree.

Reply to
John Navas

We don't "disagree". You're dead wrong, as most people who rely on theory are.

Reply to
dennis
[POSTED TO alt.internet.wireless - REPLY ON USENET PLEASE]

Actually we do.

Actually I'm not.

I'm not relying on theory.

Now please give it a rest. This isn't the place to be flogging your business.

Reply to
John Navas

You know, it's funny that no matter how clear something might be, there's always one bubblehead who will pipe in claiming that the sky is purple or something. It never seems to fail.

Your inability to substantiate any of your "points" leads me to doubt that you've ever actually done anything, so excuse me if I'm skeptical. Your "points" that TCP throttles really have no purpose. The POINT isn't that TCP doesn't throttle; it's that the resulting connections SUCK when you have more connections than your bandwidth can support easily. I'm talking about making connections not suck. I'm not sure what you're talking about, but it has nothing to do with cleaning up a network, which is the point here. If you have empirical evidence to support your position, I'm sure everyone would like to hear it.

You seem to ignore the fact that every network that has less bandwidth than it needs 100% of the time has congestion problems. If you've only worked on huge networks that have more bandwidth than they need, then you really don't understand the subject. I have 3000 customers who will disagree with you. If you care to reject the real world, that's your choice, but endorsing yourself as an authority on a subject you clearly don't understand is a detriment to the entire community. In this thread, if two guys sharing a connection have unacceptable delays, then it's obvious even to a complete idiot that TCP alone will not "handle" the problem without help. And anyone who has seen real products in action knows that ACK management doesn't solve any real problems. I've explained why it doesn't.

Reply to
dennis
