General question about TCP and buffering in switch/router/modem

I have the following setup:

    COMPUTERS      ROUTER        MODEM1    MODEM2
        |          |    |           |         |
        ============    =======================
          VLAN 10   [Switch]      VLAN 20

The ROUTER has a fixed 10 Mbps/half-duplex interface for its WAN port. MODEM1 has a fixed 10 Mbps/half interface (acts as backup). MODEM2 auto-negotiates 10/100, half/full duplex.

The ADSL service is 5 Mbps down / 800 kbps up.

The switch negotiates properly with each device.

QUESTION:

Say the window size is 64 KB (roughly 50 packets). When the computer starts sending data after the TCP connection has been established, is it correct to say that it will send a burst of 50 packets at 100 Mbps non-stop to fill the window, and only after that will it moderate its sending rate to match received ACKs?

If the router's buffers are not large enough, it will drop packets from that initial burst of 50 packets, right?

Similarly, the modem will be receiving data over 10 times faster than it can push it onto the ADSL line, so the modem would also be losing plenty of packets in that initial burst.
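Putting rough numbers on that mismatch (the 1460-byte segment size is just an assumption; the other figures are from my setup), a quick sketch:

```python
# Rough numbers for the burst described above. Assumed: 1460 bytes of TCP
# payload per segment; the 64 KB window, 10 Mb/s router-to-modem link and
# 800 kb/s ADSL upstream are from the setup described in this post.
WINDOW_BYTES = 64 * 1024
MSS = 1460           # bytes of TCP payload per segment (assumption)
IN_RATE = 10e6       # bits/s arriving at the modem from the router's WAN port
OUT_RATE = 800e3     # bits/s the modem can push onto the ADSL line (upstream)

segments = WINDOW_BYTES / MSS                 # ~45 segments in a full window
arrive_s = WINDOW_BYTES * 8 / IN_RATE         # ~52 ms for the burst to arrive
drain_s = WINDOW_BYTES * 8 / OUT_RATE         # ~655 ms to push it up the line
backlog = WINDOW_BYTES * (1 - OUT_RATE / IN_RATE)   # bytes the modem must hold

print(f"~{segments:.0f} segments arrive in {arrive_s * 1000:.0f} ms, "
      f"drain in {drain_s * 1000:.0f} ms, peak backlog ~{backlog / 1024:.0f} KB")
```

So the modem would need to hold roughly a full window's worth of data to avoid dropping any of that burst.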

Is it normal/expected that the first batch of packets in a TCP connection will experience a high packet loss rate? Or is the expectation that the devices along the way should be able to buffer the full window size?

While I am at it: is there a standard time/logic that a sender uses to wait for an ACK before starting to resend data from the last ACKed sequence number?

In terms of the switch:

Since the router's WAN port is fixed at 10/half and it talks to the modem: are there any advantages/disadvantages to having the modem at 100/full vs. 10/half in terms of performance?

aka: router 10/h <-> modem 10/h, versus router 10/h <-> modem 100/f.

My thinking is that by having the modem at full duplex, it would remove some latency and would let the switch worry about feeding the older router at the right time/rate.

Reply to
JF Mezei

No, it is not. TCP only increases the congestion window by one packet each time an ACK is received and the receiver's TCP buffer is not full. Each time a packet is dropped, the congestion window is cut in half. This is a gross simplification of how the initial TCP window grows, but it is adequate to answer your question.
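If it helps to see the shape of that behaviour, here is a toy sketch of the simplified rule above (one extra segment per ACK, halved on a drop). The 25-segment bottleneck is an arbitrary assumption, and real stacks distinguish slow start, congestion avoidance and fast recovery:

```python
# Toy model of the simplified rule above: the congestion window grows by one
# segment per ACK and is halved on a drop. This is NOT a faithful TCP stack
# (no slow-start threshold, no fast recovery); it only shows the sawtooth.
def simulate(acks=60, bottleneck=25):
    cwnd = 1                          # congestion window, in segments
    trace = []
    for _ in range(acks):
        if cwnd > bottleneck:         # the burst overflows the path: a drop
            cwnd = max(1, cwnd // 2)  # multiplicative decrease
        else:
            cwnd += 1                 # one more segment allowed per ACK
        trace.append(cwnd)
    return trace

print(simulate())                     # ramps up, then oscillates around the bottleneck
```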

It is NEVER expected that any device (router) between the sender and receiver will buffer ANY traffic. Buffers in routers are there because traffic is inherently bursty, and buffers allow traffic to be queued for a very small period of time instead of being dropped. "Small" is relative to the speed of the interface: on a 56K line, small might be 500 ms, but on a 100 Mb/s interface small might be only 1 ms or 100 us. Buffering is NEVER a substitute for bandwidth, and buffering data causes jitter.
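To see why "small" depends so much on line speed, here is a quick calculation of how long the same amount of buffered data takes to drain at different rates (the 64 KB buffer size is an arbitrary example, not a figure from this thread):

```python
# How long a given amount of buffered data takes to drain at different link
# speeds, i.e. the worst-case queueing delay that buffer can add.
BUFFER_BYTES = 64 * 1024

for name, rate_bps in [("56 kb/s line", 56e3),
                       ("800 kb/s ADSL up", 800e3),
                       ("10 Mb/s Ethernet", 10e6),
                       ("100 Mb/s Ethernet", 100e6)]:
    drain_ms = BUFFER_BYTES * 8 / rate_bps * 1000
    print(f"{name:>18}: {drain_ms:9.1f} ms to drain a {BUFFER_BYTES // 1024} KB buffer")
```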

You need to read up on TCP. An explanation here would take too long for me to write.

No, but always use 10/full whenever possible, because 10/half has an effective maximum bandwidth of about 3 Mb/s in each direction, while 10/full has an effective maximum of about 9 Mb/s. If both connections are 10/half you could have two bottlenecks and more dropped packets, meaning lower performance.

Latency is not the problem, because at 10 Mb/s each hop adds about 52 us (that's microseconds, millionths of a second, not milliseconds, thousandths of a second). As a comparison, a 5 Mb/s ADSL line adds at least 3 or 4 ms of delay, or 3000 to 4000 us (because of serialization delay and the distance the signal must travel). Everything is relative, so adding a few microseconds of delay is a drop in the bucket compared to the RTT of your slowest link.
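Those figures are essentially serialization delay (frame size divided by line rate); a quick sketch, using the usual 64-byte and 1500-byte Ethernet frame sizes as examples:

```python
# Serialization delay: the time just to clock one frame onto the wire. A
# store-and-forward hop adds at least this much latency per frame.
def serialization_delay_us(frame_bytes, rate_bps):
    return frame_bytes * 8 / rate_bps * 1e6

for frame in (64, 1500):
    for name, rate in (("10 Mb/s", 10e6), ("100 Mb/s", 100e6), ("5 Mb/s ADSL", 5e6)):
        print(f"{frame:>5}-byte frame at {name:>11}: "
              f"{serialization_delay_us(frame, rate):8.1f} us")
# A 64-byte frame at 10 Mb/s is ~51 us -- the "about 52 us per hop" above.
# Propagation over the local loop adds more on top of the ADSL figure.
```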
Reply to
Thrill5

For further reading, look up "TCP congestion window"!

Reply to
bod43

Reply to
JF Mezei

Once the window builds up to that size, then yes - but the window starts small and grows.

The drops can happen anywhere buffers build up - usually the first point where the onward link is slower will be the bottleneck, but there are plenty of cases where you get further restrictions downstream.

Ideally the TCP stacks at each end understand fast retransmit and selective ACK (SACK), which minimise the amount of data that gets resent.

With a good implementation at each end, the session doesn't have to "stall" for timeouts.

Note your setup is common: too big a burst drops packets at the first bottleneck, close to the sender. Since the hops to get to that bottleneck are LAN based (i.e. fast, low latency), it doesn't cost much to let that device throw packets away and recover locally, since you haven't yet used relatively expensive WAN bandwidth.

Probably, although there are some ways the modem (or the router) can exert "back pressure" to slow down the sender.

But in general, buffering and then overflow will happen anywhere the arriving traffic rate is higher than the onward link can carry.

No - potential pipe sizes / bandwidths / latency and buffering vary by several orders of magnitude so TCP has to adapt to current conditions for each session to be useful.

But - sessions do build up to fairly big windows and send big bursts of packets, and you can see repeated sets of dropped packets.

I used a sniffer on one recently where 64 KB "lumps" from Microsoft file sharing went from a GigE LAN to a 10M WAN - the burst is 40+ packets, but only 25 or so consistently "made it" on the first attempt.

With an XP stack, fast recovery resent the packets pretty quickly and we got close to wire rate.

With another IP stack (a TCP offload card in a server) that waited for a 1-second timeout on every burst, the effective throughput dropped by 90%...
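Rough arithmetic shows why the stall hurts so much; this is a simplification that assumes every 64 KB burst waits out the full 1-second timeout, which is roughly what the trace showed:

```python
# Why a 1-second retransmission timeout per 64 KB burst is so costly on a
# 10 Mb/s WAN. Simplified: ignores the retransmitted data itself.
BURST_BYTES = 64 * 1024
WAN_RATE = 10e6                                  # bits/s

send_s = BURST_BYTES * 8 / WAN_RATE              # ~52 ms to send one burst
good_bps = WAN_RATE                              # fast recovery: near wire rate
stalled_bps = BURST_BYTES * 8 / (send_s + 1.0)   # each burst waits out a 1 s RTO

print(f"fast recovery: ~{good_bps / 1e6:.1f} Mb/s, "
      f"1 s RTO per burst: ~{stalled_bps / 1e6:.2f} Mb/s "
      f"({100 * (1 - stalled_bps / good_bps):.0f}% lower)")
```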

Yes - basically the TCP stack uses the arrival times and intervals of ACKs to estimate the current round-trip time, and works out the timers that depend on it from that estimate. You really need to find a good explanation and read up on that.
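The standard estimator (RFC 6298) keeps a smoothed RTT and a smoothed RTT variance and derives the retransmission timeout from both; a minimal sketch:

```python
# Minimal sketch of the standard retransmission-timeout estimator (RFC 6298):
# keep a smoothed RTT (SRTT) and a smoothed RTT variance (RTTVAR), and set
# RTO = SRTT + 4 * RTTVAR, with a 1-second floor.
ALPHA, BETA = 1 / 8, 1 / 4        # standard smoothing gains from the RFC

def update(srtt, rttvar, sample):
    """Fold one RTT measurement (seconds) into the estimator; return new state."""
    if srtt is None:              # first measurement initialises both estimates
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = max(1.0, srtt + 4 * rttvar)
    return srtt, rttvar, rto

srtt = rttvar = rto = None
for sample in (0.080, 0.095, 0.070, 0.300, 0.090):   # example RTT samples
    srtt, rttvar, rto = update(srtt, rttvar, sample)
print(f"SRTT = {srtt * 1000:.0f} ms, RTO = {rto * 1000:.0f} ms")
```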

Latency isn't going to hurt, and 10M half duplex will be fine as long as collision detection is working properly.

The biggest risk is a duplex mismatch, which can really hurt throughput - the config needs to be consistent on each side of each Ethernet link.

Reply to
Stephen

The congestion window is calculated based on the observed RTT (round-trip time). The point of "windowing" in TCP is to improve the transmission rate while at the same time minimizing the amount of data that needs to be resent if a packet is lost. The congestion window is calculated as the number of packets that can be "in transit".
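One way to put a number on "in transit" is the bandwidth-delay product of the path; a quick sketch (the 50 ms RTT and 1460-byte segment size are assumed example values, not measurements from this thread):

```python
# Bandwidth-delay product: roughly how many segments can usefully be "in
# transit" on a path at once.
def segments_in_flight(rate_bps, rtt_s, mss_bytes=1460):
    return rate_bps * rtt_s / (8 * mss_bytes)

print(f"{segments_in_flight(800e3, 0.050):4.1f} segments (800 kb/s ADSL up, 50 ms RTT)")
print(f"{segments_in_flight(5e6, 0.050):4.1f} segments (5 Mb/s ADSL down, 50 ms RTT)")
# Windows much larger than this just sit in buffers along the path.
```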

I found this site to explain "the sliding window", but it's not that good because you can't set the RTT to less than 10. Try the demo with the window size as 10 and the RTT as 10. The "window size" is the congestion window, and at these settings you see the most efficient transfer with this limited demo.

Reply to
Thrill5

Once I realised that congestion window != receiver's window, all the explanations I had read fell into place.

Reply to
JF Mezei

Same here - but it took me *days* 'n' days to figure it out. The keys to TCP are in my view -

  1. It is *very* smart
  2. It has *many* subtleties
  3. It is *designed* to drop packets (data rate is increased until drops are achieved)
  4. The two stacks communicate about the network state without any signalling bits. It uses dropped packets, RTT... I forget.
  5. If you make sure you are using selective ack then even on high RTT links with gross error rates all will be pretty good:))

I think that might have been weeks 'n' weeks:)

Reply to
bod43

Oh and one more -

  1. It is **designed** to work on networks with dissimilar-rate links, i.e. 1G into ADSL is OK.
Reply to
bod43

And packet discards on the box that connects the ADSL to the GigE are **normal**!

Sam

Reply to
Sam Wilson
