I have the following setup:
    COMPUTERS  ROUTER(LAN)    ROUTER(WAN)  MODEM1  MODEM2
        |          |               |          |       |
    ================           ============================
        VLAN 10       [Switch]          VLAN 20
ROUTER has a fixed 10 Mbps/half-duplex interface for its WAN port. MODEM1 is fixed at 10 Mbps/half duplex (acts as backup). MODEM2 auto-negotiates 10/100 Mbps, half/full duplex.
ADSL service is 5 Mbps down / 800 kbps up.
The switch negotiates properly with each device.
QUESTION:
Say the window size is 64 KB (roughly 50 packets). When the computer starts sending data after the TCP connection has been established, is it correct to say that it will send a burst of ~50 packets at 100 Mbps non-stop to fill the window, and only after that will it moderate its sending rate to match the received ACKs?
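To show my math on "roughly 50 packets" (assuming a standard 1500-byte Ethernet MTU, so a ~1460-byte MSS; this ignores slow start, which would pace the opening differently):

```python
# How many full-size segments fit in a 64 KB window, and how long
# that burst lasts on the wire at 100 Mbps.
WINDOW = 64 * 1024          # advertised receive window, bytes
MSS = 1460                  # typical TCP payload per Ethernet frame
LINK_BPS = 100_000_000      # the computer's 100 Mbps link

segments = WINDOW // MSS + (1 if WINDOW % MSS else 0)
wire_bytes = segments * (MSS + 40 + 38)   # + TCP/IP headers + Ethernet framing
burst_ms = wire_bytes * 8 / LINK_BPS * 1000

print(segments)             # 45 segments, so "roughly 50"
print(round(burst_ms, 2))   # ~5.54 ms of back-to-back frames at line rate
```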
If the router's buffers are not large enough, it will drop packets from that initial 50-packet burst, right?
Similarly, the modem receives data over ten times faster than it can push it onto the ADSL line, so it would also lose plenty of packets in that initial burst.
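The rate mismatch I have in mind, assuming the whole 64 KB window arrives at the modem back-to-back at 10 Mbps and drains at the 800 kbps upstream rate:

```python
# Fill rate vs. drain rate at the modem.
WINDOW = 64 * 1024          # bytes arriving in one burst
IN_BPS = 10_000_000         # modem's LAN-side link, 10 Mbps
OUT_BPS = 800_000           # ADSL upstream

ratio = IN_BPS / OUT_BPS                      # arrives 12.5x faster than it drains
fill_ms = WINDOW * 8 / IN_BPS * 1000          # time to receive the burst
drain_ms = WINDOW * 8 / OUT_BPS * 1000        # time to push it up the DSL line

print(ratio)                # 12.5
print(round(fill_ms, 1))    # ~52.4 ms in
print(round(drain_ms, 1))   # ~655.4 ms out
```

So unless the modem can buffer most of the window, it has to drop a large share of that burst.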
Is it normal/expected that the first batch of packets in a TCP connection will experience a high packet loss rate? Or is the expectation that the devices along the way should be able to buffer the full window size?
While I am at it: is there a standard timeout/logic that a sender uses while waiting for an ACK, before it starts resending data from the last ACKed sequence number?
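For context, my rough understanding of the standard retransmission timer (the SRTT/RTTVAR estimator described in RFC 6298; this sketch is my own paraphrase, not the exact kernel code):

```python
# RFC 6298-style RTO: smoothed RTT plus 4x the RTT variance,
# with a conservative 1-second floor. On each timeout the RTO
# is doubled (exponential backoff) before retransmitting.
ALPHA, BETA = 1/8, 1/4      # smoothing gains from the RFC

def update_rto(srtt, rttvar, rtt_sample, g=0.001):
    """Return updated (srtt, rttvar, rto) from one RTT measurement, in seconds."""
    if srtt is None:                     # first measurement
        srtt, rttvar = rtt_sample, rtt_sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    rto = max(1.0, srtt + max(g, 4 * rttvar))   # 1 s minimum per the RFC
    return srtt, rttvar, rto

srtt, rttvar, rto = update_rto(None, None, 0.1)   # first 100 ms sample
print(rto)   # 0.1 + 4*0.05 = 0.3, clamped to the 1.0 s minimum
```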
In terms of the switch:
Since the router's WAN port is fixed at 10/half, and it talks to the modem through the switch: are there any performance advantages/disadvantages to having the modem at 100/full versus 10/half?
i.e.:

    ROUTER 10/h <-> [switch] <-> 10/h  MODEM
    ROUTER 10/h <-> [switch] <-> 100/f MODEM

My thinking is that by having the modem at full duplex, it would remove some latency and would let the switch worry about feeding the older router at the right time/rate.
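The latency saving I am picturing is just serialization delay (assuming a full 1538-byte frame on the wire: 1500 MTU + Ethernet header + FCS + framing):

```python
# Per-frame serialization delay on the switch-to-modem hop at each speed.
FRAME_BITS = 1538 * 8       # one full-size Ethernet frame on the wire

for name, bps in [("10 Mbps", 10_000_000), ("100 Mbps", 100_000_000)]:
    print(name, round(FRAME_BITS / bps * 1000, 3), "ms")
# 10 Mbps  -> ~1.23 ms per frame
# 100 Mbps -> ~0.123 ms per frame
```

Of course the ADSL upstream dominates end-to-end latency, so this is about a millisecond per frame at most, plus avoiding half-duplex collisions/backoff on that hop.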