AT&T Wireless data congestion possibly self-inflicted [Telecom]

We've all heard or read stories about how iPhone usage has overloaded the AT&T Wireless network, but it's likely that at least some of AT&T's problems are the result of configuration errors -- specifically, congestion collapse induced by misconfigured buffers in their mobile core network.

In early September, David Reed sent this interesting message to the IRTF's "end-to-end" email list, whose members include some world experts on Internet protocols. Over the next couple of days, there were more than 40 messages in related threads. While some of these experts were over-thinking the problem, if you are patient enough to read through the many messages, what emerges is clear. At least in the case David measured (from a hotel room in Chicago, while he had 5 bars of signal strength, using an AT&T Mercury 3G data modem in his laptop), the terrible throughput and extreme delays he experienced appear to result from overly large buffers in the routers and/or switches in AT&T's core network. Note: if you don't want to read all the list messages, the short summary is: >8 second ping times! What's more, the effect was bimodal: ping times were either under 200 ms or over 5 seconds.
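The effect is easy to probe from any client. Here is a minimal sketch in Python of that kind of measurement -- it assumes a Linux-style `ping` on the PATH, and the target host, sample count, and the 200 ms / 5 s thresholds are illustrative stand-ins, not David's actual test setup:

import re
import subprocess
from typing import Optional

HOST = "example.com"    # illustrative target, not the host David measured
SAMPLES = 100

def ping_once(host: str) -> Optional[float]:
    """Send one echo request and return the RTT in ms (None on loss).

    Uses Linux-style ping flags: -c 1 (one probe), -W 10 (10 s timeout).
    """
    out = subprocess.run(["ping", "-c", "1", "-W", "10", host],
                         capture_output=True, text=True).stdout
    m = re.search(r"time=([\d.]+) ms", out)
    return float(m.group(1)) if m else None

rtts = [r for r in (ping_once(HOST) for _ in range(SAMPLES)) if r is not None]
fast = sum(1 for r in rtts if r < 200)     # the "healthy" mode
slow = sum(1 for r in rtts if r > 5000)    # the buffer-saturated mode
print(f"{len(rtts)} replies: {fast} under 200 ms, {slow} over 5 s, "
      f"{len(rtts) - fast - slow} in between")

If the link is suffering from over-buffering, the "in between" bucket stays nearly empty while the other two fill up -- the bimodal signature described above.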

[article continues at following URL]


Reply to
Thad Floryan

Who wants to move forward into the past by using positive end-to-end control, smaller buffers (for improved end-to-end latency), and request-to-send/clear-to-send hardware signalling at the end-user interface?

Reply to
Jack Myers

Could this be explained in layman's terms?

That is, what is "end-to-end control", and what is the difference between "positive" and non-positive control of it?

What is "end to end latency" so that we want to improve it?

Why is this considered "into the past"? Is this stuff good or bad?

What is "request-to-send/clear-to-send hardware signalling" and if not accomplished as described, what are other options for doing so, and are they better or worse?

Thanks!

[public replies, please]
Reply to
hancock4

.........

Pre-empting an anticipated avalanche of replies: "end-to-end latency" is the time it takes for whatever you send at one end of a link to reach the other end.

In digital circuits this is the sum of the physical time it takes to send one data bit across the path AND the time it takes to assemble ALL of the bits you want to send into each packet that travels as an individual entity across the path (referring to connections with a packet-switched component, of course).

Phone users will be familiar with the latency of geosynchronous satellite circuits compared to land-line connections (either local or international), and the general rule for any real-time communication (like voice) is: the less latency, the better.

In data connections, smaller packets reduce overall latency, but at the cost of using the available bandwidth inefficiently compared to large (and therefore higher-latency) packets, so any data carrier will try to squeeze as much efficiency as possible out of a highly loaded link by using as large a packet size as possible.
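To put rough numbers on that trade-off, here is a quick back-of-the-envelope calculation; the 2 Mb/s link rate and the 40-byte header (a typical IP plus TCP overhead) are assumed figures, not anything from the thread:

LINK_BPS = 2_000_000    # assumed link rate, bits per second
HEADER_BYTES = 40       # assumed per-packet overhead (typical IP + TCP headers)

for payload in (64, 256, 1500, 9000):
    total_bits = (payload + HEADER_BYTES) * 8
    serialization_ms = total_bits / LINK_BPS * 1000   # time to clock one packet onto the wire
    efficiency = payload / (payload + HEADER_BYTES)   # fraction of bits that are payload
    print(f"{payload:5d} B payload: {serialization_ms:6.2f} ms/packet, "
          f"{efficiency:.1%} payload efficiency")

Small packets get on the wire quickly but waste a large fraction of the link on headers; big packets amortize the headers but each one monopolizes the link for longer.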

It's a compromise: the ATM cell size is a classic example, bigger than what the voice people wanted but smaller than the data people's preference. ATM cells (fixed at 53 bytes: a 48-byte payload plus a 5-byte header) aren't the most efficient things for data carriage and could well have been smaller for less voice latency, but they still work well enough.

If packets (of any sort) end up being buffered, then the time it takes to clear the buffer adds to the overall latency as well, because every queued packet spends extra time sitting in the overall connection/circuit.
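That last point is exactly what the AT&T measurements suggest. A trivial calculation shows the scale -- the 1 Mb/s effective 3G rate and the buffer depths here are illustrative assumptions, chosen only to show how a deep buffer turns into multi-second pings:

LINK_BPS = 1_000_000    # assumed effective downlink rate for a loaded 3G cell

for buffer_kb in (32, 128, 512, 1024):
    queued_bits = buffer_kb * 1024 * 8
    drain_s = queued_bits / LINK_BPS    # time for a full buffer to drain ahead of your packet
    print(f"{buffer_kb:5d} KB of buffer -> +{drain_s:.2f} s added to every round trip")

At 1 Mb/s, a 1 MB buffer adds more than 8 seconds of queueing delay, which is right in line with the >8 second pings reported up-thread.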

-- Regards, David.

David Clayton, Melbourne, Victoria, Australia.
Knowledge is a measure of how many answers you have; intelligence is a measure of how many questions you have.

Reply to
David Clayton

I suspect a possible troll, since your postings here indicate you really do know this stuff.

The Internet Protocol (IP) is a "best-effort" network layer: dump bits in at one end and accept the possibility that some of the bits will get lost along the way. The bits are important -- why else would we pay the big bucks to send and receive them? -- so we are willing to pay in terms of time, throughput, or bandwidth to detect and correct these transmission errors.

Congestion, which is due to instantaneous peaks and valleys in the aggregate traffic flow, causes some of the data loss. Buffering helps to smooth out the flow: it reduces data loss (fewer retransmission requests, better throughput) at the expense of possibly increasing delays (network latency). The complaints were about variable delays, and the best-effort IP layer is the ultimate cause. Some positive alternatives are reservations (like airline seats) and access metering (like freeway on-ramps).

Some of the original data communications protocols, which have been temporarily displaced by IP, had hardware or software reservation schemes or access metering. It's a way for the network to signal its status and capabilities back to the end users.
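As a concrete (if simplified) illustration of access metering, here is a minimal token-bucket sketch in Python; it is a generic scheme, not what any particular pre-IP protocol actually implemented:

import time

class TokenBucket:
    """Minimal token-bucket meter: admit traffic only while tokens last."""

    def __init__(self, rate_per_s: float, burst: float):
        self.rate = rate_per_s       # steady admission rate
        self.capacity = burst        # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def admit(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True              # "green light": clear to send
        return False                 # hold at the on-ramp

bucket = TokenBucket(rate_per_s=10, burst=5)
admitted = sum(bucket.admit() for _ in range(100))
print(f"{admitted} of 100 back-to-back packets admitted immediately")

The bucket is the metering light: traffic is admitted at a steady rate with a bounded burst, so the network's capacity is signalled back to the sender as an explicit yes/no instead of silent packet loss.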

It's just engineering. There's no intrinsic good or bad here.

RTS analogy: A car stops over the sensor loop on a freeway on ramp. CTS analogy: The traffic metering light turns green.

Access metering advantage: Throughput is optimized, delays minimized, fewer cars "lost" in transit.

Access metering disadvantage: Neutrality suffers because the "haves" who are already on the freeway have an advantage over the "have nots" who are queued up at the entrance or taking a longer slower route.

Conclusion: something more than simple metering is required [to optimize] transit [times].
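In software terms, the RTS/CTS handshake is just backpressure. Here is a sketch using a bounded queue as the metering light -- the queue size and service rate are arbitrary illustrations:

import queue
import threading
import time

link = queue.Queue(maxsize=4)    # small downstream buffer: the "metering light"

def receiver() -> None:
    """Slow consumer: pretend each packet takes 10 ms to forward."""
    while True:
        item = link.get()
        if item is None:         # sentinel: sender is done
            break
        time.sleep(0.01)

t = threading.Thread(target=receiver)
t.start()

for n in range(20):
    link.put(n)    # put() blocks while the buffer is full: RTS with no CTS yet
link.put(None)
t.join()
print("all 20 packets delivered; the sender was paced by the receiver")

The sender physically cannot outrun the receiver, so nothing queues up beyond the four-slot buffer and nothing is dropped; the trade-off is that the sender's own pace is now tied to the receiver's speed.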

Is my frustration with the academic systems engineers on a recent networking project coming through yet?

Reply to
Jack Myers
