FCC Invites Experiments To Test Effects of All-IP Telephone Network. [telecom]

By Paul J. Feldman, CommLawBlog, February 3, 2014.

| Commission seeks data for critical policy dialogue; coming changes
| may particularly affect smaller carriers - and their customers.
|
| Major changes are coming to the telephone system that provides the
| interconnected communications system on which American society has
| long depended. For more than 125 years that system has been based
| on a circuit-switched, mostly copper-wire-based public switched
| telephone network (PSTN) -- nowadays sometimes called a "Time
| Division Multiplex (TDM)" network. But networks based on Internet
| Protocol (IP) technology have begun to replace the PSTN. The FCC
| has now expressly acknowledged that "the global multimedia
| communications infrastructure of the future" will consist of all-IP
| networks very different from the circuit-switched technology we
| have been used to since Alexander Graham Bell.
|
| And with that acknowledgement, the FCC has now started to take
| steps to identify and assess the effects that the fundamental
| technological overhaul of our nationwide phone system is likely to
| have on phone companies, consumers, and the FCC's own ability to
| fulfill its statutory responsibilities.

Neal McLain

AFAIK, there is no specification for transit time in IP. The /whole/ /idea/ of the ARPANET was that a reliable /network/ could be constructed from /unreliable/ links, using a protocol that would retry and/or reroute failed packets.

In other words, unlike ATM, the Internet was never intended for "real time" traffic. If the PSTN is going to morph into a Public /Routed/ Telephone /Virtual/ Network, then there will have to be a new set of protocols that can deliver something akin to the "virtual circuit" technology we're using now. Either that, or we'll all have to get used to getting "almost as good as a cell call" quality on our landlines.

Bill

No, real-time traffic was considered out of scope. There was a separate protocol, ST (the STream Protocol), which was intended to serve that purpose, but it saw little deployment, and the ATM FUD killed off any desire to implement it.

In later years -- the mid-1990s, when I was working on it -- people realized that you didn't need a circuit-based network technology to have reliable multimedia. What you did need was two things:

1) Adaptive protocols that could cope with jitter by buffering playout by a few milliseconds. RTP/UDP (and the protocols built on top of it, like SIP) essentially solves this problem, particularly if you have the bandwidth budget for some forward error correction. (A toy sketch of this playout buffering follows this list.)

2) A mechanism for network elements (routers, switches) to allocate resources in a way that provides for the differing requirements of best-effort and real-time traffic.
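
As a rough illustration of the playout-buffer idea in (1) -- not taken from any real RTP stack, and with made-up numbers throughout -- consider this sketch:

    import heapq
    import random

    PACKET_INTERVAL = 0.020   # one 20 ms voice frame per packet
    PLAYOUT_DELAY = 0.040     # receiver buffers 40 ms before playing frame 0

    def simulate_playout(num_packets=50, seed=1):
        # Toy model: frames leave the sender every PACKET_INTERVAL but
        # arrive with random one-way jitter; the receiver plays frame i at
        # time i * PACKET_INTERVAL + PLAYOUT_DELAY, so any frame whose
        # jitter is under PLAYOUT_DELAY is on hand when its turn comes.
        rng = random.Random(seed)
        arrivals = [(i * PACKET_INTERVAL + rng.uniform(0.0, 0.060), i)
                    for i in range(num_packets)]
        heapq.heapify(arrivals)   # receiver sees frames in arrival order

        late = 0
        while arrivals:
            arrival_time, seq = heapq.heappop(arrivals)
            deadline = seq * PACKET_INTERVAL + PLAYOUT_DELAY
            if arrival_time > deadline:
                late += 1         # a real codec would conceal this frame
        print(f"{late} of {num_packets} frames missed their playout deadline")

    simulate_playout()

Raise PLAYOUT_DELAY and fewer frames miss their deadline; the price is end-to-end latency, which is why a few tens of milliseconds is the usual compromise for voice.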

The people I worked for called this "ISIP" -- "Integrated Services Internet Protocol". Two different approaches were developed for requirement (2): a soft-state-based resource reservation protocol, RSVP (which IIRC was mostly the work of Bob Braden and colleagues at USC Information Sciences Institute), and a set of "differentiated services" specifications that allowed administrators to describe different applications' requirements in a way that could be aggregated all the way up to the level of tier-1 network service providers while preserving the specified behavior for each individual stream (provided that the network was provisioned adequately).
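
The differentiated-services half of (2) survives today as DSCP marking in the IP header. As a rough sketch -- assuming a Unix-like host, with a documentation-only address and port -- an application can ask for the Expedited Forwarding treatment that voice traffic conventionally gets:

    import socket

    # DSCP "Expedited Forwarding" (EF, decimal 46) occupies the top six
    # bits of the IP TOS byte, so the byte value is 46 << 2 = 0xB8.
    DSCP_EF = 46 << 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Mark every outgoing datagram with EF.  Diffserv-aware routers can
    # queue such packets ahead of best-effort traffic; routers that are
    # not simply ignore the bits.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

    # One 160-byte payload = one 20 ms G.711 voice frame.  192.0.2.0/24
    # is reserved for documentation, so this goes nowhere real.
    sock.sendto(b"\x00" * 160, ("192.0.2.1", 5004))

As the paragraph above says, the marking only buys anything if the network along the way is provisioned to honor it.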

The thing they didn't figure out was the economics. Nobody understood the economics of the Internet in the mid-90s, not even the ISPs. Dave Clark, who heads the research group I used to work for, has spent much of the last fifteen years trying to understand the economics so that market structures can be designed that give NSPs the necessary incentive to provide these services.

One of the most significant issues is determining who actually derives the value in any particular communication; this is the question at the center of the Network Neutrality debate. If I'm paying Netflix to stream me a movie, then it's clear in which direction the money flows, and there does not seem to be any reason why Netflix should not be able to pay my network provider to get service that I find more satisfactory. But if I'm using peer-to-peer SIP, for example, they have no way of knowing who benefits from the transaction -- if indeed anybody does.

Of course, this is all irrelevant to the issue of running voice over private IP circuits within a carrier.

That whole business about "rerouting packets" is a myth, and has always been one, so far as I can tell. If a packet doesn't reach the intended recipient, it's the sender's responsibility, not the network's, to try again. Maybe there will be a different route. Maybe there won't. Maybe it will go through. Maybe it was dropped due to congestion, and the sender should wait before trying again. Neither sender nor recipient has any way of knowing.[1]
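
To make that concrete, here is a stop-and-wait sketch over UDP (the peer is hypothetical; real transports like TCP do the same job with sequence numbers and adaptive timers). All of the recovery machinery sits in the sender:

    import socket

    def send_reliably(payload, peer, retries=5, timeout=1.0):
        # Transmit, wait for an acknowledgement, and retry on silence.
        # The network never resends anything; if the packet was dropped,
        # only this loop notices, and only by timing out.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(payload, peer)
            try:
                ack, _ = sock.recvfrom(64)
                if ack == b"ACK":
                    return True    # recipient confirmed delivery
            except socket.timeout:
                continue           # lost? congested? rerouted? can't tell
        return False               # give up; the application decides next

    # 192.0.2.0/24 is reserved for documentation, so this will time out.
    send_reliably(b"hello", ("192.0.2.1", 9999))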

There is some experience with reliable, or semi-reliable, link-layer networks, and the general conclusion is that they are usually a bad idea, because of the end-to-end principle: the endpoints have to check for and recover from loss anyway, so duplicating that machinery inside the network adds delay and complexity without letting the endpoints skip the check. One exception has proved to be wireless networks, which have a loss rate (and a "physics problem") far greater than fiber- or copper-based technologies do; there, link-layer retransmissions are required to get the loss rate down to the point where TCP performance is acceptable.

-GAWollman

[1] Well, there is something called Explicit Congestion Notification (ECN), which is specified, and is implemented in many operating systems, but has fairly limited implementation in the network devices (home gateways and at-best-duopoly last-mile access devices) where most congestion occurs.
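
For what it's worth, declaring ECN capability on a datagram socket is a one-line marking job (sketch assumes a Unix-like host and a documentation-only peer; actually observing the CE mark on receipt takes more work, via IP_RECVTOS and recvmsg):

    import socket

    # The low two bits of the TOS byte carry the ECN field (RFC 3168):
    # 00 = not ECN-capable, 10 = ECT(0), 01 = ECT(1), 11 = CE.
    ECT0 = 0b10

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Declare the flow ECN-capable; a router doing ECN-aware queue
    # management may then set CE on the packet instead of dropping it.
    # As noted above, most last-mile gear never does.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
    sock.sendto(b"probe", ("192.0.2.1", 9999))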