my future telephones [telecom]

For many years I've used classic Western Electric phones throughout my house. Where I need multiple lines I have 2564 sets, though I do not have any 1A2 support hardware. I really like the ergonomics and sound quality of these phones.

Recently I've been installing VOIP. All of my lines are connected to FXO ports and Asterisk replaces dedicated hardware including answering machines, alarm dialers, CID boxes, etc. (adding flexibility in the process, though probably reducing overall reliability). I have an FXS port connected to a line button on a 2564 set and I am satisfied with the voice quality for strictly internal purposes (all running G.711u) including connecting to an outside POTS line.

Asterisk is happy to provide various features (transfer, park, etc.) by inband DTMF signaling, but enabling such means that a call is no longer transparent and accessing interactive voice response systems becomes a problem. There may be ways to do things with hook flashes but feature buttons would be better.

Is there any chance of finding a VOIP feature phone with quality comparable to my WE sets? All the ones I've tried in offices have been disappointing, but then so have most of the "modern" pre-VOIP office phones.

One possibility is to build an adapter to take advantage of the buttons on the 2564 and make it into a VOIP phone. Building the audio part is more than I can do, so unless there is an open source POTS/VOIP FXS adapter I would have to use an existing one in conjunction with an entirely separate interface between the lights/buttons and Asterisk. It would be really cool to have a little box with a 50-pin Amphenol connector and an RJ45 that turns a 2564 into a VOIP feature phone, but I suspect the market isn't huge. :(

Dan Lanciani ddl@danlan.*com

****** Moderator's Note *****

Please explain what "G.711u" means, and why you chose that option. I'd also appreciate you telling us how much it cost.

Bill Horne

Reply to
Dan Lanciani

Per :

ITU G.711

G.711 is a high bit rate (64 Kbps) ITU standard codec. It is the native language of the modern digital telephone network.

Although formally standardised in 1988, the G.711 PCM codec is the granddaddy of digital telephony. Invented by the Bell System and introduced in the early '70s, the T1 digital trunk employed an 8-bit uncompressed Pulse Code Modulation encoding scheme with a sample rate of 8000 samples per second. This allowed for a (theoretical) maximum voice bandwidth of 4000 Hz. A T1 trunk carries 24 digital PCM channels multiplexed together. The improved European E1 standard carries 30 channels.

There are two versions: A-law and U-law. U-law is native to the T1 standard used in North America and Japan; A-law is native to the E1 standard used in the rest of the world. The difference lies in how the analog signal is quantized: in both schemes the signal is sampled not linearly but logarithmically. A-law provides more dynamic range than U-law, giving a less 'fuzzy' sound because quantization artifacts are better suppressed.
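The logarithmic companding described above is easy to see in code. Below is a minimal sketch of G.711 µ-law encode/decode for 16-bit linear PCM samples (the function names `ulaw_encode`/`ulaw_decode` and the structure are my own; this follows the standard bias-and-segment method, not any particular library's implementation):

```python
BIAS = 0x84   # 132: offset added before finding the segment
CLIP = 32635  # largest linear magnitude the encoder accepts

def ulaw_encode(sample: int) -> int:
    """Encode one 16-bit linear PCM sample to an 8-bit mu-law byte."""
    sign = 0x80 if sample < 0 else 0
    if sample < 0:
        sample = -sample
    sample = min(sample, CLIP) + BIAS
    # Find the segment (exponent): position of the highest set bit above bit 7.
    exponent, mask = 7, 0x4000
    while exponent > 0 and not (sample & mask):
        exponent -= 1
        mask >>= 1
    mantissa = (sample >> (exponent + 3)) & 0x0F
    return ~(sign | (exponent << 4) | mantissa) & 0xFF  # mu-law bytes are inverted

def ulaw_decode(code: int) -> int:
    """Decode an 8-bit mu-law byte back to a 16-bit linear PCM sample."""
    code = ~code & 0xFF
    sign = code & 0x80
    exponent = (code >> 4) & 0x07
    mantissa = code & 0x0F
    sample = (((mantissa << 3) + BIAS) << exponent) - BIAS
    return -sample if sign else sample
```

Note that silence (linear 0) encodes to 0xFF, and the quantization step doubles with each segment, which is exactly the logarithmic behavior that makes 8 bits sound like considerably more.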

Using G.711 for VoIP gives the best voice quality: since it uses no compression and is the same codec used by the PSTN and ISDN lines, it sounds just like using a regular or ISDN phone. It also has the lowest latency (lag), because no processing power is spent on compression. The downside is that it takes more bandwidth than other codecs, up to 84 Kbps including all IP/UDP/RTP overhead. However, with increasing broadband bandwidth, this should not be a problem.
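The on-the-wire figure is straightforward to reconstruct. A sketch, assuming standard header sizes (12-byte RTP, 8-byte UDP, 20-byte IPv4) and a typical 20 ms packet interval; the quoted 84 Kbps presumably assumed slightly different framing, and the `g711_bandwidth_kbps` name and the optional link-layer parameter are mine:

```python
def g711_bandwidth_kbps(ptime_ms: float = 20.0, l2_overhead_bytes: int = 0) -> float:
    """Per-direction bandwidth of a G.711 RTP stream, in kbit/s."""
    payload = int(8000 * ptime_ms / 1000)   # one byte per 8 kHz sample
    rtp, udp, ipv4 = 12, 8, 20              # header sizes in bytes
    packet = payload + rtp + udp + ipv4 + l2_overhead_bytes
    packets_per_sec = 1000 / ptime_ms
    return packet * 8 * packets_per_sec / 1000

# 20 ms packets: 160 payload + 40 header bytes, 50 packets/s -> 80 kbit/s of IP traffic.
# Adding 38 bytes of Ethernet framing per packet pushes that to about 95 kbit/s.
```

Shrinking the packet interval raises the bandwidth (the fixed 40-byte header is amortized over fewer samples), which is exactly the overhead trade-off discussed later in this thread.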

G.711 is supported by most VoIP providers.

Reply to
Thad Floryan

G.711 is an ITU-T standard for audio companding, used primarily in telephony. The standard was released for use in 1972. Its formal name is Pulse code modulation (PCM) of voice frequencies. It is a required standard in many technologies, for example the H.320 and H.323 specifications. It can also be used in one of the methods for fax communication over IP networks (as defined in the T.38 specification).

G.711 represents logarithmic pulse-code modulation (PCM) samples for signals of voice frequencies, sampled at the rate of 8000 samples/second.

G.711.1 is an extension to G.711, published as ITU-T Recommendation G.711.1 in March 2008. Its formal name is Wideband embedded extension for G.711 pulse code modulation.

G.711, also known as Pulse Code Modulation (PCM), is a very commonly used waveform codec. G.711 uses a sampling rate of 8,000 samples per second, with a tolerance on that rate of 50 parts per million (ppm). Non-uniform (logarithmic) quantization with 8 bits per sample is used, resulting in a 64 kbit/s bit rate. There are two slightly different versions: µ-law, used primarily in North America, and A-law, used in most other countries outside North America.

Reply to
T
*Snip*

I used to work on an Ericsson switch. After it had been in service for 6 months or so, someone figured out that the codecs had been set to a-law instead of u-law (this was a North American market). I had to show up on site during the maintenance window while a switch engineer made the change. Immediately after the change was ordered, he called me up & asked how he sounded; I'll be damned if I could hear any difference between the two. :\

Reply to
Howard Eisenhauer

[Moderator snip]

The public switched telephone network used circuit switching, so a sample was transmitted every 125 microseconds and, because of circuit switching, each sample had the same delay reaching the other end. The major delay in VoIP networks is the packetizing delay, as multiple bytes (samples) are included in each packet. Additionally, a large buffer is required at the receiving end to accommodate variable delay in packet delivery over the non-circuit-switched network. This was a big deal with ATM, as the voice folks wanted very short packets and the data folks wanted very long ones. In IP networks the data guys won. The compression processing is not the big factor in the delay between the two parties; the packetizing and the variable network delay are.
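The packetizing arithmetic here is worth making explicit: at 8000 samples per second, the delay to fill a packet is just the sample count divided by the rate. A small sketch (the function name is mine) comparing the PSTN's one-sample "packets" with an ATM cell's 48-byte payload and a typical 160-sample VoIP packet:

```python
SAMPLE_RATE = 8000  # G.711 samples per second, one byte each

def packetization_delay_ms(samples_per_packet: int) -> float:
    """Time spent accumulating one packet's worth of 8 kHz samples."""
    return samples_per_packet * 1000 / SAMPLE_RATE

# PSTN circuit switching: effectively 1 sample at a time  -> 0.125 ms
# ATM cell payload: 48 samples                             -> 6 ms
# Typical VoIP RTP packet: 160 samples (20 ms of speech)   -> 20 ms
```

And that 20 ms is only the sending side; the receiving jitter buffer, typically sized at one to a few packet intervals, adds a comparable amount again, which is why VoIP starts a call tens of milliseconds behind a circuit-switched connection before any network transit time is counted.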

Having said that, who cares? The answer is in user tolerance to echo, which will be present if one of the parties has a hybrid in the circuit (there's one in every conventional phone set). User tolerance to echo depends on two factors, the loudness and the delay. VoIP always has a longer delay than the PSTN, thus echo is more of a concern. If the round trip delay is long enough (satellite circuits come to mind...), the delay itself becomes bothersome even in the absence of echo.

In short, the need for a big buffer at the receive end to accommodate variable packet delays, plus the packetizing process itself (which has to accumulate multiple samples before transmitting), leaves VoIP very susceptible to delay issues, including echo problems. VoIP providers often use various network "tricks", such as giving voice packets priority through routers, to minimize delay on voice circuits and reduce these impairments.

Of course, none of this is to deny that G.711 has better voice quality than lower-rate compression schemes. The standard measure is Mean Opinion Score, a one-to-five scale of circuit quality based on user testing. G.711 is the "gold" standard in these cases, with an MOS greater than four, which the Bell System considered "toll quality". Circuits with an MOS of 3 or so are usable; hence the heavy compression used by cell phones and by VoIP circuits, which have very short analog sections.
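MOS figures like these are often estimated rather than measured, using the ITU-T G.107 "E-model": a transmission rating factor R (0-100, where delay, loss, and codec impairments subtract from a baseline around 93 for clean G.711) is mapped to an estimated MOS. A sketch of that standard mapping (the function name is mine; the polynomial is the one given in G.107):

```python
def r_to_mos(r: float) -> float:
    """Map an E-model R factor (ITU-T G.107) to an estimated MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

G.711 over a clean network rates roughly R = 93, which this maps to an MOS around 4.4; an R in the 50-60 range lands near the MOS-3 "usable" floor mentioned above.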

ET

***** Moderator's Note *****

Thanks for your contribution. I can tell you've been "in the trenches" of VoIP, so I'll add some more questions to my first.

  1. Why does VoIP have to be delayed by packetization? Can't the originating station simply send a lot of tiny packets?

  2. I was told that ATM cells are sized to prevent echo entirely: is that true? What is the "official" delay point at which humans perceive echo?

  3. What process is used to mark VoIP packets for priority? I had thought that there wasn't any specification for minimum transit time in the IP protocol, so if routers are able to identify VoIP traffic and prioritize it, I'd like to know how it's being done.

  4. What is the MOS of an ISDN BRI line? I ask because some users are very sensitive to voice quality issues, and they tend to be much better satisfied with ISDN connections than with POTS.

Bill Horne

Reply to
Eric Tappert

Bill,

The reason for the added delay due to packetizing is simply that a packet contains multiple samples, which have to be accumulated in a buffer before being sent. There is considerable overhead in an IP packet, so short packets are undesirable: they use more bandwidth than longer packets for the same data throughput.

ATM is a switched-circuit technology with fixed-length cells of 53 bytes, so the packetization delay is only 48 samples, or 6 milliseconds. That packet size is too short for relatively efficient use of an IP network. The delay jitter to the far end is much less in a switched-circuit network, where the packets (cells in this case) always traverse the same path, so the receive buffer can be shorter (only a couple of cells). IP networks route each packet separately, so there is no control over the transit time.

The perception of echo is a function of loudness and delay; very short delays are perceived as sidetone and not considered an impairment. Generally, echo control devices are not used on circuits less than a few hundred miles in length (I seem to recall 500 miles as the minimum for echo canceler installation in the PSTN). On very long delays, the echo gets really bothersome. The magic point is about 40 dB of echo return loss; past that, echo ceases to be the issue and the delay in response from the other side becomes the annoying factor on very long transit delays. Unfortunately, G.711 codecs have an S/N of about 35 dB, so network echo cancelers need some form of additional suppression (called the non-linear processor...) that adds its own impairments.

There are extensions to the IP protocol (like the RSVP protocol) that allow identification of traffic priority, and VoIP providers usually use them on their networks to minimize delay jitter. They are also necessary for video, BTW.
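RSVP signals per-flow reservations and needs router support end to end; the simpler mechanism widely used alongside it is DiffServ, where the sender marks each packet's priority in the IP TOS byte and routers honor the mark per hop. A minimal sketch using standard POSIX/Linux socket options (the DSCP value 46, "Expedited Forwarding", is the code point conventionally used for voice; this only marks packets, and whether any router honors it is up to the network):

```python
import socket

DSCP_EF = 46  # Expedited Forwarding: the DiffServ code point commonly used for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The TOS byte carries the 6-bit DSCP in its upper bits, so shift left by 2.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Every datagram sent on this socket now goes out with DSCP 46 in its IP header.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

RTP stacks and SIP phones do essentially this on their media sockets, which is what lets a queueing discipline downstream pick the voice packets out.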

The advantage of BRI is that the circuit is "4-wire" from the origin, so there is no echo path due to hybrids in the (non-existent) analog loop. Of course, if the call connects to an analog loop, echo is still a problem due to reflections from the far end. In this case the called party has no echo path (except the very short one from the phone to the hybrid on the CO line card, which would be interpreted as sidetone anyway...), but the BRI user would get an echo return from the analog loop at the other end. On an ISDN-to-ISDN call, the quality of service is generally controlled by the codec, which is G.711 compliant, so that's the best you can get. Digital all the way is the way to go...

As an aside, the military had a private network (AUTOVON) with 4-wire loops (and phones) and no hybrids. The idea was to eliminate the echo generation components so that circuits could be strung together in any old fashion in wartime without regard to echo control.

Hope this helps.

ET

Reply to
Eric Tappert

VOIP noob question: if a VOIP provider is going to interoperate with an ILEC (or other CLECs who have to interoperate with an ILEC), wouldn't the samples have to be in G.711 format anyway? If so, wouldn't the question then be, do we want to save on bandwidth over our 'internal' network and convert at the interchange points, or to carry the data everywhere in that format for convenience and give our customers the codec quality they've come to expect?

Reply to
Geoffrey Welsh

At every point where packets are switched, they may be directed to a link that is not idle, so outbound packets are queued. Using a variety of mechanisms (including looking into the IP datagrams to see if you can recognize attributes of time-sensitive protocols) forwarding devices MAY choose to insert the datagram not at the end of the queue like most traffic but at (or near) the front, minimizing added latency.
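The queue-jumping described above can be modeled in a few lines. A toy sketch of strict-priority egress queueing (the class and field names are hypothetical; real routers use more elaborate classifiers and usually weighted rather than strict priority, to keep bulk traffic from starving):

```python
from collections import deque

class PriorityEgressQueue:
    """Toy model of an egress port that serves voice packets before bulk traffic."""

    def __init__(self):
        self.voice = deque()
        self.bulk = deque()

    def enqueue(self, packet: dict) -> None:
        # Classify on a (hypothetical) 'dscp' field: EF-marked packets jump the line.
        (self.voice if packet.get("dscp") == 46 else self.bulk).append(packet)

    def dequeue(self):
        # Strict priority: bulk is served only when no voice packet is waiting.
        if self.voice:
            return self.voice.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None
```

A voice packet arriving behind a long run of bulk packets is still dequeued first, which is precisely how the forwarding device minimizes the latency it adds, without eliminating jitter across different paths.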

As Eric points out, since there is no guarantee that every datagram that makes up a connection follows the same path, this does not eliminate jitter.

As people far wiser than I have pointed out, every network treats "quality of service" features differently (or not at all), so you should not count on any help when traversing the internet at large.

Reply to
Geoffrey Welsh

My experience with VoIP is limited to Asterisk; I didn't buy the Ooma thing last month (money is tight).

For the systems I've set up, a PRI is ordered and run from the CO to the client site. Here in California it's basically fiber now to a wiring closet, then wire to the server-room computer, which has a PCI card (the model number escapes me; it's been a few years) that is the client's endpoint of the PRI. Think of the PRI as a T1. So, yes, G.711 is used for the interface to the PSTN.

Internally at the client site, I don't specifically know whether G.711 is being used -- it's a function of which instruments are on the VoIP LAN. Cisco 7960 VoIP phones are typically used, and their voice quality is excellent even using a headset plugged into their backs.

Asterisk voice mail is saved in several formats, one of which is PCM, and that sort of implies G.711.

Looking right now at my old AT&T 3B1 manuals and the Voice Power manuals (noting I still have 3 functional systems), it looks like G.711, but I don't specifically see that nomenclature. Audio is encoded at 64 kbps, and it's definitely claimed as µ-law with no compression in the Voice Power Programmer Applications Reference manual.

Hmmm, haven't looked at these manuals in a while, and it appears one can almost create a miniature CO with the 3B1s and the Voice Power cards (with up to 7 Voice Power cards per 3B1).

Reply to
Thad Floryan

The thing is, actually doing the compression and decompression between networks is very easy. Compute power is very cheap today.

Also, the proliferation of cellphones has decreased the public's expectation of telephone line quality.

--scott

Reply to
Scott Dorsey

There's a variety of hacks. The easiest is to observe that most VoIP traffic uses UDP port 5060 so the router can move those packets to the head of the queue. That won't work so well in backbone routers, where you have to assume most of the traffic is hostile, but it works pretty well on gateway routers like the one on your DSL line or cable modem.

R's, John

***** Moderator's Note *****

What's the definition of "hostile" traffic?

Reply to
John Levine

Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.