I repeat, "FALSE TO FACT".
> First, there is _NO_ repetitive 'scanning of all lines' for
> on-/off-hook status. That approach is TOO *DAMN* EXPENSIVE (in terms
> of resource consumption) to be practical.
> I'm not going to go into all the gory details, but the basic outline of
> switch architecture is that it has some functions that *must* be done
> at specific time intervals. These are 'real-time' tasks. Each of these
> tasks has the exclusive, and *un-interruptible* use of the CPU until it
> finishes its processing. Obviously, these routines are written to use
> as little CPU as possible, and return control to the scheduler. "Bigger"
> real-time tasks are broken down into "whatever" number of smaller pieces
> as are needed to get the execution time of each individual piece under
> the size of the real-time scheduling 'slot'.
I have no knowledge of the RTOS or operating code in WECO switches. However, I have programmed hard real-time systems, and know that the best use of slots is for coding shared DSP tasks (such as filters) which run to completion or exit early to give more time to background tasks. Pulse decoding, whether for bit-banging a serial interface or interpreting dial pulses from a telephone line, can be handled by a properly managed priority-interrupt system and does not consume real-time slots.

I can't imagine that modern switch hardware would have any overhead issues with dial-pulse decoding. Even 'asterisk' supports it, without caveats, on FXS hardware that detects it. Any perceived cost to an operating company regarding time to complete a call is also probably a red herring in today's environments.

BTW, a polled environment is more deterministic, and may well be the method of choice for scanning lines in a truly hard real-time implementation; with modern hardware it may well require fewer machine cycles than an interrupt-driven method (it would be my choice if designing a switch).
N.B. All of the automated environmental alert boxes that I have, from a variety of vendors, use pulse dialing and do not even offer DTMF.
***** Moderator's Note *****
I'm sorry to say that I don't know the internal architecture of the #5 ESS or the DMS-100. I invite CO Engineers to clarify the design goals and tradeoffs involved here.
The Bell Labs introductory engineering textbook states that lines were scanned for a change in _status_ in the No. 1/1A ESS; "expensive" or not, that's what they did. Today it is likely handled by an interrupt system, as described by Mr. Grigoni below.
*In any event*, you must remember that signalling between the subscriber and switch is STILL _DC_ off/on. A change in DC status (current to no current, or vice versa) indicates an 'event' has occurred that requires switch service. Usually that event is lifting the receiver to make or answer a call, or hanging up to terminate a call. But it can also be dial pulses. Software handles this, just as software handles the difference between wanting to make a call and answering a call (both are initiated by going off hook, but they are two different situations).
But, as stated before, the percentage of call traffic using rotary pulses is likely to be so small today that whatever extra machine resources they may--or may not--require is insignificant in the larger picture.
Could you provide the citation information for your source? Thanks.
The above is correct and common practice for computer systems. For the No. 1 ESS, they chose polling (scanning). In the 1A, they had a separate signal processor handle that stuff. In our context--interpreting dial pulses--it doesn't matter whether it is polling or interrupts.
ESS also has to deal with a variety of inter-office signalling arrangements, which may include DC pulse transmissions and signalling from other offices of an older design. Today everything is ESS, but when these boxes came out there was still a great deal of step and panel out there.
[The Bell System] converted a massive number of electro-mechanical switches to ESS. Pretty impressive achievement.
Would anyone know if the No. 4 ESS is still used as the long distance switch, or has it been superseded?
I forgot to mention another common switchhook DC signal "event" that must be timed: subscriber flashing, such as for call waiting or 3-way calling.
I was curious as to how many instructions a computer could execute while someone keys in a phone number. I wrote a little QBASIC program to count up executed instructions while I keyed in a number. Even keying as fast as possible the computer was still able to do 1,000,000 simple instructions (a counter increment)! (And those are Quick Basic instructions on a PC, which are not very efficient; a phone switch could do much more.)
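The same experiment is easy to re-create in any language. A minimal sketch (the 3-second keying time is an assumed figure, and modern interpreted Python is, like QBASIC, far slower than what a switch processor could do natively):

```python
# Re-creation of the QBASIC experiment: count how many trivial
# operations a machine performs in the few seconds it takes to key
# a phone number. DIAL_TIME_S is an assumed keying time.
import time

DIAL_TIME_S = 3.0
count = 0
deadline = time.perf_counter() + DIAL_TIME_S
while time.perf_counter() < deadline:
    count += 1          # one 'simple instruction', as in the original test
print(f"{count:,} increments while 'dialing'")
```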
Anyway, the point is that, dial pulse or Touch Tone, an ESS switch is doing plenty of other things during the time a human is dialing. Even in the early days it 'time shared'. It is not 'tied up' or waiting around; it is doing other things. Further, ESS has front-end processors to handle I/O needs.
Let's run some numbers for a big switch: 10,000 lines per prefix, and the switch handles 5 prefixes (50,000 lines). To reliably detect 20 PPS dialing requires a minimum of 80 samples/line/second, or 4,000,000 samples/second. The scan logic resembles:
top_of_loop:
    compute address of status port for this line
    compute address of 'prior data value' for this line
    read value from port
    compare to prior data value
    if changed, jump to 'service' label
    if have changes, jump to 'digit' label
loop_end:
    increment line id value
    if less than max_line_id, go to top_of_loop label
    set line id value to zero
    go to top_of_loop label
service:
    get time-tick from system clock/counter
    save as last_service_time for this line
    go to loop_end label
digit:
    get time-tick from system clock/counter
    compute current_tick - last_service_tick for this line
    if (difference < inter_digit_min), go to loop_end label
end_of_digit:
    count state changes, divide by 2, compute 'next digit' address
    store next digit
    go to loop_end label
This example _grossly_ understates the clock cycles required, but we're still looking at around 80 million instructions/second.
For starters, one wants the scan rate at least 50% higher, and to get two instances of the 'new' signal level before regarding it as 'changed', plus the logic for testing 'current' against 'last current' before jumping to the 'service' label. Probably closer to 200 million instructions/second.
That's 20% of a '1 _billion_ instructions'/second processor, being consumed regardless, even if _nobody_ is doing anything.
If one assumes that off-hook is detected by hardware, which generates an interrupt, and that the pulse-scanning logic is activated only then, staying active for 20 seconds, then with a switch-wide average of 20 outgoing calls/day/line, the scanner is active for a total of 400 seconds/line/day, instead of 86,400.
Assume that the interrupt service overhead is 40 instructions (_way_ high), and you've reduced the CPU 'overhead' load from 20% of capacity to 0.20% of capacity.
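The arithmetic above can be sketched in a few lines. The per-sample instruction count is an assumed round figure chosen to reproduce the 200-million-instructions/second estimate, and this simple version ignores the per-interrupt service overhead:

```python
# Back-of-envelope comparison of polling vs interrupt-gated scanning.
# All constants are the assumed figures from the discussion above.
LINES = 50_000                 # 5 prefixes x 10,000 lines
SAMPLES_PER_LINE_S = 80        # 4x oversampling of 20 pps dialing
INSTR_PER_SAMPLE = 50          # rough per-sample scan-loop cost (assumed)

# Continuous polling: every line, all day long.
polling_ips = LINES * SAMPLES_PER_LINE_S * INSTR_PER_SAMPLE

# Interrupt-gated: scan a line only during its ~20 s dialing windows.
CALLS_PER_LINE_DAY = 20
SCAN_WINDOW_S = 20
active_fraction = CALLS_PER_LINE_DAY * SCAN_WINDOW_S / 86_400

interrupt_ips = polling_ips * active_fraction
print(f"polling:   {polling_ips:>13,} instructions/s")
print(f"interrupt: {interrupt_ips:>13,.0f} instructions/s "
      f"({polling_ips / interrupt_ips:.0f}x cheaper)")
```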
1) Please explain how switchhook flashing is detected, and determined to be a flash, not a new call. By your logic merely scanning for flashing would overload the switch. Obviously it doesn't.
2) Not all lines of a switch can dial at the same time, only a small proportion.
3) Testing for dial pulses would be done only for subscribers actually dialing a call.
4) Some sort of scanning is still required for Touch Tone entry to collect the digits.
5) If scanning (polling) is done, it is done by a separate signal processor, not the CPU.
6) As the other poster described, probably scanning isn't done, but processing handled by interrupts when either a switchhook or dial pulse is transmitted.
7) Your description describes what a switch does _not_ do. More helpful would be a quotation from a citable text on what the switch does do. My source is a Bell Labs intro engineering & operations textbook. With far slower CPUs, they managed to scan.
I don't understand what point you're trying to make with all of this. As mentioned before, so few people make dial-pulse calls that the issue is moot. The phone companies dumped party line service when it became a burden to support; if dial were an equal burden it'd be long gone. Indeed, it would COST MORE to eliminate dial from the generics: remember how everyone emphasized how much testing is necessary for ANY change to the program. Pull out dial support and every function must be retested. Not worth the time, trouble, or risk.
[programming code snipped]
If interrupts are used, as we believe they are, there is no need to scan at all. Generate an interrupt for each DC signal event. This _must_ be done now to handle flashing and supervision.
Interrupts need not be handled by the CPU. Full-sized ESS had signal processors independent of the CPU to handle that. IBM mainframes have channel processors independent of the CPU to handle input/output devices, and they handle I/O interrupts and interfaces. (Low-end ESS, and mainframes intended for light duty, had the CPU handle that stuff to save money.)
***** Moderator's Note *****
The ILEC's didn't dump party lines: they simply withdrew the tariffs, and then offered the same service as "Ringmate". This is a win/win: it uses the equipment that would otherwise be idle, and gives parents a chance to have a separate number for the kids at minimal cost.
I doubt a switch owner would remove equipment once installed, including party line capabilities, but especially dial pulse: after all, there's no telling when someone with a 554 set on the wall of their bomb shelter will lock themselves in ...
I always maintained that switchhook flash is detected by -hardware-, just like the on-hook/off-hook transitions. It's a simple, *cheap* differentiator, and doesn't require 'real-time' reactivity. *ALL* it does is detect continuity changes that occur on the line, with a simple hysteresis function to suppress 'very short duration' false positives. By simply adding about 3 instructions to the interrupt service routine that responds to a DC-continuity change, one can make a decision as to whether the circuit was 'on hook' "long enough" to constitute a 'hang up', or whether it was only a 'flash'.
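That 'about 3 instructions' decision amounts to a single threshold test on the measured open-loop duration. A minimal sketch (the 800 ms cutoff is an assumed illustrative value, not a tariffed one):

```python
# On the continuity-restored interrupt, classify the preceding open
# interval. FLASH_MAX_MS is an assumed threshold, not a real standard.
FLASH_MAX_MS = 800

def classify_open(open_duration_ms: float) -> str:
    """Decide whether a loop-open interval was a hook flash or a hang-up."""
    return "flash" if open_duration_ms < FLASH_MAX_MS else "hang-up"

print(classify_open(450))    # a typical flash
print(classify_open(3000))   # the caller hung up
```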
The same hardware _could_ be used to detect dial-pulses, *but* on a DTMF-enabled line, where a 'hard real-time' task is already slotted to sampling the audio channel it is much more efficient to add the handful of instructions to -that- task, to sample the continuity state *without* the overhead of the interrupt. (This software-scanning over hardware- interrupt benefit exists _only_ because the scanning is _already_ being done for DTMF recognition. _One_ scan with two functions is generally more efficient than one scan plus one interrupt-service, even when the 2nd function is infrequently needed.)
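A hypothetical sketch of that 'one scan, two functions' idea -- the class and names are purely illustrative, not any switch's actual code:

```python
class Line:
    """Per-line state touched by the already-scheduled DTMF sampling task."""

    def __init__(self):
        self.audio = []          # stand-in for the DTMF detector's buffer
        self.last_dc = True      # loop closed = off-hook
        self.edge_count = 0      # continuity transitions seen this digit

    def sample_tick(self, audio_sample, dc_closed):
        """One pass of the sampling task, doing double duty."""
        self.audio.append(audio_sample)   # existing DTMF work (stubbed out)
        if dc_closed != self.last_dc:     # the added 'handful of instructions'
            self.last_dc = dc_closed
            self.edge_count += 1

# Five break/make pulse pairs, i.e. the subscriber dialed '5':
line = Line()
for _ in range(5):
    line.sample_tick(0.0, dc_closed=False)   # loop opens (break)
    line.sample_tick(0.0, dc_closed=True)    # loop closes (make)
print(line.edge_count // 2)                   # 10 edges -> digit 5
```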
*PROVING* that it is not done by software scanning all the lines -- like you have claimed as being done for off-hook detection.
Thank you for making the point, for me.
That runs counter to your claim that it is done by the *same* routine that detects on-/off-hook -- which, it should be obvious, must be scanning continuously.
It's worth noting that I have asserted from word one that pulse handling would be done _only_ during call set-up.
You've previously claimed that it is part of the routine that continuously scans all lines for on-/off-hook detection. [Have you] changed your mind on that?
To be technically accurate, software-based DTMF detection requires frequent _sampling_ of the particular line, during call set-up. *Or* a hardware-based digit decoder to be switched onto the line for the set-up interval.
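For the software-sampling case, per-frequency detection is commonly done with the Goertzel algorithm, which is cheaper than a full FFT when only the eight DTMF frequencies matter. A minimal sketch, assuming 8 kHz sampling and a synthetic DTMF '5' (770 Hz + 1336 Hz):

```python
import math

def goertzel_power(samples, freq_hz, fs_hz):
    """Relative power of one frequency, via the Goertzel recurrence."""
    k = 2.0 * math.cos(2.0 * math.pi * freq_hz / fs_hz)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

FS = 8000          # standard telephony sampling rate
N = 205            # classic Goertzel block size for DTMF at 8 kHz
tone = [math.sin(2 * math.pi * 770 * i / FS) +
        math.sin(2 * math.pi * 1336 * i / FS) for i in range(N)]

# The 770 Hz row tone should dominate the other three row frequencies.
powers = {f: goertzel_power(tone, f, FS) for f in (697, 770, 852, 941)}
print(max(powers, key=powers.get))
```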
It still takes 200+ million instructions/second of secondary-processor capacity, be it one 200+ million instruction/second processor or twenty 10+ million instruction/second ones. However you do it, that much processing power costs non-trivial money.
That is _exactly_ how I have claimed switch-hook detection was done, and that you've been insisting is wrong.
The fact remains that that is what is _necessary_ to do things the way you claim they are done.
The point *IS* that 'few people use pulse dialing', [and] that it _does_ require a "disproportionate" amount of switch resources to handle pulse vs tone dialing, [and] that those resources _do_ cost money. Distributing that increased cost burden over the *small* number of people who 'may' use it *DOES* justify a 'surcharge' for that functionality.
"Surprise, surprise, surprise!!"
That's exactly what I've been asserting the entire time.
That's exactly how I've been claiming on-/off-hook detection is done. For that detection it is desirable to have hysteresis in the detector so that it does -not- false-positive on short-duration events. (It's easier/quicker/cheaper/simpler to do that in hardware than filtering those spurious events in software.)
When one gets out of the call set-up phase, it is trivial to 'flip a bit' on a configuration register, and change the hysteresis constant to respond to flash-type signals, while still ignoring (in hardware) shorter-duration events.
I'm not going to debate 'who does what' within the 'committee' of processing elements inside a large computer system. You've still got to have 200+ million instructions/second of capacity at whatever level of processor is doing the line scanning, *if* comprehensive line-scanning is, in fact, being done per your claims. Plus, there are additional 'overhead' instructions at that level for (a) the interrupt service overhead, and (b) communicating the data to the 'higher level' processor.
Note: I was *deliberately* over-estimating the overhead of doing things via interrupts, to make a point. WITH _overly-expensive_ interrupt-driven logic, it is still one hundred times 'cheaper' in CPU requirements to use interrupts over pure scanning. Using 'realistic' costs for interrupt-driven overhead, the advantage only gets _bigger_.
*OR* the TT generator in the 'modern' set dies, and you can't hook-flash 10 or more times to reach the operator in a life-and-death situation.
We had two-party service and we received a letter that it was being discontinued statewide since it was not compatible with new equipment. (Previously it grandfathered to existing customers). Our line would be converted to a private line and the rate increased accordingly. If our telephone set needed modification that was our problem. There was no offer for any kind of new service. I understand this went on in several states. (It was discussed here, but no one seemed to know accurately the _current_ availability of party service throughout the country or how many lines still existed. New loop technology made them obsolete.)
As I understand it, party lines do require extra hardware to handle the ringing sides, plus administrative headaches. Further, the cost differential was no longer very significant, and not many people had it. It wasn't so much getting rid of equipment as not ordering the capability on new gear. That is, an old No. 1 ESS supported it, but perhaps the replacement No. 5 ESS did not.
So we're clear, what exactly are those "additional resources" you claim are necessary _solely_ to support pulse dialing? As mentioned, my source is the Bell Labs intro engineering/operations textbook. What is yours?
What are the numbers of pulse calls, number of total calls, and what is the dollar and cents cost of those "additional resources" to support those pulse calls?
From: "firstname.lastname@example.org"
To: email@example.com
Sent: Saturday, June 20, 2009 10:54:26 PM
Subject: Re: [telecom] Pulse dialing overhead, was: ANI vs. Caller ID [Telecom]
In Rhode Island, New England Bell did not have enough pairs to support the suburban boom of the 1950s. When my family moved in, in 1950, we had a two-party line shared with a house two doors away. I think it used reverse ring for the second phone; there was no ring code. When enough pairs were installed to provide private lines, it was a low- to no-cost switch. The telco didn't want party lines; it only provided them out of necessity. Having a line was not a guarantee that you could make a call: 'no dial tone' problems were common during peak demand times.
This was a common problem after WW II. There was a huge backlog in service requests, and the country had a great deal of prosperity, resulting in a big demand for service. In addition, the cold war had an expanded Defense Department which took up a lot of Western Electric's production. Hollywood made a silly Doris Day movie about it, "Pillow Talk".
Some new communities couldn't even get phones. Bell had kiosks on street corners with pay phones until they could run wires. The Pennsylvania Levittown had this problem.
Yes, it wasn't only a shortage of pairs to serve houses, but central office capacity. A postwar photo of a town's manual Central Office shows switchboard positions squeezed in places not normally used. Levittown PA had 17,000+ new homes and required a multi-story building to house the switch and business office. [Today the ESS #5 switch takes up a fraction of the #5 xbar floor space].
Outlying districts of cities were also built up, and had a similar shortage of wires and switching capacity. Party lines were mandatory, if you could even get a phone line.