History--Bell toll routing machine cards--other applications? [telecom]

A critical part of telephone switching is routing a call--the 'road map'--from the calling exchange to the called exchange. The greater the distance of the call, the more routes it may use.

When setting up the path of a call, the switching machines need to know what route to use, and what alternate routes to use in case the main routes are busy. Before the days of computers, the routing 'knowledge' was stored in various ways. In the days when trunks were very expensive, it was critical to make good use of them--to have enough to meet busy-hour demand, but not to waste them. A great deal of effort went into efficient trunk utilization over the years.
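
To make the trunk-utilization point concrete, here is a minimal sketch of the kind of sizing calculation traffic engineers use: the standard Erlang B formula, computed via its usual recurrence. The offered load and blocking target in the example are made-up illustration values, not Bell figures.

def erlang_b(erlangs, trunks):
    # Erlang B blocking probability via the standard recurrence:
    #   B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))
    b = 1.0
    for m in range(1, trunks + 1):
        b = erlangs * b / (m + erlangs * b)
    return b

def trunks_needed(erlangs, blocking_target):
    # Smallest trunk group whose busy-hour blocking meets the target.
    m = 1
    while erlang_b(erlangs, m) > blocking_target:
        m += 1
    return m

offered = 20.0   # hypothetical busy-hour load, in erlangs
target = 0.01    # hypothetical 1% blocking objective
m = trunks_needed(offered, target)
print(m, erlang_b(offered, m))   # about 30 trunks, blocking just under 1%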

For long distance routing, Bell Labs devised an ingenious steel punched-card memory. For each area code, and sometimes for exchanges within an area code, a primary and alternate routing code was punched into thin steel cards. Magnets would pull up the appropriate cards using notched tangs, and then photocells would read off the information (an early application of transistors). That information would then guide the switching equipment. The box could hold and read 1,000 cards. Each switch had at least one box. DDD couldn't have existed without this component.
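
Functionally, the card translator amounted to a read-only lookup table: the dialed destination code selects a card, and the card yields a primary route plus alternates to try if trunks are busy. Here is a toy sketch of that idea; the codes and route names are hypothetical, and the real card format and route-selection logic were of course far more elaborate.

# Toy model of the card-translator idea (all codes/routes hypothetical).
ROUTING_CARDS = {
    "312": {"primary": "CHI-TOLL-1", "alternates": ["STL-TOLL-2", "KC-TOLL-4"]},
    "213": {"primary": "LA-TOLL-3",  "alternates": ["SF-TOLL-1"]},
}

def pick_route(destination_code, trunk_is_busy):
    # Return the first non-busy route on the card, or None if all are busy
    # or no card is punched for this code.
    card = ROUTING_CARDS.get(destination_code)
    if card is None:
        return None
    for route in [card["primary"]] + card["alternates"]:
        if not trunk_is_busy(route):
            return route
    return None

busy = {"CHI-TOLL-1"}                          # pretend the primary is busy
print(pick_route("312", lambda r: r in busy))  # -> STL-TOLL-2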

Remember, in 1950, electronic computers were still essentially laboratory curiosities. Random access memory of the size needed for routing information did not yet exist. (The disk drive wouldn't be invented by IBM until several years later, and the magnetic drum wasn't big enough.)

I was wondering: This random access metal card storage device seems like a very powerful tool. Would anyone know if IBM ever considered it as a read-only memory store for its early machines? It seems like a useful way to store machine accessible information.

Thanks for your help. Any comments about toll routing then or today would be appreciated. [public replies, please]

[Information taken from the Bell System Eng & Sci history, Switching, 1925-1975.]
Reply to
hancock4

Here is one such translation card:

formatting link
And the translation machines looked like this:

formatting link

But more interesting to me are Bell's experiments with flying spot and barrier grid storage devices:

formatting link

Reply to
T

Bell's CRT tubes seem to be similar to the "Williams" tubes used as memory in IBM's earliest computers. IBM found them very unreliable and in need of constant care. As soon as core was perfected, IBM retrofitted core onto all of its CRT-memory machines. Apparently Bell had more success.

In reading the Bell history, it seems Bell utilized its own technologies rather than off-the-shelf ones, and stuck with them longer. For instance, Bell kept punching AMA onto paper tape long after magnetic tape would have been much more efficient for that purpose. They also seemed to wait a long time before adopting core memory.

Reply to
hancock4

Barrier Grid and Flying Spot were only used on the pre-Morris ESS test in 1958. After that, Bell switched to twistor, which was their version of core memory.

Reply to
T

I was once in a building where a whole floor of those translation machines was operating. The power consumption of those machines must have been very high. They were noisy; all those steel cards moving up and down created an impressive din.

Reply to
1100GS_rider

The Williams tube was used on the Manchester University Computer long before IBM took the idea over. It turned out not to be a very good idea; CRTs aren't all that stable and they need constant refresh (just like modern DRAM).

Bell built as much as possible in-house, they built it to last, and they ran it as long as they could, until it was more cost-effective to employ something else. They did not push the envelope if they could avoid doing so. Consequently, the telephone system was very stable.

--scott

Reply to
Scott Dorsey

Basically true.

But I wonder if other organizations, such as IBM or other computer companies, made use of Bell's card translator device. It looks like a great random-access read-only memory. I have no idea of its cost. I think its capacity, even holding 1,000 cards, was not very high--maybe roughly 15 numeric digits per card. I don't know the capacity of magnetic drums of that era. One problem with drums might have been patent rights; I think ERA/Univac owned them.

The first IBM disk drive held 5 million characters--obviously far, far more--and that was soon expanded to 50 million.
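
For a rough sense of scale, here is the back-of-the-envelope arithmetic behind that comparison, taking the 1,000-card, ~15-digit-per-card estimate above at face value (it is an estimate, not a spec):

cards = 1_000
digits_per_card = 15                       # rough estimate from the post above
translator_digits = cards * digits_per_card
ramac_chars = 5_000_000                    # first IBM disk drive, in characters

print(translator_digits)                   # 15000 numeric digits
print(ramac_chars // translator_digits)    # on the order of 300x larger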

As to Bell doing everything in-house: until the 1970s, this policy made sense. Since Bell was totally responsible (today we'd call it "DBOM"--design, build, operate, maintain), it was to their advantage that maintenance be as cheap as possible and that equipment life be maxed out. Also, this meant high reliability in tough service, vital for customer satisfaction. Unfortunately, in the 1970s some ancient equipment was kept in service or not well maintained (letting line finders fail in an SxS office was a disaster).

Also, for ESS development, commercial electronic computers did not have the 24/7 reliability that ESS required.*

Non-electronic business machines tended to have much longer lifespans than electronic machines. This is ironic, in that electro-mechanical and mechanical gear has moving parts that wear out and need regular service, while electronics does not (other than the vacuum tubes).

The amount of effort to cut over a central office was absolutely enormous. It's understandable that this was done only rarely.

Back in the 1950s and 1960s, even if a central office was 'left alone', it actually wasn't. There were always wiring changes to accommodate new trunking and routing, and likely expansion of subscribers. New features, like Touch-Tone and DDD, were added.

Bell did buy lots of PDP minicomputers for various applications, both in actual switching and in administration. I forget where, but I think they used PDP computers (and the IBM System/7) in upgraded AMA. I think it was for the PDP that Bell developed "C". I think they used the big IBM mainframes for customer billing and general accounting, but minicomputers for things like plant administration.

*I'd love to visit Morris, IL, and try to find out whether there are any newspaper articles or documentation about the ESS experiments held there, such as what the customers thought of it.
Reply to
hancock4

I doubt it. IBM certainly didn't.

Of course. Under rate of return regulation, the more they spent, the more they made. They had a great incentive to do lots of R&D and capitalize the costs.

No, C was originally implemented on a GE 635 and then ported to the PDP-11 and other computers.

Regards, John Levine, snipped-for-privacy@iecc.com, Primary Perpetrator of "The Internet for Dummies", Information Superhighwayman wanna-be,

formatting link
ex-Mayor "More Wiener schnitzel, please", said Tom, revealingly.

***** Moderator's Note *****

So now I get to ask the question which has been on my mind for years:

Did Kernighan really admit that "C" was just a bet between him and Ritchie to see who could write the longest function?

Bill Horne Temporary Moderator

(Please put [Telecom] at the end of the subject line of your post, or I may never see it. Thanks!)

Reply to
John Levine

"not exactly" -- at least according to Bell Labs official history at :

The "whole thing" got started with a PDP-7. with a whole 8k of memory.

Development tools resided on the GE635 and produced PDP-7 executables.

Initial work was in assembler language, using the 635-hosted cross-assembler.

They then started developing a 'native' PDP-7 environment that was to become known as UNIX. When they had the basics -- editor, assembler, shell -- "the link with the 635 was severed". This was 1969-70.

The "B" language (described as "BCPL, cut down to run on a PDP-7 with 8k, filtered through Thompson's brain") was developed on the stand-alone PDP-7 environment.

A successful proposal (based on the usefulness of "PDP-7 UNIX") to acquire a PDP-11 was floated, and after arrival (the CPU arrived in late summer 1970, but no disk drives until 12/70), development work migrated to the -11.

'B' begat 'NB', which begat 'C'.

"B" was word-oriented, and implemented as an interpreter (slow!), so it was modified into "NB" ('new' B) for the -11. NB included strict data types, and was implemented as a compiler. This is into early 1971.

Dennis Ritchie had an early version of C running in early 1971, but was still adding 'basic' features -- e.g. 'structure' types -- and improving the compiler code through '71. His bio shows '72 as the year for 'Creates C language'. It was also in the summer of '72 that UNIX was rewritten in C.

Reply to
Robert Bonomi

I can't answer your question, Bill, but for a funny article along similar lines, see:

formatting link

Reply to
Rich Greenberg

Right, Ken programmed his original version of Unix in assembler.

Then they rewrote it mostly in C on a PDP-11, largely as an experiment to see if Dennis' new C language was good enough to use for an operating system. Early PDP-11 versions of Unix still had a fair amount of software written in assembler, including the entire Fortran compiler and runtime library.

Regards, John Levine, snipped-for-privacy@iecc.com, Primary Perpetrator of "The Internet for Dummies", Information Superhighwayman wanna-be,

formatting link
ex-Mayor "More Wiener schnitzel, please", said Tom, revealingly.

Reply to
John Levine

In the 1970s, I toured the underground facility in San Luis Obispo, CA, where this cable terminated, several times. As I recall, the building goes down three stories and is mounted on springs to survive an earthquake or nuclear blast. The single coax to Hawaii terminated in this building. Vacuum tube repeaters on the ocean floor were powered by DC sent along the coax. As I recall, they supplied 1.5 kV positive on the center conductor in SLO and 1.5 kV negative on the center conductor in Hawaii (or vice versa). I'm not sure if the total voltage was 3 kV or 6 kV (it's been a while). They ran frequency division multiplexing using single sideband to carry multiple telephone calls down the cable. They also used TASI

formatting link
to increase the capacity of the cable. Later, as the rest of the long distance network was converted to digital, I understand the frequency division multiplex equipment was replaced with very high speed modems that used the entire capacity of the cable for digital transmission. Finally, as previously mentioned in this thread, the coaxial cable was abandoned and replaced with fiber. I believe we now have several undersea fibers terminating in San Luis Obispo, including AT&T, Global Crossing, and perhaps others. I suspect these tie in to fiber running along the railroad right of way.
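
On the powering question above: if the two ends really did feed +1.5 kV and -1.5 kV onto the center conductor, the potential difference driving the series-connected repeater chain would simply be the sum of the two feeds. A trivial sanity check, using only the recollected figures above:

v_slo = 1500.0        # volts on the center conductor at SLO (as recalled above)
v_hawaii = -1500.0    # volts on the center conductor at Hawaii (as recalled)
print(v_slo - v_hawaii)   # 3000.0 V end to end across the repeater chain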

Back in the 1970s, the AT&T Long Lines underground facility had a brochure with the headline "San Luis Obispo - Communications Center of the World."

Here is a link I've found about the facility:

formatting link

Harold

Reply to
harold
