# Program to Estimate Cat 5 Cable Length

• posted

Hello all, I just have a quick question. Has someone written a program that measures, say, the ping between a computer and a router and takes that and estimates the length of a network cable? I was just wondering if something like this was out there.

Thanks, Dan P

• posted

Not likely. The total transit time of the ping and echo over the wire is on the order of a microsecond. The time it takes the computers to transmit, handle and receive the ping or echo is far greater than that. However, there is an instrument that can not only measure cable length but also show faults along that length. This device is called a "time domain reflectometer" (TDR).

N.B. The velocity of a signal in a vacuum is about 300,000,000 metres per second. The maximum twisted-pair Ethernet run is 100 m. Allowing for 200 m total echo distance and a velocity factor of roughly 0.67, that brings us to about 1 uS total transit time for ping and echo. Also, at 100 Mb/s, a minimum-length ping frame will be more than 5 uS long.

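
The arithmetic above can be sketched in a few lines of Python. This is only a back-of-the-envelope check; the 0.67 velocity factor is an assumed typical value for UTP, and the frame timing assumes a minimum 64-byte Ethernet frame plus preamble:

```python
# Rough check of the transit times discussed above.
C = 3.0e8      # speed of light in vacuum, m/s
NVP = 0.67     # assumed nominal velocity of propagation for UTP

cable_m = 100.0                              # maximum twisted-pair run
round_trip_s = (2 * cable_m) / (C * NVP)     # echo travels there and back
print(f"round trip over {cable_m:.0f} m: {round_trip_s * 1e9:.0f} ns")

# Serialization time of a minimum Ethernet frame at 100 Mb/s:
bits = 64 * 8 + 64           # 64-byte frame plus 8-byte preamble
serialize_s = bits / 100e6
print(f"minimum frame at 100 Mb/s: {serialize_s * 1e6:.2f} us")
```

The point of the comparison: merely clocking the frame onto the wire takes several times longer than the entire round trip over the cable, before host processing is even counted.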
• posted

The time would be much, much too small in comparison to the electronics.

• posted

It just doesn't make sense: to accommodate all possible combinations of lengths that a pair of Ethernet devices may encounter (from 0 to 100 meters), the devices are designed to hold the received data in buffer memory for at least the duration of the longest trip, i.e. 100 meters. Therefore, any measurement you might be able to take this way (without arguing whether it's even possible to measure such a tiny time interval with devices not specifically designed for that) will always show one length: 100 meters (295 ft).
• posted

NVP (Nominal Velocity of Propagation) in a UTP cable would be around 0.67 times the velocity in a vacuum. That makes it "only" about 200,000,000 meters per second ;-) It really doesn't change much about the particular matter, though, so I agree with everything else.
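
Since a TDR (mentioned earlier in the thread) works from exactly this velocity figure, here is a minimal sketch of the arithmetic it performs, assuming the 0.67 NVP above; real instruments let you dial in the NVP of the cable under test:

```python
# How a TDR turns a reflection delay into a cable length.
C = 3.0e8     # speed of light in vacuum, m/s
NVP = 0.67    # assumed velocity factor for the cable under test

def tdr_length_m(reflection_delay_s: float, nvp: float = NVP) -> float:
    """Distance to the fault or cable end; the pulse travels there and back,
    so the one-way length is half the total distance covered."""
    return (C * nvp * reflection_delay_s) / 2

# A reflection arriving 1 us after the outgoing pulse implies roughly 100 m:
print(f"{tdr_length_m(1e-6):.1f} m")
```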

• posted

Actually, the longest trip may be far greater than 100 m. For example, on a 10BASE5 network the maximum distance is 500 m, and fibre can go much further than that. Even with Cat 5 on a switch you can go 200 m, etc. Ethernet, being a layer 2 protocol, will make sure the packet gets placed on the wire without experiencing a collision. Once 512 bits have been transmitted, the frame is considered collision-free, and the Ethernet portion of the stack has no further concerns about the data, such as whether it made it to the destination intact. It then becomes the responsibility of a higher layer (either TCP or the application, if UDP) to ensure end-to-end integrity. UDP apps might not worry about that.
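
The 512-bit collision window mentioned above can be put in time terms with a quick sketch (classic 10 and 100 Mb/s rates assumed):

```python
# Slot time: how long 512 bits take to transmit at classic Ethernet rates.
SLOT_BITS = 512  # bits that must go out before a frame counts as collision-free

# Mb/s -> microseconds: bits divided by (megabits per second) yields us directly.
slot_times_us = {rate: SLOT_BITS / rate for rate in (10, 100)}

for rate_mbps, slot_us in slot_times_us.items():
    print(f"{rate_mbps} Mb/s: slot time = {slot_us:.2f} us")
```

This window is what bounds the collision domain diameter, which, as the post notes, is a separate constraint from the 100 m signalling limit of the cable itself.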

The 100 m Cat 5 limitation is due entirely to signalling limits and has nothing to do with collision detection.

• posted

You got it! 295 ft is actually 90 meters, which is what I'm usually thinking about: sans the patch cords ;-)

• posted

It's the first I've heard of it, and I've spent 23 years in the telecom industry. There are a lot of things that "make sense" but are in fact fiction.

Also, I don't recall AT&T being much in the local LAN business. Phone cables are capable of much greater distances.

• posted

I agree with you, James. I was trying to make the point that any delay that actually has anything to do with the real cable length (no matter how small or large) will essentially be "normalized" to the maximum allowable limit at the hardware level.

Also, being the cable guy I am, I have another, fully "passive physical level" explanation of the 100 meter cable length limit. As I read in a source whose name and origin I cannot recall, some 25 years ago AT&T commissioned a survey of about 1000 (OK, make it any statistically significant number; I don't recall the actual figure anyway ;-)) commercial buildings in the US, and found that more than 80% of the usable spaces in the buildings surveyed could be reached by a cable 100 meters long from the telecom closet. Thus they decided to stick with that length as a nice round figure, and the electrical characteristics have been specified to max out at that length ever since. The electronics component manufacturers in turn had to work around that electrical limit to make sure their equipment would work on the installed base of cable. I don't know if that was true or not, but it made sense to me when I read it.

• posted

I was referring to the PDS (later SYSTIMAX) unit of AT&T. These guys had a lot to do with LANs and little to do with traditional telecom. But I do admit there might be a great deal of urban legend here.

• posted

I don't know if your story is true or not, but I was with Datapoint for 10 years, from 1978 till 1988. It was in 1976 that Datapoint introduced ARC, the Attached Resource Computer, a networking system that ran on 93 ohm coax. This was a token-passing system, or more properly CSMA/CA (carrier sense, collision avoidance), unlike Ethernet's CSMA/CD (carrier sense, collision detection).

ARCnet ran at 2.5 Mb/s and was a very robust system for its time. It was around 1981 or '82 that the Norwegians (or was it the Swedes, or even possibly the Finns) were redoing their national infrastructure and beginning to deploy data networks throughout the country. It was their intent to become a "cashless" country, allowing a credit card or key to automatically debit a person's bank account for any type of purchase.

Being unable to wire the country with coax (fibre was still a laboratory toy), they developed media converters to let coaxial-cable-based systems such as the IBM 3270, Wang and even Datapoint run over regular telephone cable. Because telephone cable varied in gauge, twist (lay), insulation type and any number of other variables including DC resistance, the practical distance was set more by the physical and electrical limitations of the cable in use than by any "standard distance to wire a floor."

It wasn't too long after this that IBM developed their specification for running Token Ring over UTP. The G21 spec, as I referred to it (the first letters of the spec were G21-), required the twisted-pair cable to meet certain specifications for capacitance, DC resistance, gauge and other requirements in order to carry the 4 Mb/s signal. Again, the type of insulation, capacitance, resistance and other factors limited the distance the cable could carry a usable signal. It wasn't until '91 or so that Cat 3 came about, and I referred to the earlier G21-spec cable as "Cat 2 1/2": it would work, but not always well or at longer distances.

The insulation also had a lot to do with it. This was all pretty much before plenum cable and Teflon insulation, so we were dealing with PVC and other types of plastic. The equipment used to manufacture cable also didn't have the tolerances that are commonplace today, so you could get marked variations in cables from the same manufacturer off cable-making machines that were side by side, or even from the same machine as different spools of copper were loaded or a new batch of plastic was added to the extruders.

From my experience, the 100 meters had a lot more to do with the physical and electrical limitations of the copper cable than with any "standard cable run." If you go back and can dig up a '91 edition of the 568 standard, or even TSB-40 or 40A, I think you will see that Token Ring ran greater distances than Ethernet on the same cable.

Rodgers Platt

Cabling-Design.com Forums website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.