Why is there a minimum spacing?

Our own Rich Seifert certainly can, but IIRC it has to do with keeping impedance discontinuities caused by taps far enough apart that they don't reinforce each other.

{google,deja} news is your friend.

Reply to
William P. N. Smith

According to the Spurgeon book, the spacing is a guideline to help avoid signal reflections resulting from too many transceiver taps being clumped together.

He goes on to say that maintaining an even 2.5m spacing isn't critical: when joining cable sections you can ignore the marks, and if two taps happen to land close together when cables are joined, that's okay too.

/chris

Reply to
googlegroups

As you realized, one bit-time at 10 Mb/s is 100 ns, which corresponds to 23.5 m of coaxial cable.

The basic problem is that transceiver taps appear to the transmission line as discrete, lumped capacitive loads; the specification mandates a maximum of 4 pF, but this is still significant. When the signal encounters this capacitance, it creates an out-of-phase reflection of a portion of the energy. To all other devices on the cable, this reflection appears as asynchronous "noise," i.e., a signal that interferes with the desired signal.
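For scale, here is a quick sketch (mine, not from the spec) of the reflection from a single 4 pF tap bridging a matched 50 ohm line, using |Gamma| = wCZ0/sqrt(4 + (wCZ0)^2) for a shunt capacitance:

import math

Z0 = 50.0      # ohms, nominal coax impedance
C  = 4e-12     # farads, the maximum allowed tap capacitance

for f_mhz in (5, 10, 100, 500, 800):
    w = 2 * math.pi * f_mhz * 1e6
    x = w * C * Z0
    gamma = x / math.sqrt(4 + x * x)    # |Gamma| for a shunt C on a Z0 line
    print(f"{f_mhz:>4} MHz: |reflection per tap| = {gamma:.4f}")

Each tap reflects well under 1% of the amplitude at 10 MHz, but the reflection grows with frequency, heading toward a short circuit for very fast edges.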

The situation to be avoided is where all of the transceiver taps are spaced such that the reflections from each of them add up in phase, thus combining *algebraically* (i.e., simple summation). The small reflection from 99 transceivers added up could create enough interference to cause bit errors. Ideally, one would want the transceivers to be *randomly* spaced along the cable; this would ensure that the reflections added not algebraically, but on a root-mean-squared basis, yielding much less reflected energy. In fact, my original proposal was to do exactly that; I even had a patent application prepared for a method of manufacturing cables with randomly-distributed markings for this purpose!
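A toy illustration of that algebraic-versus-RMS point (the 0.006 per-tap figure is the ~10 MHz value from the sketch above; everything here is illustrative, not the actual 802.3 analysis):

import cmath, math, random

r, n = 0.006, 99        # per-tap reflection magnitude, number of taps
coherent = n * r        # worst case: all reflections aligned in phase

trials = []
for _ in range(1000):
    # random tap positions give each reflection a random phase
    s = sum(r * cmath.exp(1j * random.uniform(0.0, 2 * math.pi))
            for _ in range(n))
    trials.append(abs(s))
rms = math.sqrt(sum(t * t for t in trials) / len(trials))

print(f"worst-case in-phase sum : {coherent:.3f}")
print(f"random-spacing sum (RMS): {rms:.3f}")    # ~ r * sqrt(n)

The coherent sum grows as n*r while the random sum grows only as r*sqrt(n) - roughly a factor of ten less reflected amplitude for 99 taps.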

As it turns out, random markings were neither practical (installers didn't like the idea, and neither did the cable manufacturers) nor necessary. I did extensive simulations of the resulting reflections from transceivers at various spacings, and empirically determined that 2.5 m was "good enough." It was relatively easy to mark the cables with a uniform 2.5 m marking; as the cable comes flying out of the extruder, it passes across a roller with a 2.5 m circumference, which places a mark at every rotation.
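The following is my reconstruction of that kind of spacing sweep, not Rich's actual simulation: phasor-sum the reflections seen by a station at one end of the cable from 99 uniformly spaced taps, at the 10 MHz fundamental of an all-zeros Manchester stream (velocity and per-tap reflection taken from the earlier sketches):

import cmath, math

v = 2.35e8     # m/s, approximate propagation velocity on the coax
f = 10e6       # Hz, fundamental of an all-zeros Manchester stream
r = 0.006      # per-tap reflection magnitude at 10 MHz
N = 99         # maximum number of taps per segment

beta = 2 * math.pi * f / v
for d in (2.5, 5.0, 10.0, 11.75, 12.5):
    # each reflection travels out and back, hence the factor of 2 in phase
    total = abs(sum(r * cmath.exp(-2j * beta * k * d) for k in range(1, N + 1)))
    print(f"spacing {d:>5.2f} m: |summed reflection| = {total:.3f}")

Only the half-wavelength spacing (11.75 m) makes all 99 reflections pile up in phase; the other spacings come out far smaller.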

The idea is not just a *minimum* 2.5 m spacing; it is that transceivers are only placed at the 2.5 m markings. However, as another poster noted, it's not all that critical; if a few transceivers are offset, or even lumped together, it is unlikely to cause a noticeable problem. I was just trying to design for the worst-case, figuring that it would surely show up *somewhere*, and that one installer would have no idea what the problem was.

By the way, that cable-spacing work, along with the work that defined the proper lengths to use for concatenating short coaxial cables into long runs, constituted a major part of my EE master's thesis some 25 years ago.

--
Rich Seifert                  Networks and Communications Consulting
21885 Bear Creek Way          (408) 395-5700
Los Gatos, CA 95033           (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

IIRC, the idea is that each connection to the cable causes an impedance discontinuity. Spreading them out minimizes the problems caused by those connections.

Reply to
James Knott

Why not post the link here?

Reply to
James Knott

I wonder if anyone can give a definitive answer as to why there is a minimum spacing specified on (some) ethernet cable - the thick stuff with markers every 2.5m, for example, which is 1 bit of delay at 10 MHz.

There is some mention of it on various web sites but the reasons for it are not stated. Maximum lengths etc. are simple enough to understand: you need to be sure that collisions are not late. The only reason I can think of for specifying a minimum distance is to maximise the effect of a collision when two MAUs start transmitting at the same time. Only I can't see that it would. They won't actually start together. If they're waiting for the line to become free, the last data going past them will make sure one starts after the other. So the second will start up at the exact moment the first one's data arrives, and will experience a zero time-difference collision. The first one will have a two-bit difference. Even if there's an advantage in that - which I don't understand - it assumes exactly one 2.5m section of cable. But the 2.5m is only a minimum: the spec doesn't require exact multiples of 2.5m over hundreds of metres! So I'm racking my brains as to why it was ever specified at all.

Reply to
Henry

Actually, I did all of the analysis in the time-domain, rather than the frequency-domain, although of course they are fully interchangeable.

I started where a communications systems designer SHOULD start--with a requirement for a maximum bit-error rate (which translates into a frame-loss rate). For the specified BER of 10^-9 (worst-case), using Manchester encoding, the minimum signal-to-noise ratio turns out to be 14 dB, which is a factor of 5:1 in voltage terms. You then take the worst-case minimum transmit level and attenuate it by the maximum amount possible (worst-case cables, longest specified lengths) to calculate the minimum received signal level. The allowable noise at that point must be no more than one-fifth of the minimum received signal to achieve the desired BER.

(I could re-create the actual numbers, or even find my old notebooks if I looked, but my point here is to show methodology, which should apply to a wide variety of communications systems, rather than show the specific numbers for a now-obsolete system like coaxial Ethernet.)
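To make that arithmetic concrete, here is a minimal sketch of the budget calculation. Only the 14 dB figure comes from the discussion above; the transmit level and cable loss below are hypothetical placeholders, not the 802.3 values:

import math

snr_db = 14.0                        # required SNR for BER 1e-9, Manchester
snr_voltage = 10 ** (snr_db / 20)    # about 5:1 as a voltage ratio

tx_min_v = 1.0         # hypothetical minimum transmit amplitude, volts
cable_loss_db = 8.5    # hypothetical worst-case end-to-end attenuation

rx_min_v = tx_min_v * 10 ** (-cable_loss_db / 20)
noise_budget_v = rx_min_v / snr_voltage

print(f"voltage SNR required : {snr_voltage:.2f}:1")
print(f"min received signal  : {rx_min_v:.3f} V")
print(f"total noise allowance: {noise_budget_v:.3f} V")

That final number is the total noise allowance that then gets divided among the contributors described below.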

I then apportioned the allowable noise among the various contributors: tap reflections, reflections from cable impedance variations, external EMI, etc. The tap reflection allowance resulted in the specification for maximum shunt capacitance and the "2.5 meter" rule. The cable impedance allowance resulted in the specification for maximum deviation from nominal impedance (50 +/- 2 ohms), and the rules for concatenating long lengths from shorter pieces. The EMI allowance resulted in the specification for transfer impedance of the cable shield (effectively mandating the quad shield design).

Our motto was always that the system had to work in the worst-case. Sure, most environments were much more benign than we assumed for the design criteria; those environments would experience a much better BER than worst-case. But even the worst environment would behave acceptably. When you are planning for millions of networks, and tens-of-millions of installed devices, even 99.9% assurance means a lot of angry customers.

--
Rich Seifert                  Networks and Communications Consulting
21885 Bear Creek Way          (408) 395-5700
Los Gatos, CA 95033           (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

William P. N. Smith said

Oops, just realized there's a factor of 10 missing there. My attempted guess at the reasoning was wrong... The mystery deepens.

Thanks. I've found some stuff from Rich Seifert going back to 1980-something which explains it, sort of, though it's a bit woolly - not Rich's explanation but the thinking behind it.
Reply to
Henry

That doesn't take me anywhere.

It's a minimum distance, though the old thicknet cables had specific points marked on the sheath where a vampire tap could be attached.

Only in that impedance discontinuities will create standing waves, which can interfere with the signal.

Reply to
James Knott

In theory, practice follows theory. In practice, it doesn't. ;-)

Reply to
James Knott

James Knott said

I was going to do exactly that but I tried to retrace my search and couldn't find it :( I may have been mixing two posts in my head.

Still, this is short and simple:

Unfortunately it seems 802.3 is ambiguous (anyone got the thing?) and can't make up its mind whether 2.5m is a minimum or whether you're supposed to tap in ONLY at multiples of it.

The fact that it talks about non-alignment suggests someone must have thought there were significant and potentially troublesome components in the waveform up to 100s of MHz. I can believe that, since the small mismatches in resistive impedance become very large mismatches - effectively a short circuit - for very fast edges which encounter a capacitive tap. However, it then doesn't make sense to insist on using exact multiples, as this will tend to create standing waves.

But at least I know it's nothing to do with collision detection.

Reply to
Henry

(snip regarding tap spacing on thick ethernet)

Having actually put taps into cables in cable trays and suspended ceilings, I can say it is sometimes hard to know where the other taps are. Sometimes I have done it by feel, when I could barely see the cable. (Well, enough to know it was the right one.)

Then again, it is hard to know that there aren't more than 100.

As previously discussed here, random spacing would be even better, but guaranteeing it is hard, and it seems that making a machine to mark cables at random spacings is also hard.

-- glen

Reply to
glen herrmannsfeldt

(snip)

There is physics, and then there are rules. The rules are set so that the system will work within the physical limitations.

In many cases the rules are more strict than necessary to make them simpler. The 2.5m tap rule is simple to state, not too restrictive for actual use, and allows the system to work.

In many cases you can't see all of the cable, so you couldn't guarantee a minimum. With cable marked at 2.5m you can be sure that if you tap at marks you meet the requirement.

For thin ethernet the rule is 0.5m minimum. As BNC cables commonly come premade in lengths that are not multiples of 0.5m, it is good that it isn't required to be multiples. Also, in the thin ethernet case, the real restriction is against the lumped impedance effect of many taps close together as seen from some distance away. A very short cable with a (relatively) large number of taps will work just fine.
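A rough check of that lumped-impedance point (my sketch; I've reused the thicknet 4 pF per-tap figure for illustration, since I don't have the thin-net value to hand): n taps clustered within a small fraction of a wavelength act like a single shunt capacitance of n*C, so the cluster's reflection grows linearly with n instead of averaging out:

import math

Z0, C, f = 50.0, 4e-12, 10e6    # ohms, farads per tap (assumed), hertz
w = 2 * math.pi * f

for n in (1, 5, 10, 20):
    x = w * (n * C) * Z0        # treat the cluster as one lumped n*C
    gamma = x / math.sqrt(4 + x * x)
    print(f"{n:>2} clustered taps: |reflection| = {gamma:.4f}")

Even 20 lumped taps only reflect about 12% of the amplitude here, which is why a short, heavily tapped cable can still work.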

-- glen

Reply to
glen herrmannsfeldt

In coaxial Ethernet (the subject of the original post), collisions are detected by measuring the average DC voltage on the cable, NOT by comparison between the transmitted and received signal. A tap reflection does not change the average DC; thus, while it might cause data corruption, it will never cause a false collision.
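A sketch of that DC test (the voltages here are approximate, typical of the 0 to -2 V current-mode signalling; they are not the exact 802.3 thresholds):

# one Manchester transmitter swings roughly 0 to -2 V at ~50% duty,
# so its low-pass-filtered (DC average) contribution is about -1 V;
# transmit currents add on the coax, so DC averages add too.
def collision_detected(avg_dc_volts, threshold=-1.5):
    return avg_dc_volts <= threshold

one_sender_avg  = -1.0
two_senders_avg = -2.0
with_reflection = -1.0    # a reflection is AC: it corrupts bits,
                          # but leaves the DC average essentially unchanged

print(collision_detected(one_sender_avg))     # False
print(collision_detected(two_senders_avg))    # True
print(collision_detected(with_reflection))    # False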

--
Rich Seifert                  Networks and Communications Consulting
21885 Bear Creek Way          (408) 395-5700
Los Gatos, CA 95033           (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

Reply to
Rich Seifert

What's the difference between two signals colliding and a signal and its reflection colliding?

Reply to
James Knott

(snip)

I am not sure how accurate the velocity factor is, but...

Constructive interference would result from a half wavelength spacing, so 11.75m. A 500m cable could have 43 taps with that spacing, which could be significant. If you put 44 taps equally distributed over the same distance they will pretty much cancel each other out. If you put 43 taps spaced at 11.75m and the velocity factor is off by 2% they also pretty much cancel out.

The first odd multiple of 11.75m that comes close to a multiple of 2.5m seems to be 35.25m, only 0.25m from the 35m mark.

It seems to me very unlikely that, unless someone intentionally spaced them at 11.75m, they would cause problems, but it is nice to have a rule with a known effect.
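A quick script to check that alignment (assuming the 23.5 m bit-length from earlier in the thread, so a half wavelength at 10 MHz is 11.75 m):

half_wave = 11.75    # metres, lambda/2 at 10 MHz on this cable

for k in range(1, 10, 2):    # odd multiples only
    d = k * half_wave
    offset = min(d % 2.5, 2.5 - d % 2.5)
    print(f"{k} * 11.75 m = {d:7.2f} m -> {offset:.2f} m from a 2.5 m mark")

Both 35.25 m and 82.25 m come within 0.25 m of a mark; nothing lands exactly on one.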

-- glen

Reply to
glen herrmannsfeldt

snipped-for-privacy@marget.com said

I can't see why it would matter even an itsy-witsy little bit. But that's where people seem to have different theories.

Reply to
DHP

(snip) (I wrote)

I don't believe that a scrambler is used, and repetitive bit streams are fairly common. One could easily imagine the entire cable filled with all zero bits. Otherwise, yes, for each combination of bits there should be an appropriate combination of taps where they will add constructively.

Whatever the waveform looks like, it will add in phase with a copy of itself delayed by one cycle. For a stream of zero bits, that delay corresponds to a reflection point half a bit-length (11.75m) down the cable.

-- glen

Reply to
glen herrmannsfeldt

HA! Thanks, I needed that! Can't explain it to most folks, but thanks anyway!

Reply to
William P. N. Smith

Rich Seifert said

Hi Rich, thanks for all that.

Could I now quiz you a bit more? 4 pF on a 50 ohm system gives a characteristic time of some 200 ps, or a frequency of about 800 MHz. So I'm guessing (having forgotten the theory ages ago), without doing a phasor diagram, that you'd get a reflection coefficient of ~f/800 (f in MHz) for each component. But at the same time, you only need to worry about reflections that interfere constructively, i.e. over about half a wavelength = 117m/f.

So if the allowable reflection is 5%, the number of taps in 117/f m of cable is 5/100 * 800/f, which is about 1 tap per 2.5m, though there should be the odd fudge factor to upset the convenient result. Anyway I can see the point of having a lowish average density of taps! Would I be right in thinking that the requirement to place taps at equal spacing is a result of needing to cater for the higher frequencies? My thinking is that the allowable density of taps taken over a fraction of a wavelength brings you down to just a small handful of taps so you may as well just space them equally rather than worsen the noise with a cluster? Is that the "real" criterion - to avoid clusters over short distances? It would seem to assume that NICs are sensitive to out-of-band noise.
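Out of curiosity, here's that back-of-envelope estimate in code form (same assumptions: 4 pF, 50 ohms, a 5% total reflection allowance, and the ~f/800 reflection guess - none of which are the spec's own numbers):

import math

Z0, C = 50.0, 4e-12
f_c = 1 / (2 * math.pi * Z0 * C)    # ~796 MHz corner frequency
v = 2.35e8                          # m/s, propagation velocity

for f in (10e6, 50e6, 100e6):
    gamma = f / f_c                 # rough per-tap reflection, ~f/800MHz
    half_wave = v / (2 * f)         # window over which reflections add up
    n_taps = 0.05 / gamma           # taps allowed before hitting 5%
    print(f"{f / 1e6:>5.0f} MHz: lambda/2 = {half_wave:6.2f} m, "
          f"max taps = {n_taps:4.1f}, min spacing = {half_wave / n_taps:.2f} m")

The frequency cancels out, leaving a constant minimum spacing of about 2.9 m - close to the 2.5 m rule, fudge factor and all.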

Thanks for your time in answering this, it seems to crop up regularly - though the google archive seems to peak in the early 90's :)

Oh yeah, my Masters is even older than yours!

Reply to
DHP
