I am using CobraNet to transmit audio over 100Mbit Fast Ethernet. I'm aware of the limitations using CAT5 and multimode fibre, however I may need to run over other copper circuits.
What is the minimum bandwidth required for 100 Mbit Fast Ethernet to work correctly?
Other copper circuits? What does that mean, exactly? I believe that if you meet the CAT5 specs you're guaranteed to carry 100BaseT (didn't someone prove Ethernet over a barbed-wire fence?); anything less will be at the mercy of your patron saint, the equipment involved, and the phase of the moon.
Thanks, but I don't think you've grasped my point.
If you transmit Fast Ethernet over copper, or anything else for that matter, with a bandwidth of 100MHz, then at the other end, instead of a square wave, you will get a sine wave, which, I assume, will not be acceptable.
The bandwidth of the connection determines the rise time of the Ethernet's 100MHz square-wave pulses, so the question is simple: what is the minimum bandwidth needed to achieve the correct rise time?
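As a rough way to put numbers on the rise-time question, here is a sketch using the generic single-pole rule of thumb t_r ≈ 0.35 / BW (common signal-integrity folklore, not anything taken from the 802.3 spec):

```python
# Rough sketch: relate channel bandwidth to achievable 10-90% edge rise time
# using the generic single-pole rule of thumb t_r ~= 0.35 / BW.
# Assumption: the channel behaves like a first-order low-pass filter.

def rise_time_s(bandwidth_hz: float) -> float:
    """10-90% rise time of a single-pole channel with the given bandwidth."""
    return 0.35 / bandwidth_hz

bw = 100e6               # a nominal 100 MHz channel
t_r = rise_time_s(bw)    # roughly 3.5 ns edges
symbol = 1 / 125e6       # 8 ns symbol time at 125 Mbaud
```

With 3.5 ns edges on an 8 ns symbol, nearly half of every symbol is spent in transition, which is one way of seeing why the received waveform looks rounded rather than square.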
Well, in the sense that any waveform can be represented as the sum of sine waves...
What you would get instead of a square wave would be the sum of the component sine waves in the frequency range transportable along the wire. That is not going to be a sine wave unless you were using a very low bandwidth transport that was acting as a filter.
For most communications systems the signal at the far end is usually much closer to sine than square. That comes out naturally if you want to use most of the available bandwidth.
A 100MHz sine will go through a 100MHz cable and come out looking like a sine at the other end. If you want it to look more square, the next harmonic is 300MHz, and even with that, it will still not look very square.
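To make the harmonic argument concrete, here is a small sketch summing the Fourier series of a square wave: with only the fundamental you get a pure sine (peak 4/π ≈ 1.27), and adding the 3rd harmonic only begins to flatten the top.

```python
import math

def square_approx(t: float, f0: float, n_odd_harmonics: int) -> float:
    """Partial Fourier series of a unit square wave:
    (4/pi) * sum over k of sin(2*pi*(2k-1)*f0*t) / (2k-1)."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k - 1) * f0 * t) / (2 * k - 1)
        for k in range(1, n_odd_harmonics + 1)
    )

f0 = 100e6           # 100 MHz fundamental
t = 1 / (4 * f0)     # quarter period, where the ideal square wave is +1

fundamental_only = square_approx(t, f0, 1)   # 4/pi ~= 1.273: a pure sine
with_3rd = square_approx(t, f0, 2)           # (4/pi)*(2/3) ~= 0.849: still far from square
```

Even with the 300 MHz component included, the mid-symbol value is still well off the ideal level of 1.0, which is the point above: two harmonics do not make a square wave.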
Moreover, for UTP systems, keeping the harmonics down is an important part of keeping RFI within limits. Always remember that the signal is in the sidebands, not in the carrier.
"Charles Turner" wrote in message news: snipped-for-privacy@bt.com...
For some reason it's difficult to ferret this really basic information out of the IEEE 802.3 standard. I mean, you'd think there would be a handy table showing the physical layer signaling for all the different Ethernet PMD solutions, wouldn't you?
Anyway, the popular Cat 5 solution for 100 Mb/s Ethernet is straightforward 100BASE-TX. It uses a single twisted pair in each direction, and runs at 125 Mbaud (100 Mb/s x 5/4), because it uses 4B/5B coding and NRZI signaling.
The waveform itself is not really square wave, but it does have sharp edges. However, since there's no fancy symbol coding going on here at all (unlike in the faster Ethernets over copper), I don't think it matters all that much if the edges are rounded off in transit. All you need to detect is 1 or 0, from the NRZI waveform.
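For illustration, here is a sketch of the 4B/5B and NRZI stages. The code-group table is reproduced from my recollection of the FDDI/100BASE-X standard, so check it against 802.3 before trusting it; 100BASE-TX also applies MLT-3 on the wire, which is omitted here.

```python
# 4B/5B data code groups as in the FDDI / 100BASE-X standard (from memory --
# verify against IEEE 802.3 clause 24 before relying on this table).
FOUR_B_FIVE_B = [
    "11110", "01001", "10100", "10101",  # data 0x0 - 0x3
    "01010", "01011", "01110", "01111",  # data 0x4 - 0x7
    "10010", "10011", "10110", "10111",  # data 0x8 - 0xB
    "11010", "11011", "11100", "11101",  # data 0xC - 0xF
]

def encode_4b5b(nibbles: list[int]) -> str:
    """Map each 4-bit value to its 5-bit code group: 100 Mb/s -> 125 Mbaud."""
    return "".join(FOUR_B_FIVE_B[n] for n in nibbles)

def nrzi(bits: str) -> list[int]:
    """NRZI: a 1 toggles the line level, a 0 leaves it unchanged."""
    level, out = 0, []
    for b in bits:
        if b == "1":
            level ^= 1
        out.append(level)
    return out

line = nrzi(encode_4b5b([0x5, 0x0]))  # ten line symbols for two data nibbles
```

The 5/4 expansion is exactly where the 125 Mbaud figure comes from, and the receiver only has to distinguish "toggled" from "didn't toggle" per symbol, which is why rounded edges are tolerable.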
So, you shouldn't need a lot more than 125-150 MHz of bandwidth end to end, to get a good signal at the receiver.
If fancier coding is used, like AM, PSK, or QAM, distortion of the symbols becomes more of a factor. In those cases, you save on bandwidth, but at the expense of needing typically better signal to noise ratios.
I presume with a scrambler to ensure enough transitions for clock recovery.
I still remember all the problems caused by using NRZI for 800 BPI tapes, and how much more reliable PE 1600 and GCR 6250 are.
800 BPI tapes depend on odd parity to guarantee at least one transition per character, but that depends on the head being aligned exactly right. PE and GCR clock each track separately, so are much less sensitive to alignment (azimuth).
Also, there are a number of discussions about some digital system not being modulated, and therefore the interface device not being a modem, though usually it is one. DSL and cable definitely use modems, and I would even argue that 10baseT is modulated (synchronous phase modulation). The only one I might say isn't modulated is NRZI, which looks pretty much like the original signal, at least without a scrambler.
No. The 4B/5B block encoder ensures sufficient transition density.
Really? 10BASE-T uses the same Manchester encoding as in the original coaxial Ethernet. It's hard (although not impossible) to conceptualize that as "modulation".
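A quick sketch of that Manchester encoding, using the 802.3 convention that a 1 is a low-to-high mid-bit transition and a 0 is high-to-low:

```python
def manchester(bits: str) -> list[int]:
    """10BASE-T Manchester: each bit becomes two half-bit levels with a
    guaranteed mid-bit transition (802.3 convention: 1 -> low,high; 0 -> high,low)."""
    out = []
    for b in bits:
        out += [0, 1] if b == "1" else [1, 0]
    return out

# A run of identical bits comes out as a steady square wave at the bit rate
# (10 MHz for 10 Mb/s), which is why it is tempting to describe the scheme
# as one cycle of a carrier, phase-shifted 0 or 180 degrees per bit.
wave = manchester("1111")
```

Whether that counts as "modulation" is exactly the argument above: the waveform can be read either as a self-clocking line code or as phase modulation of a bit-rate carrier.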
--
Rich Seifert
Networks and Communications Consulting
21885 Bear Creek Way, Los Gatos, CA 95033
(408) 395-5700 / (408) 228-0803 FAX
Ah, thanks. I think I knew that at some point in the past, now that you mention it.
(Again, too bad this info isn't easily found in 802.3, but I guess it's hard to make everything "easy to find" when there's so much info to present. And besides, what would "consultants" have to do?)
So MLT-3 would show a string of consecutive transitions as one cycle for every four transitions. If the transitions are represented smoothly, as you said, that should create a worst-case bandwidth requirement of not much more than 125 Mbaud / 4 = 31.25 MHz. Somewhat more, actually, but the extra spectrum can probably be ignored and still allow robust decoding.
Like most line signaling codes, 4B/5B strives to eliminate any sort of DC bias, so I'd expect a spectrum analyzer to show pretty close to 31.25 MHz, even if the actual data stream has long strings of zeroes, for instance.
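A sketch of MLT-3 to show the "one cycle per four transitions" arithmetic (hypothetical helper names; a 1 steps the level through the cycle 0, +1, 0, -1, while a 0 holds the current level):

```python
def mlt3(bits: str) -> list[int]:
    """MLT-3 line code: a 1 advances the output through the repeating cycle
    0, +1, 0, -1; a 0 holds the current level. Four consecutive 1s make one
    full cycle, so a steady stream of 1s at 125 Mbaud has a fundamental of
    125 / 4 = 31.25 MHz."""
    cycle = [0, 1, 0, -1]
    idx, out = 0, []
    for b in bits:
        if b == "1":
            idx = (idx + 1) % 4
        out.append(cycle[idx])
    return out

wave = mlt3("1" * 8)  # two full cycles of the three-level waveform
```

A run of 0s freezes the level entirely, which is why the real spectrum spreads out below 31.25 MHz rather than sitting on a single peak.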
Much easier if you call it synchronous phase modulation, one cycle at either 0 or 180 degree phase shift for each bit.
As for NRZI, it is interesting to describe the signal as 4B/5B coded and then NRZI. In magnetic tape terms, at least, the encoding is not separated from the modulation: NRZI tapes write the data bits directly, as a magnetic transition for 1 and no transition for 0. GCR is just GCR, not GCR coding followed by NRZI. The distinction matters more for tapes, where NRZI uses the nine tracks together to supply the clock (which was the cause of the reliability problems).
Not pointless. It allows cramming more bit rate into the fiber, given the limitations we have today on individual driver speeds.
Maybe AM per se isn't the best to use, but frequency division is certainly used in fiber. You can change the light frequency to create different channels in one fiber, then use NRZI in each of them, creating a number of parallel streams. They call it WDM (wavelength-division multiplexing).
You could use the parallel channels as a single, parallel interface. Just as you do with copper. In principle, if you also change the amplitude of the light, you'd end up with something much like QAM over fiber.
I would expect fairly uniform (on a linear scale) bandwidth usage up to around 31.25MHz, falling off fairly quickly after that.
If the 11111 code point were used in the 4B/5B code, it could produce a 31.25MHz peak, but I will guess that it isn't. Most likely 00000 isn't used either, so the low-frequency part of the spectrum should be small as well. Remember that the sidebands are where the useful signal is.
DC won't go through the transformer for UTP signals.
For fiber, since you can't have negative light, there will be a DC (in amplitude) component to the signal.
The 31.25 MHz "peak" incorporates the sidebands. It's about as wide as the signal should appear in a spectrum analyzer.
Essentially, you calculate the highest-frequency sine wave you're expecting to see, and then figure there will be a whole series of other sinusoids of lower frequency. In practice, in order to carry useful information, the MLT-3 code cannot constantly represent a series of consecutive transitions. There will instead be many cases where no transition occurs, and the waveform created by MLT-3 will require less bandwidth to transfer. So over time, if useful information is being transmitted, you'll see a continuum of frequencies used by the signal, up to something a bit higher than 31.25 MHz.
Hmmm. As long as we only do color and intensity detecting for light over fiber, what you say is true. Intensity is a function of sine squared + cosine squared, hence it has to always be non-negative.
But there's no reason in principle why a controllable, coherent light source through fiber can't be treated exactly the same as any other EM wave, since that's what it is. Right? Just a higher frequency version of RF.
I fear that you are getting bogged down in a technical nit. My (limited) experience with CobraNet is that it does not play well with other traffic on the same network. If you have CobraNet running on your regular office or plant LAN, you end up with all kinds of unpredictable and irregular latencies in your audio stream. Since you only need something like an 18 millisecond delay for the Haas effect to kick in, you don't have much tolerance for delay and jitter. What is your application, and what is the problem driving your question?
I think maybe I have, but you haven't grasped mine.
It depends on the transmitting and receiving hardware, as it's (most importantly) the particular receiver you are using that'll decode your original signal (or fail to do so).
In order to make it work with _any_ transmitter/receiver combination, you need your cable to adhere to CAT5 transmission standards, and use a CAT5 tester to ensure compliance. [Or use CAT5 parts and good workmanship and assume compliance.] Note that CAT5 doesn't just talk about bandwidth, it also covers impedance, termination, and crosstalk.
We need to transmit CobraNet around a fairly large site, with transmission distances of 5-10 km.
There is no CAT5 installed anywhere; however, there are plenty of twisted pairs, some screened, some unscreened. So if I know the bandwidth requirement, it is fairly straightforward to see which copper cables, if any, will work.