How to do BERT testing on an Ethernet?

Folks,

I need to perform the equivalent of a bit error rate test on an Ethernet segment running through a synchronous serial satellite modem. In the old days of synch serial data comm one would simply use a synch serial BERT device, generate a pattern, and check for pattern sync, slips and BER figures.

Now I need to make the same type of measurements on an Ethernet. I have found software which essentially pings over an extended period of time and reports packet loss percentage and such, but this does not translate to a true bit error rate.

Can anyone suggest a method, device or software to achieve this goal?

Thanks in advance,

Mitch


[I've had this discussion many times over the last few years.] Unless you are interested in knowing the margin of some underlying comms medium (e.g. satellite), BER on a packet based network doesn't really mean much. Your customers are interested in packet loss rates; BER is irrelevant.

Do you really need to make the same type of measurements on an Ethernet, or is it just a hangover from your telecoms / satellite background? I'd like to understand your reasoning here.

Assuming you really do want to measure BER... Your problem is that a single bit error is indistinguishable from a burst error - both will cause the packet to be dropped (due to a bad CRC) before it gets to your computer for analysis. This means you can't actually count bit errors, only lost packets.

It's sometimes a good assumption that if the BER is very low, the bit errors don't occur in bursts, so we can guess that there's one bit error for each lost packet. But that depends on a lot of factors.
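To make the arithmetic concrete (my own rough sketch, with made-up figures): under the one-bit-error-per-lost-frame assumption, the BER estimate is just lost frames divided by total bits sent.

    # Rough BER estimate from packet loss, assuming exactly one bit error
    # per lost frame (only plausible when errors are rare and not bursty).
    frame_bits  = 1518 * 8       # assumed frame size: max untagged Ethernet frame, in bits
    frames_sent = 1_000_000      # hypothetical test figures
    frames_lost = 37

    loss_rate     = frames_lost / frames_sent
    estimated_ber = frames_lost / (frames_sent * frame_bits)

    print(f"packet loss rate ~ {loss_rate:.2e}")
    print(f"estimated BER    ~ {estimated_ber:.2e}")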

Real test equipment (that you buy, e.g. an Agilent N2X box) can put a PRBS in the data part of the packets it sends out, and can program the MAC *not* to discard packets with bad CRCs. It can then analyse the received PRBS pattern to count the number of bit errors in the bad frames. Of course, if there is any intervening Ethernet equipment (e.g. a switch), the bad frames will be dropped before they get to the test equipment, so only the last segment can be measured in this way.
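If you want to roll your own payload pattern in software, the idea looks something like this (a sketch only, not what the Agilent box actually runs - real instruments typically use PRBS-23 or PRBS-31 and do the comparison in hardware):

    # PRBS-7 generator (x^7 + x^6 + 1) and a bit-error counter for a payload.
    def prbs7_bits(seed=0x7F):
        """Yield the PRBS-7 bit stream, one bit at a time."""
        state = seed & 0x7F
        while True:
            new_bit = ((state >> 6) ^ (state >> 5)) & 1
            state = ((state << 1) | new_bit) & 0x7F
            yield new_bit

    def count_bit_errors(received_bits, seed=0x7F):
        """Compare a received bit sequence against the expected PRBS-7 stream."""
        expected = prbs7_bits(seed)
        return sum(1 for bit in received_bits if bit != next(expected))

The hard part on commodity hardware is still getting the errored frames delivered to you at all, rather than silently dropped by the MAC.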

The more frames you lose, the better the accuracy of your results, so crank up the packet rate you are sending. But please don't take it all the way to 100%, as switches (etc.) will occasionally drop packets when heavily loaded. I don't think that's what you are trying to measure.
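To put a number on "better accuracy" (my own rule of thumb, just standard counting statistics): the relative uncertainty of the loss estimate shrinks roughly as one over the square root of the number of lost frames you actually observe.

    import math

    # Relative uncertainty of a loss-rate estimate, assuming independent
    # (Poisson-like) losses: roughly 1/sqrt(number of lost frames observed).
    for frames_lost in (10, 100, 10_000):
        rel = 1.0 / math.sqrt(frames_lost)
        print(f"{frames_lost:>6} lost frames -> ~{rel * 100:.0f}% relative error")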

Regards, Allan


Mitch wrote in part:

If it's ethernet, there is a CRC at the end of each frame. The ethernet driver might drop these frames silently, or report them to the OS.
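On Linux you can usually see what the driver counted without special tools; a quick sketch, assuming a hypothetical interface name of eth0 and a driver that maintains the standard counters:

    from pathlib import Path

    # Read the kernel's per-interface statistics (Linux). rx_crc_errors counts
    # frames received with a bad FCS, when the driver reports them at all.
    def read_counter(iface, name):
        return int(Path(f"/sys/class/net/{iface}/statistics/{name}").read_text())

    for name in ("rx_packets", "rx_errors", "rx_crc_errors"):
        print(name, read_counter("eth0", name))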

If it's the sat BER you want, you'll have to ask the modem mfr for diagnostics. The ethernet segment should be error-free unless you've done something newbish like crimping plugs to split a pair.

Yes, because software controlled the modem directly.

-- Robert


All of the above is correct. In addition, remember that only 10 Mb/s Ethernet uses true bit-by-bit transmission; all of the 100-and-higher Mb/s variants perform block coding prior to serialization. A change of one code-bit in the block-coded transmission can imply a change in multiple bits of the decoded data; e.g., a change of the 5B pattern 0b11110 to 0b11010 (a single code-bit error in the third bit) changes the decoded data from 0x0 (0b0000) to 0xC (0b1100), a two-bit change. Thus, even counting data errors (as suggested by the use of a PRBS generator and a bit-by-bit comparison) will not tell you the true BER of the channel.
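To put that example in concrete terms (a quick illustrative sketch using the standard 100BASE-X 4B/5B data code table):

    # 4B/5B data code table (100BASE-TX / FDDI): data nibble -> 5-bit code group.
    CODE_4B5B = {
        0x0: 0b11110, 0x1: 0b01001, 0x2: 0b10100, 0x3: 0b10101,
        0x4: 0b01010, 0x5: 0b01011, 0x6: 0b01110, 0x7: 0b01111,
        0x8: 0b10010, 0x9: 0b10011, 0xA: 0b10110, 0xB: 0b10111,
        0xC: 0b11010, 0xD: 0b11011, 0xE: 0b11100, 0xF: 0b11101,
    }
    DECODE = {code: data for data, code in CODE_4B5B.items()}

    sent    = CODE_4B5B[0x0]      # 0b11110
    errored = sent ^ 0b00100      # flip the third code bit -> 0b11010
    print(f"{DECODE[sent]:#x} decodes as {DECODE[errored]:#x} after one code-bit error")
    # one code-bit error, but two data bits changed (0b0000 -> 0b1100)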

--
Rich Seifert                       Networks and Communications Consulting
21885 Bear Creek Way               (408) 395-5700
Los Gatos, CA 95033                (408) 228-0803 FAX

Send replies to: usenet at richseifert dot com

