In article , Colin Cant wrote:
:I've got the following problem which I have to simulate. We've got a
:laser-link connection between two buildings. This link often breaks down
:in winter in foggy weather. Now I want to try the whole thing in a lab,
:taking two Cisco Catalyst 3500 XLs and creating an EtherChannel. One link
:is connected through a VDSL modem link (for backup) and the other through
:my simulated laser.
:I thought about taking a crossover cable and putting a potentiometer
:between the TX/RX circuits to simulate the link quality dropping and
:rising. Now I'm not sure whether this will work out or whether I'll break
:one of my Ciscos.
I don't -think- it would break the device, but I'm not much of a hardware person so I don't know.
If you put a variable link between the TX and RX, then it seems to me that you would be varying the potential Near End CrossTalk (NEXT). NEXT is a problem when there are signals on the transmit and receive pairs simultaneously: if there aren't, then the non-transmitting end will get a bit of garbage that the NIC will just ignore, or else the NIC will receive an "echo" of the packet, which for the most part it would also ignore because it wouldn't have the right destination MAC. You might generate an extra packet in response to a UDP broadcast or an ICMP echo to the subnet broadcast address, but that packet would go out on the regular stream, which would not be a problem.
I would tend to think that to simulate fog properly, you would want to put in something that causes bits to be dropped. I'm not sure that even introducing noise into the circuit would be good enough; that would depend on exactly how bits are encoded over the laser. If they are encoded with something like phase modulation with the standard trellis encoding for error correction (e.g., 8/5 encoding), then introducing noise into the ethernet circuit would not be an equivalent test unless you were to find that the fog had the effect of shifting the phase encoding of the light instead of just absorbing the light.
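If you just want a rough feel for how fog-driven attenuation turns into dropped bits, a toy model is easy to script. A minimal sketch, where the attenuation coefficient, link margin, and error-probability curve are all made-up illustrative numbers, not calibrated to any real free-space-optics gear:

```python
import random

def fog_attenuation_db(visibility_km, length_km):
    # Rough visibility-based estimate: specific attenuation (dB/km)
    # grows as visibility shrinks.  The coefficient 13.0 is purely
    # illustrative, not a calibrated FSO figure.
    return (13.0 / visibility_km) * length_km

def bit_error_prob(margin_db, atten_db):
    # Crude assumption: while attenuation stays inside the link
    # margin the link is essentially error-free; beyond that the
    # error probability climbs a decade per extra dB, capped at 0.5.
    excess_db = atten_db - margin_db
    if excess_db <= 0:
        return 1e-9
    return min(0.5, 1e-9 * 10.0 ** excess_db)

def transmit(bits, p_err, rng):
    # Binary symmetric channel: flip each bit with probability p_err.
    return [b ^ 1 if rng.random() < p_err else b for b in bits]

rng = random.Random(42)
bits = [rng.randint(0, 1) for _ in range(10000)]
atten = fog_attenuation_db(visibility_km=0.5, length_km=1.0)  # thick fog
p = bit_error_prob(margin_db=20.0, atten_db=atten)
received = transmit(bits, p, rng)
errors = sum(a != b for a, b in zip(bits, received))
print(atten, p, errors)
```

The point of the sketch is just that the error rate stays negligible until the fog eats the whole link margin, and then degrades very fast, which matches the "works fine until it suddenly doesn't" behaviour people report with laser links in fog.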
I am not sure what a good way to simulate dropped bits would be.
*Maybe* something like a variable capacitor, with the idea being that if the signal voltage is below some threshold set by the capacitance, the current would be buffered into the capacitor instead of passing through unchanged. I seem to recall [probably incorrectly] that pushing a signal through a capacitor tends to round the signal: perhaps it would round it enough to alter the signal phases, thus corrupting bits.
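For what it's worth, the edge-rounding effect is easy to see numerically if you treat the capacitor as part of a first-order RC low-pass. A minimal sketch, with a square wave standing in for the line signal and all component values invented for illustration:

```python
def rc_lowpass(samples, dt, rc):
    # First-order RC low-pass, discretized as
    #   y[n] = y[n-1] + alpha * (x[n] - y[n-1]),  alpha = dt / (rc + dt)
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A 10 MHz square wave sampled at 1 ns steps (50 samples per half-period).
dt = 1e-9
square = [1.0 if (i // 50) % 2 == 0 else 0.0 for i in range(400)]

gentle = rc_lowpass(square, dt, rc=2e-9)    # small RC: edges mostly survive
heavy = rc_lowpass(square, dt, rc=100e-9)   # large RC: edges badly smeared
print(max(gentle), max(heavy))
```

With the small RC the output still swings nearly rail to rail; with the large RC it never gets anywhere near full amplitude before the next transition, which is exactly the sort of rounding that would start corrupting bits.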
My suspicion is thus that a better simulation would involve capacitors and inductors rather than a varistor bridging between the receive and transmit. Or maybe just use a transistor to block the current from flowing...
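Another route entirely, if you can slip a Linux box configured as a transparent bridge into the link path, is to let software drop and corrupt packets for you instead of abusing the electrical layer. A sketch using the kernel's netem queueing discipline (interface names and percentages are placeholders, and this needs root):

```shell
# Bridge two NICs so the box sits invisibly in the link path
# (eth0/eth1 are placeholder interface names).
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
ip link set br0 up

# Light "fog": 5% packet loss with 25% burst correlation,
# plus occasional corruption.  Numbers are illustrative only.
tc qdisc add dev eth0 root netem loss 5% 25% corrupt 1%

# Thicken the fog on the fly:
tc qdisc change dev eth0 root netem loss 30% 50% corrupt 5%

# Clear the weather:
tc qdisc del dev eth0 root netem
```

This only degrades at the packet level rather than the bit level, so it won't exercise the PHY the way real fog would, but it is repeatable and there is no chance of frying a switch port.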