Sorry for abusing my membership to this forum for this question.
We are building an embedded application that must retrieve data very fast. The choice is between holding the data locally and fetching it from a central server (pool) over the network.
In evaluating the network option, I was hoping the people here could help me with the expected network latency on a Gb network through a switch. My gut feeling is that as the load increases, the switch has to interleave traffic to the different nodes more, and this will result in higher latency.
The other built-in assumption here is "only 1 switch" - many campus network designs would have your traffic crossing several.
The 1st part of the delay is intrinsic to the distance involved - the rule of thumb is around 500 µs per 100 km of fibre (which probably is not much if your server is "local" - but "local" sometimes means somewhere in the same country).
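That rule of thumb falls out of the speed of light in fibre, which is roughly two thirds of c (about 200,000 km/s). A quick sketch of the arithmetic:

```python
# Light in fibre travels at roughly 2/3 of c, i.e. about 200,000 km/s.
SPEED_IN_FIBRE_KM_PER_S = 200_000

def propagation_delay_us(distance_km: float) -> float:
    """One-way propagation delay over fibre, in microseconds."""
    return distance_km / SPEED_IN_FIBRE_KM_PER_S * 1e6

print(propagation_delay_us(100))   # 100 km of fibre -> ~500 us one way
```

Note this is one-way; a request/response pair pays it twice.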
Any store-and-forward device in the link will add some latency - the packet gets received, checked, and then sent on the next link. If you go via WAN links there may be many store-and-forward "hops" embedded in the network path.
Since this is a packet network, the minimum added latency per hop depends on the packet size used in the transfer - the whole packet has to be clocked in before it can be sent on.
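The per-hop floor is just the serialization time of the packet on the link. A minimal sketch (the 1500-byte figure is the standard Ethernet MTU, used here as an illustrative example):

```python
def serialization_delay_us(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the wire, in microseconds.
    A store-and-forward hop pays this once on receive before it can resend."""
    return packet_bytes * 8 / link_bps * 1e6

# A full 1500-byte Ethernet frame on a 1 Gb/s link:
print(serialization_delay_us(1500, 1e9))   # ~12 us per store-and-forward hop
```

So even an otherwise idle Gb path adds on the order of 12 µs per store-and-forward hop for full-size frames; smaller packets lower this floor proportionally.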
In real life the switch will add more latency for internal processing - on the order of tens of µs.
And as Rick stated, as the load on the chain of links rises to a sizeable fraction of 100%, you start to get queuing delays whenever your packet competes with other traffic streams.
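The way queuing delay blows up near full utilization can be seen with a simple M/M/1 queue model - an assumption on my part, since real switch traffic is burstier, but it shows the shape of the curve:

```python
def mm1_queueing_delay_us(service_us: float, utilization: float) -> float:
    """Mean time spent waiting in queue for an M/M/1 queue:
    W_q = rho / (1 - rho) * service time, where rho is link utilization (0..1)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1 - utilization) * service_us

# Waiting time added on top of a 12 us per-packet service time:
for rho in (0.1, 0.5, 0.9):
    print(f"utilization {rho:.0%}: +{mm1_queueing_delay_us(12.0, rho):.1f} us")
```

At 10% load the added delay is negligible; at 90% it is roughly nine service times on top of every packet - which matches the intuition that latency degrades sharply as links approach saturation.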
However - it is worth remembering that your app and server will see the same kind of effect as they send/receive traffic through the network card.
Maybe the solution is to design the application to stream efficiently, so that latency is hidden behind the data flow rather than paid on every request.
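To put rough numbers on that suggestion, here is a toy model (my own simplified assumption, not a measurement) comparing per-request round trips against a pipelined stream where requests are issued without waiting for each reply:

```python
def total_time_us(n_requests: int, rtt_us: float,
                  per_req_service_us: float, pipelined: bool) -> float:
    """Toy model: total time to complete n_requests.
    Non-pipelined: every request pays a full round trip.
    Pipelined: one RTT up front, then replies stream back-to-back."""
    if pipelined:
        return rtt_us + n_requests * per_req_service_us
    return n_requests * (rtt_us + per_req_service_us)

# 1000 lookups, 200 us round trip, 12 us service per reply:
print(total_time_us(1000, 200, 12, pipelined=False))  # request/response each time
print(total_time_us(1000, 200, 12, pipelined=True))   # streamed
```

In this toy example the streamed version is over an order of magnitude faster, because the round-trip latency is paid once instead of a thousand times - which is the whole argument for designing the protocol around streaming if you go the network route.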