~ Thanks Aaron! Just a couple questions on how that actually works in
~ the real world.
Ah, well things get complicated in the REAL world. My discussion below is a rough APPROXIMATION with much implicit handwaving passim.
~ If one link is T-1 (1.544Mbps) and the other link is .5Mbps, I
~ theoretically have about 2Mbps of bandwidth. If I used IP CEF and
~ equal-cost routes, would I be able to download something at 2Mbps?
First of all, I assume that you have control over the routing tables at each end of your link pair.
[router 1]s0---link 1 (1.5Mbps)---s0[router 2]
          f0____link 2 (0.5Mbps)____f0
So let's say that you have equal-cost routes on each side - i.e. r1 has:

ip route 0.0.0.0 0.0.0.0 s0
ip route 0.0.0.0 0.0.0.0 f0

and likewise on r2.
So if you configure CEF per [source/]destination, then half of your source/dest pairs will use link 1 and half will use link 2.
Will this give you the ability to download something at 2Mbps? No; each source/dest pair will be able to use either at most 1.5Mbps or at most 0.5Mbps. However, with two concurrently active connections, one could use 0.5Mbps and the other 1.5Mbps, for an AGGREGATE of 2Mbps.
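To make the per-destination case concrete, here is a toy Python sketch of the idea - the bucket-by-last-octet "hash" and the link names are purely illustrative, not what CEF actually computes. Each destination gets pinned to one link, so a single flow is capped by that link's rate, while two flows that hash differently can use both links at once:

```python
# Toy model of per-destination load sharing. The real CEF hash is more
# involved; bucketing on the IP's last octet is just for illustration.

LINKS = {"s0": 1.5, "f0": 0.5}  # Mbps, matching the diagram above

def pick_link(dest_ip: str) -> str:
    """Pin a destination to one link (crude stand-in for CEF's hash)."""
    last_octet = int(dest_ip.rsplit(".", 1)[1])
    names = sorted(LINKS)  # ["f0", "s0"]
    return names[last_octet % len(names)]

# One flow is capped by whichever link it lands on...
flow_a = LINKS[pick_link("10.0.0.1")]
# ...but two flows that land on different links total 2Mbps in aggregate.
flow_b = LINKS[pick_link("10.0.0.2")]
print(flow_a, flow_b, flow_a + flow_b)
```

Note that no single flow ever sees more than its own link's rate; only the aggregate across flows reaches 2Mbps.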
On the other hand, if your main interest is single-stream throughput, then this scheme would be worse than just having your default route use the 1.5Mbps link, as half the time your single stream will get 1.5Mbps and half the time 0.5Mbps - an average of 1Mbps, versus a guaranteed 1.5Mbps.
The alternative here is to do per-packet load balancing. Then your single stream will send one packet to the 0.5Mbps link, one to the 1.5Mbps link, one to the 0.5Mbps link, etc., with the result that you will be transmitting at 1Mbps (assuming equal-sized packets, among other assumptions). Again, worse than the single-route-via-link-1 scheme.
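The 1Mbps figure follows from the slow link setting the pace: under round-robin every link carries the same number of packets, so steady-state throughput is (number of links) x (slowest link's rate). A minimal sketch of that arithmetic (the function name is mine, not anything in IOS; equal-sized packets and no reordering effects assumed):

```python
# Per-packet round-robin over unequal links: each link carries an equal
# share of the packets, so the slowest link paces the whole bundle.

def round_robin_rate(rates_mbps):
    """Steady-state throughput (Mbps) of strict per-packet round-robin."""
    return len(rates_mbps) * min(rates_mbps)

print(round_robin_rate([1.5, 0.5]))  # 1.0 - worse than 1.5 on one link
```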
So you could try doing this:
ip route 0.0.0.0 0.0.0.0 s0
ip route 0.0.0.0 0.0.0.0 s0
ip route 0.0.0.0 0.0.0.0 s0
ip route 0.0.0.0 0.0.0.0 f0
Now you will switch only 1/4 of your packets out f0 and 3/4 out s0, with the result that THEORETICALLY your single stream might see 2Mbps of throughput.
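The 1/4-3/4 split is the same arithmetic generalized: per "round" of packets, link i needs (its packet share)/(its rate) units of time to send its share, and the round is paced by the busiest link. A quick sketch of just that arithmetic (function name mine; equal-sized packets assumed):

```python
# Weighted per-packet sharing: per round, link i sends weights[i]
# packets, taking weights[i]/rates[i] time units; the slowest-to-finish
# link paces the round.

def weighted_rate(rates_mbps, weights):
    """Steady-state throughput (Mbps) with packets split per `weights`."""
    round_time = max(w / r for r, w in zip(rates_mbps, weights))
    return sum(weights) / round_time

print(weighted_rate([1.5, 0.5], [3, 1]))  # 3:1 toward the fast link
print(weighted_rate([1.5, 0.5], [1, 1]))  # plain round-robin, for comparison
```

With the 3:1 weighting, both links finish their share at the same moment (3/1.5 = 1/0.5), which is exactly why the theoretical 2Mbps aggregate is reachable.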
However, here is where the real world, where you encounter things like TCP implementations that can't ACK out of order packets, starts to encroach.
Bottom line is, it's almost surely not worth it to try to spread load across links with a 3:1 speed difference (esp. if they have a significant latency variance). Except as a learning experience.
~ Also, if one of the links were to drop, is it smart enough to stop load
~ sharing on the down link, and just use the link that is up?
Sure, assuming that your routing scheme is smart enough to know whether an interface is down or up. That's inherent in your T1 link (probably), but your network path thru your Ethernet might go down without the Ethernet interface itself going down, so static routes probably wouldn't do the trick; you'd need to mix in fancy stuff like an IGP or "Reliable Static Routing Backup Using Object Tracking".
Have fun,
Aaron