It isn't really hyped; think about why.
1- Hardware and software are generally not cost-effective.
2- The status quo works to the telecoms' benefit.
3- It's cheaper and easier in many cases to just buy more bandwidth from one source.
4- There are technical issues (e.g., mixing synchronous with asynchronous protocols).
5- There are issues with static IPs.
6- It doesn't help if all of the bandwidth really comes from one "master" provider, or if you have already maxed out the available bandwidth (a quick numeric sketch follows this list).
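To put a rough number on point 6: no matter how many links you bond, the usable throughput is capped by the single upstream pipe feeding them. A back-of-the-envelope sketch in Python (the link speeds are made-up example values):

def aggregate_throughput(link_caps_mbps, upstream_cap_mbps):
    # You can never pull more than the sum of the links,
    # and never more than the single upstream pipe behind them.
    return min(sum(link_caps_mbps), upstream_cap_mbps)

# Two 1.5 Mbps T1s that both ride the same 1.5 Mbps upstream: no gain.
print(aggregate_throughput([1.5, 1.5], 1.5))    # -> 1.5

# The same two T1s from independent providers (upstream effectively unlimited).
print(aggregate_throughput([1.5, 1.5], 100.0))  # -> 3.0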
- Some good applications are these:
Combining several T1s or fractional T1s from different providers on different backbones for robustness.
Combining many different low-bandwidth providers into one connection to increase bandwidth and/or provide a more stable connection (see the download-splitting sketch after this list).
Combining different wireless sources (satellite and line-of-sight "super wifi") for better bandwidth and stability.
Picking up unused bandwidth from other local users by connecting through them at their convenience (sub-leasing bandwidth).
Irritating your local IT person, as if she doesn't have enough to do.
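As a rough illustration of the "many low-bandwidth providers" idea above, here is a minimal sketch that splits one HTTP download into byte ranges and pulls each range over a different local interface. The interface addresses, host, and path are made-up placeholders, and a real bonding setup works at the link or packet level rather than per-download, but it shows the basic divide-and-recombine idea:

import http.client
from concurrent.futures import ThreadPoolExecutor

# Hypothetical local addresses, one per upstream provider (DSL, cable, etc.).
SOURCE_IPS = ["192.168.1.10", "192.168.2.10"]
HOST, PATH = "example.com", "/big-file.bin"

def fetch_range(source_ip, start, end):
    # Bind the outgoing connection to a specific local address so this
    # chunk travels over that provider's link.
    conn = http.client.HTTPConnection(HOST, source_address=(source_ip, 0))
    conn.request("GET", PATH, headers={"Range": f"bytes={start}-{end}"})
    data = conn.getresponse().read()
    conn.close()
    return data

def download(total_size):
    # Split the file into one contiguous range per link and fetch them in parallel.
    chunk = total_size // len(SOURCE_IPS)
    with ThreadPoolExecutor(max_workers=len(SOURCE_IPS)) as pool:
        jobs = []
        for i, ip in enumerate(SOURCE_IPS):
            start = i * chunk
            end = total_size - 1 if i == len(SOURCE_IPS) - 1 else start + chunk - 1
            jobs.append(pool.submit(fetch_range, ip, start, end))
        # Reassemble the chunks in their original order.
        return b"".join(job.result() for job in jobs)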
I think Bandwidth Aggregation is a better term than Link Aggregation.
It wouldn't make sense, for instance, to add a 56k modem connection to a DSL line unless you absolutely had to stay connected to some remote site at all times.
It's late and I am rambling, but I have a little more to add.
People complain that OS X doesn't have the same easy-to-use tools as OS X Server. For example, creating, maintaining, and fixing RAIDs is a lot harder without OS X Server; many forms of repair are only available through Terminal. This is likely why bandwidth aggregation is not a feature of OS X for most users. As Vista looms on the horizon, it will likely become a more commonly used feature as high-bandwidth digital media content delivery takes off. It may even make its way into the regular consumer version of OS X.