The hybrid solution (a small amount of dedicated VRAM plus allocatable system RAM) has only been around since PCI-Express.
TurboCache is really just a marketing term for a 'needed to do it at some point' advance in VRAM management, dressed up to get the customer to go 'oooh, shiny' at it anyway. Video cards have been able to leverage system memory and access it directly since AGP.
"TurboCache" is only new-ish in the sense that it is 3 years old, and the only thing it added was the ability to feed the system memory request directly into the GPU rather than VRAM, which does improve performance for games that have a lot of textures that are used pretty regularly that wind up stuck in system memory.
Under PCI, the problem was that the driver had to manage the textures, and it wasn't pretty. The GPU had no way to control what textures were in VRAM, and that led to some rather inefficient uses of VRAM in many cases (since the OGL/DX driver only had a small piece of the big picture to work with for optimization). AGP's solution was to add a pipe that let the GPU talk through the controller out to system memory. The GPU would then fetch textures from system memory itself, rather than the driver feeding them to the card. The drawback is that you are still using VRAM for textures that are rarely used.
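To make the PCI-era problem concrete, here's a minimal sketch of what driver-managed residency looks like. Everything in it (texture_t, driver_make_resident, the LRU counter) is a hypothetical illustration, not any real driver's API; the point is just that the driver alone decides what lives in VRAM, evicting by a crude heuristic with no view of what the GPU will actually sample next:

```c
/* Sketch of PCI-era, driver-managed texture residency.
 * All names are invented for illustration. */
#include <stddef.h>
#include <stdint.h>

typedef struct texture {
    uint32_t id;
    size_t   size;        /* bytes */
    int      in_vram;     /* 1 if currently resident in VRAM */
    uint64_t last_used;   /* frame counter, for LRU eviction */
} texture_t;

#define MAX_TEXTURES 256
static texture_t textures[MAX_TEXTURES];
static size_t vram_capacity = 32u << 20;  /* e.g. a 32 MB card */
static size_t vram_used;
static uint64_t frame;

/* Evict the least-recently-used resident texture, if any. */
static void evict_one(void) {
    texture_t *victim = NULL;
    for (int i = 0; i < MAX_TEXTURES; i++) {
        texture_t *t = &textures[i];
        if (t->in_vram && (!victim || t->last_used < victim->last_used))
            victim = t;
    }
    if (victim) {
        victim->in_vram = 0;
        vram_used -= victim->size;
        /* a real driver would free the VRAM block here */
    }
}

/* Called before every draw that samples `t`. Under PCI the driver
 * must copy the texture into VRAM first; the GPU cannot reach out
 * and fetch it on its own. */
void driver_make_resident(texture_t *t) {
    t->last_used = ++frame;
    if (t->in_vram)
        return;
    /* May evict textures the GPU still wants -- the driver is
     * guessing. (Sketch ignores the texture-larger-than-VRAM case.) */
    while (vram_used + t->size > vram_capacity && vram_used > 0)
        evict_one();
    /* a real driver would DMA the texels over the PCI bus here */
    t->in_vram = 1;
    vram_used += t->size;
}
```

Under AGP the while-loop pressure eases because the GPU can pull textures across the aperture itself, but note that a fetched texture still lands in VRAM, which is the drawback the next paragraph gets at.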
"Turbocache" or whatever the hell you want to call it, makes it so that these rarely used textures don't hit VRAM, and instead are just fed from system memory into the GPU's pipelines as needed. That way you don't take nearly as large a hit on games that are hard to store in VRAM (or on low-end cards with little VRAM).
Apple does support it on cards that use this sort of VRAM management, mostly because you can't turn it off; it lives in the GPU's logic itself, not the driver. The performance woes of GPUs on Apple systems don't lie in a 'lack' of feature support (Apple actually does pretty well at making sure nVidia and ATI extensions make it into their OGL implementation, and GPU features that do require driver support work)... but in how the OGL stack itself works, among other things. Apple just doesn't believe the marketing speak from nVidia or ATI should be plastered all over their webpages.