Please correct me if I'm wrong, but these accelerated tasks are still 2D. I would be highly surprised if there were any difference at all between the lowest- and highest-end graphics solutions on the market in terms of user experience, since no stream processors/CUDA cores, ROPs, TMUs, etc. are used.
You kinda need TMUs and ROPs to do the tricks Core Animation does. The content of a view gets written to a texture and then rasterized live by the GPU to produce the manipulated result. In fact, Core Animation has historically been built directly on top of OpenGL for that reason. We'll see if it migrates to Metal; I thought Sierra was supposed to do that, but I'm not finding confirmation online. Stream processors get used every time you apply a CIFilter to a CALayer for screen display: the filtering runs on the GPU rather than the CPU, so it can happen in real time instead of having to be processed entirely on the CPU. And if you do any sort of complex compositing, such as the frosted blur you get from Notification Center, that's a CIFilter more often than not.
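To make that concrete, here's a minimal sketch of handing a blur to the GPU by attaching a CIFilter to a layer. The view name and radius are placeholders I made up, and this particular property only works on macOS (iOS ignores a layer's filters/backgroundFilters):

```swift
import Cocoa
import CoreImage

// Minimal sketch: let the compositor's GPU pass do the blur by attaching a
// CIFilter to a layer. macOS only; CALayer filters are ignored on iOS.
let panel = NSView(frame: NSRect(x: 0, y: 0, width: 300, height: 200))
panel.wantsLayer = true                 // opt the view into a CALayer backing
panel.layerUsesCoreImageFilters = true  // required before CI filters apply to the view's layer

if let blur = CIFilter(name: "CIGaussianBlur") {
    blur.setValue(8.0, forKey: kCIInputRadiusKey)
    // backgroundFilters filters whatever is composited *behind* the layer,
    // which is the same basic mechanism as a frosted/translucent effect.
    panel.layer?.backgroundFilters = [blur]
}
```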
Can I build an app on macOS that avoids this stuff? Yeah, I can, since an NSView (AppKit's view) doesn't need to be backed by Core Animation, and Quartz predates a lot of this work that's been added to OS X over the years. But take even a simple app like Doo, which relies on animations: it's now doing those tricks via the GPU. Complicated apps like Safari build on it to give you smooth animations, like the zoom-out animation to see all your tabs, and more performant rendering.
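For contrast, a plain, non-layer-backed NSView can still draw purely through Quartz. A rough sketch (the class name and shapes are invented):

```swift
import Cocoa

// A plain NSView with wantsLayer left false: its drawing goes through
// Quartz/CoreGraphics on the CPU, and Core Animation never owns this view's
// content.
final class PlainQuartzView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        guard let ctx = NSGraphicsContext.current?.cgContext else { return }
        ctx.setFillColor(NSColor.blue.cgColor)
        ctx.fill(CGRect(x: 20, y: 20, width: 120, height: 80))
    }
}
```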
On iOS, UIKit is entirely built on top of CoreAnimation and OpenGL ES. So everything is backed by the GPU in some way there.
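You can see that in the API itself: UIView's layer is non-optional, and layerClass lets a subclass pick which CALayer variant backs it. A tiny sketch (class and property values are arbitrary):

```swift
import UIKit

// Every UIView comes with a CALayer; there is no non-layer-backed path on iOS.
final class TiledMapView: UIView {
    // A subclass can choose which CALayer subclass backs it.
    override class var layerClass: AnyClass { return CATiledLayer.self }
}

let badge = UIView(frame: CGRect(x: 0, y: 0, width: 40, height: 40))
badge.layer.cornerRadius = 20   // poking the backing layer directly
```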
What I can't tell you all that well is where things break down on the Mac between the accelerated and non-accelerated parts. If even one view is backed by a CALayer, the entire screen really should be using the GPU for final rasterization. If my memory isn't faulty, at this point all of the WindowServer's display is built on top of the GPU. Apps that don't use Core Animation themselves are given a single surface to draw to, which is then composited during the final pass with everything else. But if I use Core Animation (and a surprising number of bog-standard apps do), then those individual layers get rasterized by the CPU to surfaces, which are then treated as textures for the final pass. That's assuming CoreGraphics doesn't build command lists these days the way QuartzGL was supposed to years ago.
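A crude way to see that split in code: a layer's content callback runs on the CPU via CoreGraphics, and whatever it produces is what gets composited as a texture. A sketch, with the layer class and colors being arbitrary:

```swift
import Cocoa
import QuartzCore

// The layer's *content* is produced here on the CPU with CoreGraphics; Core
// Animation then treats the resulting backing store as a texture and
// composites it on the GPU during the final pass.
final class ContentLayer: CALayer {
    override func draw(in ctx: CGContext) {
        ctx.setFillColor(NSColor.orange.cgColor)
        ctx.fill(bounds)
    }
}

let layer = ContentLayer()
layer.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
layer.setNeedsDisplay()   // schedules draw(in:) on the CPU; compositing stays GPU-side
```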
I've worked on apps that explicitly use features in CoreAnimation so we could offload things that amounted to pixel blits onto the GPU and free up more CPU time for other parts of the UI. Things like this are becoming fairly common when working on performant user interfaces on the platform now. 10 years ago? Not really.
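The kind of thing I mean is moving already-rasterized content around: once a layer has contents, repositioning it is just the render server retransforming an existing texture, not a CPU redraw. A hedged sketch (names and values are invented, and the layer would need to live in a real layer tree to be visible):

```swift
import Cocoa
import QuartzCore

// Rough sketch of the "pixel blit" offload: the layer's pixels already exist,
// so moving it only asks the render server to retransform a texture on the
// GPU. The CPU never repaints anything here.
let sprite = CALayer()
sprite.backgroundColor = NSColor.red.cgColor
sprite.frame = CGRect(x: 0, y: 0, width: 64, height: 64)

CATransaction.begin()
CATransaction.setAnimationDuration(0.25)
sprite.position = CGPoint(x: 300, y: 120)   // animated by the render server; CPU time stays free
CATransaction.commit()
```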