Compressor, Motion & especially Logic Pro X and Final Cut Pro. Those last two alone dictate the trajectories of Apple's 'pro' hardware.
Pretty much anything Metal-based will run better on an Apple CPU than on an Intel one (without a discrete GPU) just because of how the CPU/GPU pipe is optimized. That's... well... most of those apps, plus, more and more, the Adobe apps.
The A Series is basically an HSA architecture, which is the key to the faster performance. Apple does have one other option if they want that optimized CPU/GPU pipe, and that is AMD.
https://en.wikipedia.org/wiki/Heterogeneous_System_Architecture
(Apple isn't noted as an HSA member because they're not. Their implementation is not part of the HSA standard even though it's the same idea.)
Intel, for whatever reason, has not yet shipped a similar architecture. But this approach opens the door to a much faster single-chip design from either AMD or Apple that would blow the doors off the current MacBook Pro in Metal pro apps.
An HSA-style architecture is also a key part of things like starting to use Metal for audio work, so it would open the door to new types of acceleration that aren't possible right now on Intel.
Edit: A bit more to explain why this is important for performance and even affects mixed CPU/GPU loads...
Every time your CPU needs to send data to your GPU, or your GPU needs to send data back, there is a sync. Syncs take a while, and while the sync is going on, your CPU and GPU sit idle. If you're in Final Cut Pro or After Effects/Premiere, rendering a frame has to sync to the GPU, and then sync back to the CPU. And if you have filters running on both the CPU and GPU, you might have a lot of syncing back and forth as it switches between CPU and GPU, which means a lot of wasted time.
The A series is different in that it doesn't need to sync: the GPU and CPU can both work off the same data. So while the Intel CPU sits there, spending a lot of its time idle, doing nothing but syncing the integrated GPU and the CPU, the A series spends all its time working. The end result is that the A series is a video-processing monster while the Intel chip is... not.
This does apply to the Mac Pro, because the problem is actually worse with discrete graphics. A discrete GPU is much faster than an integrated one, but the data has to travel much farther (across the PCIe bus), so syncs cost even more and you spend even more time waiting. In some cases, for certain tasks, a Mac Pro can bench worse than an A series chip because the Mac Pro is drowning in syncs while the A series is not.
Where performance people get excited is the idea of an A-series-style CPU paired with a GPU as powerful as a discrete card. Then you'd get the best of both worlds: a fast GPU with the sync-less design of what Apple did with the A series.
As mentioned, AMD is doing this sort of work with Ryzen. Nvidia is also working on it with Tegra. Apple has the A series. All the shipping game consoles use this type of design. And Intel has... nothing. The rumor was that Intel wanted to buy ATI long ago as the basis of an A-series-like design, but AMD beat them to the punch. And for whatever reason they've never upgraded their integrated graphics to use this design.
The syncing problem is why FurMark exists. FurMark is designed to never send anything back and forth between the CPU and GPU, so neither one has long periods of sitting idle. That's why it stresses your hardware in ways that are considered unrealistic compared to normal usage.
It's also why Metal, Vulkan, and DirectX 12 exist. OpenGL 4 doesn't know how to take advantage of a sync-less architecture, so new APIs were designed for this new generation of hardware. In theory, Metal would also adapt nicely to what AMD is doing. But right now Metal runs sub-optimally on all Intel Macs (as do Vulkan and DirectX 12).