Vector processing, the core thing a GPU does, can easily scale exponentially depending on workload FWIW
That's complete nonsense. The total compute throughput of a GPU is roughly:
(number of cores) * (vector width) * (clock frequency)
If you add more cores and rerun this calculation, you get linear scaling, not exponential. The only workload-to-workload variance is that some tasks scale sub-linearly, because memory bandwidth, synchronization, or a lack of available parallelism can keep you from actually feeding all those cores.
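To make that concrete, here's a toy back-of-the-envelope sketch in Python. The numbers are made up for illustration, not taken from any real GPU; it just plugs different core counts into the formula above and shows that doubling the cores doubles the result.

```python
# Rough peak-throughput model: cores * vector width * clock.
# All numbers below are invented for the example, not a real GPU spec.

def peak_flops(num_cores: int, vector_width: int, clock_hz: float) -> float:
    """Peak throughput = number of cores * SIMD lanes per core * clock frequency."""
    return num_cores * vector_width * clock_hz

# Baseline: hypothetical GPU with 1024 cores, 32-wide vectors, 1.5 GHz clock.
base = peak_flops(num_cores=1024, vector_width=32, clock_hz=1.5e9)

for factor in (1, 2, 4, 8):
    scaled = peak_flops(num_cores=1024 * factor, vector_width=32, clock_hz=1.5e9)
    # Multiplying the core count by N multiplies peak throughput by N:
    # linear scaling, not exponential.
    print(f"{factor}x cores -> {scaled / base:.1f}x peak throughput")
```

Running it prints 1.0x, 2.0x, 4.0x, 8.0x: exactly proportional to the core count, and real workloads can only fall short of that line, never beat it.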
Vector processing is not superlinear magic. If you think it is, you probably don't understand what it actually does.