What makes you think that? Apple compares its GPUs to Nvidia's gaming GPUs, not to its workstation GPUs.
I think these questions are ultimately meaningless because they try to apply a definition outside of its relevant context. There is no formal set of criteria for what constitutes a "workstation GPU". In the end, it's just a fairly artificial concept that only exists in a specific market: something like "workstation GPUs are a brand of GPU products offered by Nvidia and AMD that are marketed towards professionals and priced considerably higher". Apple's marketing doesn't really work like that: they don't differentiate their GPUs by functionality or target market, so I don't think describing them in these terms makes sense. In comparative terms, Apple GPUs have properties of both classical gaming and workstation GPUs, but what do we get from that insight? Very little, I think.
I believe it makes much more sense to discuss these products in terms of their suitability for specific domains of interest rather than trying to pigeonhole them into a set of narrow preexisting categories. M-series chips don't cater to one specific niche. They are all-round products capable of fulfilling many different roles, even if some might consider those roles contradictory. It is entirely possible to build a GPU that is equally good for gaming, rendering and video editing. Apple achieves this by focusing on a compute-centric architecture with large caches, unified memory and bandwidth- and compute-efficient rasterisation.
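To make the unified-memory point concrete, here's a minimal Metal sketch in Swift of what it buys you in practice: a single allocation that the CPU writes and the GPU reads directly, with no upload step. The buffer size is an arbitrary illustration, not from any Apple sample:

```swift
import Metal

// Minimal sketch: on Apple silicon, a .storageModeShared buffer lives in
// the same unified memory the CPU uses, so no blit/upload is needed.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let count = 1_000_000   // arbitrary element count for illustration
guard let buffer = device.makeBuffer(
    length: count * MemoryLayout<Float>.stride,
    options: .storageModeShared
) else {
    fatalError("Buffer allocation failed")
}

// The CPU writes straight into the GPU-visible allocation...
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count {
    ptr[i] = Float(i)
}
// ...and a compute or render pass can bind this buffer as-is. On a
// discrete GPU the same data would first have to cross PCIe into VRAM.
```

On a discrete card you'd instead allocate a private buffer and schedule an explicit copy, which is exactly the traffic unified memory avoids.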
What advantage does Apple's GPU have over Nvidia's workstation GPU?
For typical "workstation applications"? At the moment, much larger memory pools as well as larger caches. Apple is likely to perform better on complex workloads that use huge datasets, for example rendering very large, complex scenes.
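To give a sense of scale, here's a back-of-envelope sketch; the triangle count and per-triangle byte size are made-up round numbers, and the VRAM figures are just typical card capacities:

```swift
// Back-of-envelope: geometry footprint of a large scene vs. typical VRAM.
// All numbers here are illustrative assumptions, not measurements.
let triangles = 500_000_000      // hypothetical film-scale scene
let bytesPerTriangle = 48        // three positions plus a normal, roughly
let geometryGB = Double(triangles) * Double(bytesPerTriangle) / 1e9

print("Geometry alone: \(geometryGB) GB")   // 24.0 GB, before textures or a BVH

// A 24 GB gaming card is already out of memory; a 48 GB workstation card
// gets tight once textures and acceleration structures are added; a Mac
// with up to 192 GB of unified memory keeps the whole scene resident.
```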
I doubt Apple's GPUs are more efficient than Nvidia's workstation GPUs.
In terms of perf/watt, Apple is much more efficient (a factor of 4x-5x), unless you're talking about ML applications, where Nvidia benefits from its large dedicated accelerators. There the efficiency is comparable; it's just that Nvidia will be much faster. Nvidia's gaming GPUs are likely to be faster still, by the way, since they are usually clocked higher, but that costs them some efficiency.
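Just to spell out what a factor like that means arithmetically, here's a toy calculation; the 300 W and 60 W power figures are hypothetical stand-ins, not measured numbers:

```swift
// Toy perf/watt comparison with assumed (not measured) power figures.
let discreteWatts = 300.0     // assumed board power of a high-end discrete GPU
let mSeriesWatts = 60.0       // assumed package power of an M-series GPU under load
let throughput = 1.0          // suppose both finish the same render each second

let discretePerfPerWatt = throughput / discreteWatts
let mSeriesPerfPerWatt = throughput / mSeriesWatts
print(mSeriesPerfPerWatt / discretePerfPerWatt)   // 5.0: same work at one fifth the power
```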