Even with perfect buses, infinitely fast caches, zero multi-core hardware contention, and heavily multi-threaded code, the code itself constrains multi-core speedup. Execution threads are not fully independent; they must periodically synchronize their work with one another. Beyond that, some fraction of the execution path simply cannot be parallelized, and even a small serialized fraction caps the maximum multi-core speedup. This is Amdahl's Law:
https://en.wikipedia.org/wiki/Amdahl's_law
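As a rough sketch of how sharply the serial fraction bites, here is Amdahl's Law in Python (the serial fractions below are illustrative values, not measurements of any real workload):

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup on `cores` cores when `serial_fraction`
    of the work cannot be parallelized (Amdahl's Law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a modest serial fraction limits an 18-core machine:
for s in (0.0, 0.05, 0.25):
    print(f"serial={s:.0%}: 10-core {amdahl_speedup(s, 10):.1f}x, "
          f"18-core {amdahl_speedup(s, 18):.1f}x")
```

With 5% serial code the 18-core tops out near 9.7x (not 18x), and with 25% serial code the 18-core manages only about 3.4x versus the 10-core's 3.1x, so the extra 8 cores buy almost nothing.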
Using the base Xeon frequencies and assuming 100% parallel code, the 18-core might be about 27% faster. Using the turbo boost frequencies it would be more, but who knows the exact turbo boost behavior for a given workload? That varies with the exact CPU and many other factors.
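That base-frequency estimate is just aggregate cores × GHz. A sketch of the arithmetic, with made-up clock numbers (substitute the real base, or sustained all-core turbo, frequencies of the specific Xeon-W parts to reproduce the figure above):

```python
def ideal_throughput_ratio(cores_a, ghz_a, cores_b, ghz_b):
    """Ratio of aggregate throughput (cores x clock), assuming
    100% parallel code and no other bottlenecks."""
    return (cores_a * ghz_a) / (cores_b * ghz_b)

# Clock values below are placeholders for illustration only,
# NOT the actual iMac Pro Xeon-W "B" variant frequencies.
ratio = ideal_throughput_ratio(18, 2.0, 10, 3.0)
print(f"18-core vs 10-core, best case: {(ratio - 1):.0%} faster")
```

The key point is that the 18-core's lower per-core clock eats into much of its core-count advantage even before Amdahl's Law is applied.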
There is supposedly a max "all cores active" boost, but I don't know the number for the specific Xeon-W "B" variants used in the iMac Pro:
https://www.pugetsystems.com/blog/2...-What-You-See-Is-Not-Always-What-You-Get-675/
Before the iMac Pro, the fastest Apple computer any FCPX user ran was (depending on workload) a 2017 top-spec iMac 27, a 12-core D700 Mac Pro, or some modified older Mac Pro or Hackintosh.
Momentarily setting aside the 18-core iMac Pro, the 10-core Vega 64 version is likely about 2x faster than any prior Mac anybody has run, especially on FCPX. So the question is how much the 18-core machine would improve on that 2x for an FCPX, Logic, and Lightroom workload, and whether a gain of (say) 25% would be worth it.
For certain highly parallel code the extra cores do make a difference, as shown in the latest benchmarks on Apple's iMac Pro page. However, those benchmarks are carefully selected -- see the footnotes. For some of them Apple doesn't even show the 10-core vs. 18-core comparison:
https://www.apple.com/imac-pro/