Intel Macs (at least 2017 and newer) will get more than two OS releases. When the PPC-to-Intel transition occurred, OS releases came roughly every two years. Now Apple has moved to an annual release cycle, and given that ~2012 is the cutoff for Big Sur, it stands to reason that a current Mac should easily see four OS updates before support is killed off.
That means all those who spent $50k or more on an Intel 2019 Mac Pro are stuck with a paperweight.
To be clear: nothing has been officially confirmed. But if Apple means business in the pro space, they have to work on their SIMD.
People keep saying this, but the majority of Mac Pro purchases are by enterprises, and anyone spending $20k must feel they need top performance, so in 4-5 years they will need to buy a new machine anyway to keep having the best possible performance. If you are working in machine learning etc., you will always need to be on the cutting edge.
A high-dollar asset has to be depreciated somehow, and typically computers aren't depreciated at a rate of $10k per year such that they can be tossed after five years.
That 6-year-old tower that was a massive workhorse and cost the company something like $30k in 2013 was put in a caddy under the desk of the accounts payable clerk.
We do. The engineering software we run on workstations starts at $30k per year per seat, and goes up to $90k. The PhD running them has a loaded cost of $200k per year. The price of the computer is noise.
False economy. We worked this out. The workstations burn 200 watts just sitting idle. At 12 cents a kWh, that costs $210 a year, or $630 over a 3-year lifecycle. We can get base-model admin computers new for $600, so nothing has been saved. Not to mention the noise, the IT guys spending time fiddling with RAID arrays and buying SAS drives for a simple admin box, the old systems missing a number of security features (TPM 2.0, HVCI), and the complaint that the thing hogs cubicle space while everybody else has a USFF or an all-in-one.
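For what it's worth, the idle-power arithmetic checks out; here's a quick back-of-the-envelope script using only the figures quoted above:

```python
# Rough cost of leaving an old workstation idling as an admin box.
IDLE_WATTS = 200          # measured idle draw from the post above
RATE_PER_KWH = 0.12       # $/kWh
HOURS_PER_YEAR = 24 * 365

kwh_per_year = IDLE_WATTS / 1000 * HOURS_PER_YEAR   # ~1752 kWh
cost_per_year = kwh_per_year * RATE_PER_KWH         # ~$210
print(f"${cost_per_year:.0f}/year, ${3 * cost_per_year:.0f} over 3 years")
```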
No they don't.
First, you have to cut through the marketing crap and understand that SIMD is a compromise: fetching, decoding and scheduling an instruction has a cost, so you can get a gain by doing that once for multiple pieces of data. However, you get inefficiencies when you can't fill the whole vector, whether because of your problem size, because of conditionals, or because you have to wait until the data shows up.
If there were no per-instruction costs, then having 8 independent ALUs/EUs would always match or outperform an 8-way vector.
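A toy cost model of that tradeoff, as a sketch only (the ISSUE_COST and LANE_COST numbers are made up, not real hardware figures): amortizing the per-instruction cost over the lanes is where the win comes from, and a partially filled vector still pays for every lane.

```python
import math

# Hypothetical costs: each issued instruction pays a fixed front-end cost,
# plus one unit per lane of actual work.
ISSUE_COST = 4
LANE_COST = 1

def scalar_cost(n):
    # n independent scalar ops, each paying the full issue cost
    return n * (ISSUE_COST + LANE_COST)

def simd_cost(n, width):
    # ceil(n / width) vector ops; a partially filled vector still pays for all lanes
    vectors = math.ceil(n / width)
    return vectors * (ISSUE_COST + width * LANE_COST)

for n in (64, 65, 8, 3):
    print(f"n={n}: scalar={scalar_cost(n)}, 8-wide SIMD={simd_cost(n, 8)}")
```

With zero issue cost the model collapses to pure lane work, which is exactly the "8 independent ALUs always match or beat an 8-way vector" point above.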
I would have been given the joy of depreciating it and writing it off. Businesses will hold onto things far longer than their initial use cases warrant, unless you're in a company that just doesn't.
Personally, I am not a fan of Intel's approach. Making separate ISA sub-extensions for various vector widths seems like a waste to me, especially given that you start running into all kinds of weird behavior when using these extensions (like the way transitions between AVX and SSE stall everything, or the reduced clocks when using wider vectors). You end up doing a lot of crap like runtime feature detection and function dispatch; in the end this is unnecessary engineering cost and the performance is hampered.
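For illustration, here is that dispatch pattern sketched in Python rather than the C/C++ you'd actually use; the dot_* functions are plain stand-ins, not real SIMD kernels, and the flag check is Linux-only (it reads /proc/cpuinfo):

```python
# Sketch of runtime feature detection + function dispatch.

def cpu_flags():
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def dot_avx512(a, b):   # stand-in for an AVX-512 kernel
    return sum(x * y for x, y in zip(a, b))

def dot_avx2(a, b):     # stand-in for an AVX2 kernel
    return sum(x * y for x, y in zip(a, b))

def dot_scalar(a, b):   # portable fallback
    return sum(x * y for x, y in zip(a, b))

flags = cpu_flags()
if "avx512f" in flags:
    dot = dot_avx512
elif "avx2" in flags:
    dot = dot_avx2
else:
    dot = dot_scalar

print(dot([1, 2, 3], [4, 5, 6]))
```

Every library that wants to ship one binary across all these sub-extensions ends up carrying some version of this boilerplate, which is the engineering cost being complained about.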
All of the RISC players seem to be happy with 128-bit vectors, because their costs are different. Look at POWER9: instead of Intel's approach of executing one 512-bit vector, it handles four 128-bit vectors at a time.
From observing Intel's own MKL, it doesn't use AVX-512 very often, even on systems with the second FMA unit. I think it's more marketing than useful, which is really 80% of new Intel features.
Ultimately, if your problem really can take advantage of vectors, then go GPU. I suspect that AVX-512 is the worst of both worlds.
ARM (or rather NEON) is actually fairly elegant by comparison. If it becomes a more mainstream option beyond just Apple, I'll probably go that way myself.
That would be the gamers with their "muh single core speedz"...
Did you have a look at SVE? Vector-width-agnostic code is really neat: no need to unroll anything, no need to treat the last few elements in a special way, the ISA just takes care of it for you.
That looks like a very pretty solution, especially from the compiler end. It mentions multiples though, so it might not completely save you from peeling the last few elements. I just googled for it, which yielded a paper entitled "The ARM Scalable Vector Extension". It sounds very nice, if that's what you're referring to.
SVE uses masking, which allows you to use only a fraction of the available SIMD registers when needed. Basically, you loop over multiples of the reported vector width (which is determined at runtime), and the last iteration uses a partially masked operation. See an example starting from page 17 of these slides.
The beauty of the system: you don't need to write different code for machines with different vector widths. You could write an algorithm and run it on an iPhone (with its 128-bit vectors) or on some sort of ARM super-CPU with 2048-bit vectors. This radically simplifies debugging. I really hope that SVE becomes widespread; it is amazing for scientific computing.
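To make the masked-loop idea concrete, here is a sketch that emulates the pattern in NumPy rather than real SVE intrinsics; `vl` is an arbitrary stand-in for the vector length the hardware would report at runtime:

```python
import numpy as np

# Emulates an SVE-style predicated loop: process vl elements per iteration,
# and on the final iteration build a partial mask instead of writing
# separate tail code. The same loop works for any vl.
def saxpy_masked(a, x, y, vl=8):
    out = y.copy()
    n = len(x)
    for i in range(0, n, vl):
        lanes = np.arange(i, i + vl)
        mask = lanes < n          # predicate: lanes that fall inside the array
        idx = lanes[mask]
        out[idx] = a * x[idx] + y[idx]
    return out

x = np.arange(10, dtype=np.float32)
y = np.ones(10, dtype=np.float32)
print(saxpy_masked(2.0, x, y, vl=4))   # same result regardless of vl
```

The point is that nothing in the loop body mentions the actual hardware width, which is what lets identical binaries run on 128-bit and 2048-bit implementations.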
Hi all,
apologies for the n00b question: invest in the Mac Pro now, or wait for an ARM version to come out?
I'm an empirical researcher working with large datasets, statistics and machine learning (Python, Matlab, C++).
I was about to order a maxed-out Mac Pro for my work (research funding), hoping to be comfortable for the next 5-7 years.
But ARM might topple this. What are the implications for software (like the statistical programs), and could I upgrade to a new motherboard later on using the current components (RAM, SSD)?
Thanks for your help.
Are you willing to wait until WWDC 2021?