Too bad the old RISC archs went down the toilet.
Someone might just take Power and do something with it.
Maybe the Chinese? They did somewhat revive Alpha. And MIPS, although in a different way, still lives.
Why not a comeback of something that was great back in the day?
Something new is needed; x86 is stalling.
Itanium really had my hopes up but didn't make it, unfortunately.
 
The clock speed does not impress me. On programs that cannot effectively utilize more than 10 cores (such as Photoshop), these server-class CPUs will be very slow compared to the iMac. How about the programs you guys are using? Do most people prefer fewer cores at a higher clock speed, or the opposite?
Note that the base speed is determined by thermal constraints. If only some of the cores are busy, they will turbo up to much higher speeds.

Of course, it's silly to buy the high core count CPUs if you know your apps can't exploit them. Save money and get the 8 to 12 core models.
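To put rough numbers on that trade-off, here's a quick Amdahl's-law sketch in Python. The clock speeds and the 70% parallel fraction are made-up illustrative values, not actual Xeon SKUs or Photoshop measurements:

```python
# Amdahl's law: upper bound on speedup when only part of a program scales.
def speedup(parallel_fraction, cores):
    """Speedup limit for a program whose `parallel_fraction` uses all cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Effective throughput ~ base clock x Amdahl speedup (illustrative clocks,
# not real SKU numbers), for a Photoshop-like workload that is ~70% parallel.
p = 0.7
few_fast  = 3.3 * speedup(p, 8)    # 8 cores at 3.3 GHz
many_slow = 2.2 * speedup(p, 22)   # 22 cores at 2.2 GHz

print(round(few_fast, 2), round(many_slow, 2))  # 8.52 6.63
```

Under these assumptions the 8-core part wins (about 8.5 vs 6.6 in "GHz-equivalent" units), which is why fewer, faster cores tend to suit poorly threaded apps.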
 
I think the reason for the low clocks in those CPUs is that they are built on a high-density node.

There are 3 types of nodes: high performance, low power (mobile), and density. Density nodes are usually the ones used in server CPUs, for obvious reasons. With time, all processes mature, so the gaps between the types can close; however, the current 14 nm process is still not mature enough to be used universally across all 3 markets. It still has to have 3 different versions, at least for Intel. The 14 and 16 nm processes from TSMC and Samsung can be used universally for mobile or high performance. They will not achieve high density, however, for obvious reasons.
 
iOS is a full OS. Remember when SJ introduced the original iPhone and said "iPhone runs OSX"? And it wasn't even called iOS until 2010.

MacOS may or may not be slightly heavier on resources as there wasn't as much need to trim it down for low-power chips, but I'd be surprised if it was by an order of magnitude.

Maybe some of the devs who build stuff for both platforms here can chime in on this?
I'm going to suggest that if you had El Cap running on the iPhone 7 and the A10, it wouldn't be as smooth an experience as is often suggested.
Also, if you had an A10 powering a MacBook, it wouldn't be that nice either.
 
The clock speed does not impress me. On programs that cannot effectively utilize more than 10 cores (such as Photoshop), these server-class CPUs will be very slow compared to the iMac. How about the programs you guys are using? Do most people prefer fewer cores at a higher clock speed, or the opposite?

Cores - I do 3d art and it is all about the cores (and RAM). Clock speed is way down the list.
Too bad the old RISC archs went down the toilet.
Someone might just take Power and do something with it.
Maybe the Chinese? They did somewhat revive Alpha. And MIPS, although in a different way, still lives.
Why not a comeback of something that was great back in the day?
Something new is needed; x86 is stalling.
Itanium really had my hopes up but didn't make it, unfortunately.

They didn't go down the toilet - they are running big iron with AIX/Linux. Speeds hit 5 GHz about a decade ago. The POWER9 series is due out next year - https://en.wikipedia.org/wiki/POWER9

Apple dropped the PowerPC architecture because IBM wouldn't build & sell the silicon at the price that P.T. Barnum wanted. I suspect that the fact that Apple couldn't control Power.org, the organization that standardized the Power Architecture, was also a factor.
 
Too bad the old RISC archs went down the toilet.
Someone might just take Power and do something with it.
Maybe the Chinese? They did somewhat revive Alpha. And MIPS, although in a different way, still lives.
Why not a comeback of something that was great back in the day?
Something new is needed; x86 is stalling.
Itanium really had my hopes up but didn't make it, unfortunately.

RISC went down the toilet? No, it morphed into the ARM CPUs... ARM stands for Acorn RISC Machine, you know...
 
iOS is a full OS. Remember when SJ introduced the original iPhone and said "iPhone runs OSX"? And it wasn't even called iOS until 2010.

MacOS may or may not be slightly heavier on resources as there wasn't as much need to trim it down for low-power chips, but I'd be surprised if it was by an order of magnitude.

Maybe some of the devs who build stuff for both platforms here can chime in on this?
Not at all, not like macOS or Android: it has a simplified kernel with, among other things, logical file blocking disabled by design. Those things require a lot of CPU time, and they are the reason the A10 CPU wins over the Qualcomm Snapdragon 820: it spends fewer clock cycles on background OS checks. But its architecture is the big.LITTLE design by ARM, which debuted on Samsung's Note 2, advertised as quad core although only 2 cores are active at a given time.

Also note that ARM CPUs aren't capable of SMT yet (hyperthreading).

So I'll bet the A10 is more likely a 1:1 design with the Qualcomm SD820 (2 Kryo cores + 2 A53); very likely it uses 2 A72 cores or equivalent plus two A53s (low power/IPC).

Qualcomm's Kryo cores are in-house customized A72 cores, and I doubt Apple still has that R&D capability; more likely they are using standard ARM IP.

The A9 was a dual-A72 solution.
 
Too bad the old RISC archs went down the toilet.
Someone might just take Power and do something with it.
Maybe the Chinese? They did somewhat revive Alpha. And MIPS, although in a different way, still lives.
Why not a comeback of something that was great back in the day?
Something new is needed; x86 is stalling.
Itanium really had my hopes up but didn't make it, unfortunately.
After IBM's POWER resurrection, an Itanium resurrection doesn't seem impossible.
 
And if Intel moves on from x86, then AMD will be toast, since they'll be left without any rights to the new architecture and code. They don't have the $$$ and market share to compete directly with Intel.
 
My source tells me they will update some Macs soon... But seriously, I truly believe a new Mac Pro is on the way. I have faith for some reason, despite what most people think. For me, buying a new Mac Pro is one of the most exciting investments I make every 3/4 years. I don't wanna switch to PC... Please Apple, listen to your customers! Give us power!! Don't mind if you make it small, just give us power!!!! Oh, and a pair of 1080s would be nice, 'cos I do like to game as well as animate! :)
 
No, Itanium is dead.
Really, really dead.
Deader than the Atlanta Barves.
https://en.wikipedia.org/wiki/Itanium
Intel and HP threw away Billions on that dead end.
Itanic would have sunk lesser companies than Intel and HP at their zenith.
They are shadows of their former selves.
The truth is that Moore's Law is ending and it is not possible to squeeze in more transistors (at about 7-4 nm there is no way to go smaller, economically at least).
At that point ARM, which is by far more efficient in power and transistor count (an ARM CPU requires far fewer transistors than its IPC-equivalent x86), is where the x86-64 architectural legacy taxes its competitiveness heavily.
So Intel has no solution other than to invent an all-new architecture capable of efficiently handling deeper execution pipes and 4x SMT (current SMT is 2x; Itanium theoretically should have been capable of reaching 4x SMT. It never happened, but the way its instructions were encoded allowed it).

If not Itanium, Intel should now be developing an all-new architecture capable of deeper execution queues and 4- or even 8-way SMT; this is not feasible on x86.

The challenge is not only to design the architecture but, at the same time, the compilers and JIT compilers. Itanium was killed by its compilers; even GCC never compiled IA-64 efficiently.

Anyway, nothing is really dead until you kill its fathers, and Intel is alive and strong.
 
The truth is that Moore's Law is ending and it is not possible to squeeze in more transistors (at about 7-4 nm there is no way to go smaller, economically at least).
At that point ARM, which is by far more efficient in power and transistor count (an ARM CPU requires far fewer transistors than its IPC-equivalent x86), is where the x86-64 architectural legacy taxes its competitiveness heavily.
So Intel has no solution other than to invent an all-new architecture capable of efficiently handling deeper execution pipes and 4x SMT (current SMT is 2x; Itanium theoretically should have been capable of reaching 4x SMT. It never happened, but the way its instructions were encoded allowed it).

If not Itanium, Intel should now be developing an all-new architecture capable of deeper execution queues and 4- or even 8-way SMT; this is not feasible on x86.

The challenge is not only to design the architecture but, at the same time, the compilers and JIT compilers. Itanium was killed by its compilers; even GCC never compiled IA-64 efficiently.

Anyway, nothing is really dead until you kill its fathers, and Intel is alive and strong.
This is a very naïve post.

Xeons (and Core) do not run the x86 (or even x64) instruction set. Why don't people understand that? x64 is the "bytecode" for the RISC engine architecture that the chip executes.

And as far as SMT depth goes, the problem is not the instruction set. The problem is available execution units. If you have a load/store unit, an integer/address unit, and an FP unit - you'll get some benefit from SMT. If you have two of each, you'll get more benefit.

8-way SMT is almost ludicrous - instead of putting 8 execution units of each type in each core, use the transistors for four cores with two units per.
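A toy model of that point: the SMT gain depends on how saturated the execution units already are. This is a deliberate oversimplification of my own, not a description of any real core:

```python
# Two SMT threads competing for `units` identical execution units of one
# type, each thread able to issue `demand` operations of that type per cycle.
def smt_speedup(demand, units):
    one_thread  = min(demand, units)       # ops/cycle with a single thread
    two_threads = min(2 * demand, units)   # idle slots absorb the 2nd thread
    return two_threads / one_thread

print(smt_speedup(demand=1.5, units=2))  # units nearly saturated: ~1.33x
print(smt_speedup(demand=1.5, units=4))  # spare units: the full 2x
```

In this model, once the units are saturated, SMT gains nothing; only adding more units (or more cores) helps, which is the trade-off described above.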

NOH8 on x64.

BTW - there's one important use case where Intel hyperthreading gives 2X performance, know what it is?
 
This is a very naïve post.

Xeons (and Core) do not run the x86 (or even x64) instruction set. Why don't people understand that? x64 is the "bytecode" for the RISC engine architecture that the chip executes.

And as far as SMT depth goes, the problem is not the instruction set. The problem is available execution units. If you have a load/store unit, an integer/address unit, and an FP unit - you'll get some benefit from SMT. If you have two of each, you'll get more benefit.

8-way SMT is almost ludicrous - instead of putting 8 execution units of each type in each core, use the transistors for four cores with two units per.

NOH8 on x64.

BTW - there's one important use case where Intel hyperthreading gives 2X performance, know what it is?
Aiden, I know transcoding is right now the best mainstream use case for Hyper-Threading, but SMT goes far beyond Hyper-Threading: its purpose is to run a 2nd thread while the 1st thread is still waiting on I/O or other resources (FP, integer, etc.), and even to share resources with another thread that requires more execution pipelines to resolve an instruction.

IA-64 theoretically allowed executing "efficiently" more than 2-way SMT, but compiler issues never allowed more than 1 extra thread.

Trading SMT for extra cores is logical unless you have reached the maximum theoretical efficiency and need to squeeze more juice from the silicon. Even an advanced post-x86 architecture foresees some dynamic SMT <=> out-of-order trade-off. However, to go beyond 2x (which basically splits the FP and integer execution queues) you need to switch back to WISC instead of RISC (as you noted, modern x86-64 CPUs translate x86 instructions into their RISC-equivalent code), but following the Itanium concept at least, instead of translating each instruction into n RISC ops (a key factor in ARM's efficiency is that it doesn't need to translate CISC to RISC). As I said, each instruction requires its own very specialized execution pipeline, and it only works together with an efficient compiler.

WISC offers the only theoretical possibility to go further in IPC. It would very likely axe all the old x86 instructions and work only with a more general-purpose, AVX-like instruction set (at least Intel seems to understand the concept better), but as with AVX it's not easy to deploy in the field; a lot of work has to be done on compilers.

As an example, one of the concepts for a WISC multi-core CPU is widely shared execution pipelines, where a task in one thread could use one or more FPU or integer units left idle by other cores (as when calculating with very large integers, something cryptocurrency requires, where you'd like to use more integer units). Of course this needs an even more specialized instruction set and very optimized compilers, which is where Itanium failed miserably; there is the challenge.

As I see it, the long-term future of the last Von Neumann computer generation will lie in advanced WISC or WISC-like CPUs.
 
I do know what ARM means, and by the way, that was the original name; it's now Advanced RISC Machines. Look it up. :)
And besides ARM, which has no place on the desktop yet, what else do you see? Really?! And I'm not talking about kick-ass servers, of course. If you took the time to read, you'd see I was talking about desktop RISC machines / workstations.
And yes, Core and Xeon are "sort of" RISC CPUs, all right, but as I mentioned before, it's in fact a patchwork, and that's my gripe with the x86 arch. Why not do it properly from the ground up instead of relying on old tech and needing a translation layer anyway? Itanium was on the right track; too bad it failed. I still have hope, like Mago.
But enough of this already. For those who understand why I think we should be well past x86, that should be good enough. For the rest, well, if you're comfortable with it, so be it. Won't hold it against you. :)
 
If you took the time to read, you'd see I was talking about desktop RISC machines / workstations.

ARM will catch (and pass) the x86 architecture on IPC within the next 3-4 years. ARM doesn't use SMT on any of its products, mostly due to power concerns, but as they move deeper into HPC they'll find a solution to enable SMT without spending too much power.

Right now the ARM Cortex-A72 core has about half the IPC of a single Xeon E5 v4 core (more or less) at less than 20% of the watts spent. Some companies are even developing server-grade ARM CPUs, mostly based on 48 Cortex-A72 cores; those chips outclass Xeons on efficiency, and it's just a matter of time before they jump to the desktop.

PD: there are some proposals for Xeon Phi-like CPUs with 48/64+ Cortex-A72 cores; the cn Thiane 2 super computer uses ARM architecture only.
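Taking those figures at face value (they're the poster's estimates, not benchmarks), the efficiency claim is simple arithmetic:

```python
# Perf-per-watt implied by the claim above: a Cortex-A72 core at ~0.5x the
# IPC of a Xeon E5 v4 core for ~0.2x the power (assumed values, not data).
relative_ipc, relative_power = 0.5, 0.2
perf_per_watt = relative_ipc / relative_power   # ~2.5x a Xeon core

# A hypothetical 48-core A72 server chip, in "Xeon-core equivalents":
throughput = 48 * relative_ipc    # ~24 Xeon cores' worth of IPC...
power      = 48 * relative_power  # ...for ~9.6 Xeon cores' worth of power
print(perf_per_watt, throughput, power)
```

Note this only compares per-cycle throughput at equal clocks; absolute performance also depends on clock speed, which the estimate above ignores.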
 
Aiden, I know transcoding is right now the best mainstream use case for Hyper-Threading.
When can transcoding go twice as fast with HyperThreading enabled?

There is another mainstream use that gets 200% (full use of both logical cores) on Intel today. An important use that goes twice as fast with HyperThreading enabled.

..., but SMT goes far beyond Hyper-Threading: its purpose is to run a 2nd thread while the 1st thread is still waiting on I/O or other resources (FP, integer, etc.), and even to share resources with another thread that requires more execution pipelines to resolve an instruction.
This sounds like the perfect definition of "Hyper-Threading" - Intel's marketing buzzword for SMT.

(See https://en.wikipedia.org/wiki/Simultaneous_multithreading for quotes like "The Intel Pentium 4 was the first modern desktop processor to implement simultaneous multithreading"....)
the cn Thiane 2 super computer uses ARM architecture only.
There are many reasons that an architecture for a TOP500 supercomputer would be inappropriate for a workstation. Two very different usage scenarios.

What do you mean by PD ? "Police Department" and "Periodontal Disease" seem to be the most common.
 
ARM will catch (and pass) the x86 architecture on IPC within the next 3-4 years.

Like a lot of other stuff posted above, this is nonsense. There is an upper cap on IPC based on the code that folks write. There is no way to pull out instruction-level parallelism where it doesn't exist; almost all mainstream code has a maximum cap, which most modern x86 implementations are pretty close to.

ARM does better on power because historically they don't try to get close to that max edge. As they try to reach the same maximums as x86, they will run into the same problems that have plateaued x86. They don't have magical pixie dust in the instruction set. It is somewhat cleaner and has a bit less cruft (and doesn't have quite as large a legacy binary baggage anchor dragging it down... though that is growing quite fast at this point).
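That ILP ceiling can be made concrete with a work/span count. Toy numbers, not measurements of any real ISA: a serial dependency chain has ILP of 1 no matter how wide the machine is, while a reduction tree over the same data exposes plenty:

```python
import math

# Available ILP is capped by data dependences, not by the instruction set:
# max ILP = total operations / length of the critical (dependent) path.
def max_ilp(total_ops, critical_path):
    return total_ops / critical_path

n = 1024
# Sequential reduction: acc += x[i]; every add depends on the previous one.
chain = max_ilp(total_ops=n - 1, critical_path=n - 1)             # ILP = 1.0

# Tree reduction of the same n values: log2(n) levels of independent adds.
tree = max_ilp(total_ops=n - 1, critical_path=int(math.log2(n)))  # ILP = 102.3

print(chain, tree)  # 1.0 102.3
```

No amount of decoder width or extra execution units speeds up the chain; only restructuring the code (as the compiler or programmer must do) raises the cap.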


the cn Thiane 2 super computer uses ARM architecture only.

No.

https://en.wikipedia.org/wiki/Tianhe-2 ( Xeon E5 and Xeon Phi based )

Perhaps thinking of the current (2016) "top dog" Chinese Supercomputer

https://en.wikipedia.org/wiki/Sunway_TaihuLight

But again no..... not ARM based. [ Few of the single-thread drag racers around here are going to be happy with
"... The entire system is built from the 1.45 GHz SW26010 processors. ... " http://www.nextplatform.com/2016/06/20/look-inside-chinas-chart-topping-new-supercomputer/

There is nothing individually fast here .... just huge volume at "more affordable" prices. ]


Might be thinking of the upcoming Fujitsu ( Japan ) supercomputer ....

http://www.nextplatform.com/2016/06/23/inside-japans-future-exaflops-arm-supercomputer/

but ARM with a huge HPC vector sidecar is going to operate in a relatively narrow (relative to the overall PC market) subset.


Right now the ARM Cortex-A72 core has about half the IPC of a single Xeon E5 v4 core (more or less) at less than 20% of the watts spent. Some companies are even developing server-grade ARM CPUs, mostly based on 48 Cortex-A72 cores; those chips outclass Xeons on efficiency, and it's just a matter of time before they jump to the desktop.

There are ARMs in some Chromebooks/Chromeboxes sitting on desktops now. ARM has a long way to go before it is in the same ballpark as the mid-to-upper-end desktop offerings of Intel (and probably AMD with Zen).


To try to keep this from drifting out of the Mac Pro discussion zone entirely.... there aren't any short- to intermediate-term CPUs on track to displace the high mixed loads that the upper-half iMacs and the Mac Pro operate on. If Apple can't flip the whole Mac lineup, it really makes no sense to switch off x86 (especially when there are two viable vendors in hot competition with each other in that space).
 
dec, what's your feeling on x86? Is it here to stay still for long?

It's sad to see KL based on Atom (sorry, Silvermont) cores.

If the Chinese could somewhat give life back to Alpha and make it count again, why is it that no one can do something new of the sort?
Maybe AMD could take the opportunity and ditch x86 altogether, forget about the license.
But going ARM doesn't cut it for me either.
Well, ...
PM961 in the next nMP?
http://www.tomshardware.com/reviews/samsung-960-evo-pm961,4737.html
 
dec, what's your feeling on x86? Is it here to stay still for long?

It's sad to see KL based on Atom (sorry, Silvermont) cores.

If the Chinese could somewhat give life back to Alpha and make it count again, why is it that no one can do something new of the sort?
Maybe AMD could take the opportunity and ditch x86 altogether, forget about the license.
But going ARM doesn't cut it for me either.
I'm not dec, but I worked for DEC.

x86 is gone - we're on like the sixth to tenth generation of x64. New processors are rolling out with AVX512 - 512-bit register instructions that can do 16 simultaneous 32-bit integer or floating point operations per instruction. Few people care if the ISA isn't elegant - that's a problem for the compilers and they seem to be doing a great job.
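To illustrate the width claim, here's a plain-Python sketch of the lane arithmetic; `vadd_epi32` is a hypothetical stand-in modeled loosely on the packed-add idea, not real AVX-512 intrinsics code:

```python
# One 512-bit AVX-512 register = 16 lanes of 32 bits, so a single vector
# add performs 16 independent 32-bit additions (simulated in plain Python).
LANES = 512 // 32   # 16

def vadd_epi32(a, b):
    """Lane-wise 32-bit add with wraparound, like a packed SIMD integer add."""
    assert len(a) == len(b) == LANES
    return [(x + y) & 0xFFFFFFFF for x, y in zip(a, b)]

a = list(range(16))
b = [10] * 16
print(vadd_epi32(a, b))  # [10, 11, ..., 25] -- 16 adds in one "instruction"
```

The real hardware does all 16 lanes in one instruction; the masking here mimics the 32-bit wraparound each lane would exhibit.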

Alpha died not because it was a bad design, but because Intel with P6 shifted to RISC processors that matched Alpha on performance but were far less expensive. DEC couldn't afford to keep up with the FAB race, and couldn't keep up with the architecture changes. (Although, DEC pioneered the modern SMT designs - but the Alpha disappeared before the SMT Alphas showed up.)
 
You know what I mean. I'm not so absent-minded that I haven't heard of x86 and AVX.
I'm talking about the whole arch in general, call it what you will. OK, it's not accurate to just call it x86, and I'm sure not referring to the 8088/8086/80186/80286/80386/80386SX/80386DX... should I go on?!
The thing is, the old legacy stuff is still there, no matter how you wrap it up to make it into something else. The patching alone gives me the creeps. Does it work? Sure. Does it work well? No doubt. Is it the most efficient and lean design possible, some 40 years or so later?
It might not be important to you that it still boots in real mode, switches modes, translates instructions, whatever.
I admit I have a problem with legacy stuff, patchworks, things amended to look like something better. And that's what x86-64 is now: layers on top of each other.
The same goes for Windows and the same reason I'd like to drop it altogether. You just need to dir the Windows folder and see what I mean.
OK, you'll now tell me that Unix (and variants) are older and suffer the same illness. Fair enough. But it seems easier to live with it.
What I like (at least more) about Apple is that they're not afraid to break with the past. As much as it hurts some.
It will cost you money buying new stuff. Sure, but eventually you will have to anyway.
Enough rant for today.

Too bad Alpha couldn't keep up, indeed. Great design at the time. Had it survived, maybe we'd now have an alternative. There are still designs based on it, though.
 