
ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
So you are saying that Apple consumers are dumb enough not to notice that everyone can do the same thing they do 2-3 times faster, for less?

Nobody is THAT stupid as to risk making their computers completely irrelevant when it comes to performance. For the time being, Intel is a "that'll-do" option for Macs. But for how much longer?

ARM will NEVER be a serious contender in the high-performance market. It will be a good option where you need reliability, like data-center jobs. But absolutely nothing that requires the highest possible performance.

And that last thing is actually what consumers and clients demand. And no, Intel does not guarantee it right now.

They do not even guarantee power efficiency.

Never say never. ARM has come a long way, particularly with Apple's chip designers. I don't think it's ready for prime-time enterprise metal yet, but you can see it coming. It's probably already good enough for laptop use.

One of the big problems is virtualization. A lot of people still need Intel binaries of one form or another to be runnable on their systems. ARM will likely suck quite a bit at trying to do that with any reasonable throughput for quite a while (of course I'd love to be wrong about that). So a lot of people are going to want an Intel/AMD solution for a good long while before ARM can handle the virtualization duties at a reasonable clip.
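The "can this binary run natively?" question boils down to comparing the binary's ISA against the host's; anything that doesn't match has to go through emulation or translation. A minimal sketch (Python, ELF binaries only; the constants come from the ELF spec, the helper names are mine):

```python
import struct

# Map ELF e_machine values to ISA names (a small subset of the spec).
ELF_MACHINES = {0x03: "x86", 0x3E: "x86-64", 0x28: "ARM", 0xB7: "AArch64"}

def elf_isa(path):
    """Return the ISA an ELF binary was compiled for, or None if not ELF."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        return None
    # e_machine is a little-endian uint16 at offset 18 of the ELF header.
    (machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINES.get(machine, hex(machine))

def needs_translation(path, host_isa="AArch64"):
    """True if the binary's ISA differs from the host's, i.e. it would
    have to run under emulation/translation rather than natively."""
    isa = elf_isa(path)
    return isa is not None and isa != host_isa
```

An x86-64 binary on an AArch64 host reports `needs_translation(...) == True`; the hard part the post is pointing at is not this check but doing the translation itself fast.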
 

askunk

macrumors 6502a
Oct 12, 2011
547
430
London
Nobody is THAT stupid as to risk making their computers completely irrelevant when it comes to performance. For the time being, Intel is a "that'll-do" option for Macs. But for how much longer?

calm down, man :D You are fighting over your own words; I never said that. It will take some time before ARM works well enough to cover the whole range, but we could see an ARM MacBook earlier than we think, imho.
I don't know what you are talking about in your answer. Do you really think an Intel chip right now beats an AMD for general processing, content creation, or rendering? AMD's chips have lower TDPs, are cheaper, and tend to keep chipset compatibility, which is pure bliss for Apple's periodic refreshes of the line.

I'm just saying that until ARM gets there (for me it could easily be in less than 10 years; for you it's in 2054 apparently, ok :D), Apple could switch to AMD instead of lagging along with hot CPUs that, for what they cost, are good only for gaming. Think of what a MBP with a Ryzen 4000 could do.
 

Zdigital2015

macrumors 601
Jul 14, 2015
4,144
5,624
East Coast, United States
calm down, man :D You are fighting over your own words; I never said that. It will take some time before ARM works well enough to cover the whole range, but we could see an ARM MacBook earlier than we think, imho.
I don't know what you are talking about in your answer. Do you really think an Intel chip right now beats an AMD for general processing, content creation, or rendering? AMD's chips have lower TDPs, are cheaper, and tend to keep chipset compatibility, which is pure bliss for Apple's periodic refreshes of the line.

I'm just saying that until ARM gets there (for me it could easily be in less than 10 years; for you it's in 2054 apparently, ok :D), Apple could switch to AMD instead of lagging along with hot CPUs that, for what they cost, are good only for gaming. Think of what a MBP with a Ryzen 4000 could do.

People might want to take a wait and see approach to the Ryzen 7 4800H.

A higher base clock than the 9980HK, but half the L3 cache; built-in Radeon graphics; still PCIe 3.0; 7nm versus 14nm; a much lower boost clock; a better TDP.

I am interested to see how the 4800H matches up against the 9980HK, but anyone here expecting a slam dunk needs to temper their expectations with reality after it starts shipping.

Beyond a couple of advantages to the 4800H, I don’t see anything remotely special that it would add to a theoretical 16” AMD MacBook Pro.
 

DoofenshmirtzEI

macrumors 6502a
Mar 1, 2011
862
713
Amazon. 10% discount does not seem much if it is the same configuration.
It actually seems pretty generous to me. It wouldn't surprise me if AMD was paying something towards that.

AWS isn't selling these machines; they are renting them, together with a bunch of other AWS services that make them usable in a datacenter. So the only thing that goes down is the cost of the processor. The rest of the physical components, rack space, power, cooling, networking, etc., none of those costs go down.

Edit:
Price of 3 year RI (basically buying for 3 years):
m5.24xlarge - $46,482
m5a.24xlarge - $41,504

This size is essentially the whole computer. I would compare the bare metal instances, but they're not available in the AMD class. Difference in price: $4,978.
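For what it's worth, the quoted RI figures work out to a bit over 10%:

```python
# Reserved-instance prices quoted above (3-year term, USD).
m5_intel = 46_482   # m5.24xlarge (Intel)
m5a_amd = 41_504    # m5a.24xlarge (AMD)

saving = m5_intel - m5a_amd
discount = saving / m5_intel

print(f"Saving: ${saving:,}")       # Saving: $4,978
print(f"Discount: {discount:.1%}")  # Discount: 10.7%
```

And since, as noted, the processor is only one line item among rack space, power, cooling, and networking, a ~10% discount on the whole instance implies a much larger discount on the CPU itself.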
 
Last edited:

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Good lord, you have to recompile the app for the distribution. Leave it to desktop Linux to make even Windows seem like ice water in hell.
But, to be fair, most apps that need Linux's horrifically unportable kernel interfaces are distributed as source code and makefiles, to be easily rebuilt for the current system.

For example, the Nvidia graphics drivers have to build against the current kernel. The Nvidia kit (the ".run" file) does this by itself. No big deal (although it is a butt pain that updating the kernel requires reinstalling the Nvidia kernel driver to get the same bits rebuilt with the new kernel header files).

Windows has an opaque kernel API, so kernel code "simply works" from build to build. Sad that Linux doesn't, but well-packaged software makes it a rather minor inconvenience.
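The "rebuilt with the new kernel header files" step exists because an out-of-tree module only loads against the kernel it was built for. A toy sketch of the staleness check that tools like DKMS automate (the function name and directory layout are illustrative, not any real tool's API):

```python
import subprocess
from pathlib import Path

def module_built_for_kernel(module_name, running_kernel=None,
                            lib_modules="/lib/modules"):
    """True if a .ko for `module_name` exists under the running kernel's
    module tree, i.e. it was (re)built against that kernel's headers."""
    if running_kernel is None:
        # `uname -r` reports the running kernel release, e.g. "5.4.0-26-generic".
        running_kernel = subprocess.check_output(
            ["uname", "-r"], text=True).strip()
    tree = Path(lib_modules) / running_kernel
    # Modules may be compressed (.ko.xz etc.), hence the glob suffix.
    return any(tree.rglob(f"{module_name}.ko*"))
```

After a kernel update the new release gets a fresh, empty module tree, so a check like this comes back false until the driver is rebuilt, which is exactly why the Nvidia `.run` installer has to be re-run.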
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
But, to be fair, most apps that need Linux's horrifically unportable kernel interfaces are distributed as source code and makefiles to easily rebuild for the current system.

For example, the Nvidia graphics drivers have to build against the current kernel. The Nvidia kit (the ".run" file) does this by itself. No big deal (although it is a butt pain that updating the kernel requires reinstalling the Nvidia kernel driver to get the same bits rebuilt with the new kernel header files).

Windows has an opaque kernel API so that kernel code "simply works" from build to build. Sad that Linux doesn't, but well packaged packages make it a rather minor inconvenience.

To be fair, if they distributed it in assembly it would be far more resource-efficient, but I'll basically pass on both.
 

ssgbryan

macrumors 65816
Jul 18, 2002
1,488
1,420
So you are saying that Apple consumers are dumb enough not to notice that everyone can do the same thing they do 2-3 times faster, for less?

Nobody is THAT stupid as to risk making their computers completely irrelevant when it comes to performance. For the time being, Intel is a "that'll-do" option for Macs. But for how much longer?

ARM will NEVER be a serious contender in the high-performance market. It will be a good option where you need reliability, like data-center jobs. But absolutely nothing that requires the highest possible performance.

And that last thing is actually what consumers and clients demand. And no, Intel does not guarantee it right now.

They do not even guarantee power efficiency.

Have you not been reading the threads here? There are a number of folks here, in this very forum, who will defend the right to spend 2-3 times the money for less performance.
Good lord, you have to recompile the app for the distribution. Leave it to desktop Linux to make even Windows seem like ice water in hell.

That isn't a bug - it is a feature.

Linux assumes a certain level of technical competence. Heavy lifting gets done on Linux, not Windows, and certainly not OSX.
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Have you not been reading the threads here? There are a number of folks here, in this very forum, who will defend the right to spend 2-3 times the money for less performance.


That isn't a bug - it is a feature.

Linux assumes a certain level of technical competence. Heavy lifting gets done on Linux, not Windows, and certainly not OSX.

Desktop Linux is still a giant joke. YMMV.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Have you not been reading the threads here? There are a number of folks here, in this very forum, who will defend the right to spend 2-3 times the money for less performance.
Yeah...

Which is actually baffling, considering they constantly talk about stuff they cannot do with Mac hardware on the GPU side... O_O

Maybe it has nothing to do with actual performance, then, and more to do with attachment to brands?

That isn't a bug - it is a feature.

Linux assumes a certain level of technical competence. Heavy lifting gets done on Linux, not Windows, and certainly not OSX.
Isn't what Linux represents the exact opposite of this very ecosystem?

Apple is for people who cannot do things themselves with computers. Linux is for people who can, and who actually like it.
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
That isn't a bug - it is a feature.

Linux assumes a certain level of technical competence. Heavy lifting gets done on Linux, not Windows, and certainly not OSX.

Yeah, all the "real" heavy lifting is done on the command line. And those guys are wimps. The really heavy lifting is done by flipping switches. But those are wimps too. The really, really heavy lifting is done by carving marks into rock with heavy boulders. And even they are wimps, because the really REALLY heavy lifting is done with lava, poured into trenches, forged by hand:

[GIF: the Leidenfrost effect lets a man briefly pass his hand through molten metal]


Look at him, bare-handed, editing the heavy metal lava into machine code.

The rest of you are all wimp poseurs.
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053

Because of the Intel CPU drought going into all of 2020, HPE suggests turning to "alternative brands" for CPUs.

Why would there be a shortage of Xeon chips if just about "everybody" wasn't buying them? If there were tons fewer folks buying them, then there would typically be a surplus, not a shortage. A shortage indicates that demand remains relatively high, not that it has utterly crumbled and fallen off a cliff.

Yes, AMD will get a fair share of this supply-demand gap that Intel is leaving on the table. But they too can't get enough wafers to completely fill the gap... so, to some extent, "pot, meet kettle". Similar issue, different degrees.

Another "take your head out of the Charlie overhype" smokescreen. Skylake and Cascade Lake are on the same fabs (14nm). If Skylake is available and Cascade Lake isn't, that is more about what is being baked at the fab than a fab screw-up.

Note also that Skylake and Cascade Lake are socket-compatible, so for customers with equipment that has that socket, Skylake would be an alternative. (In contrast, Cooper Lake wouldn't be, because the socket changed, unless they did a whole board change.)

Also from same article

"... The scarcity of server silicon isn't helped by forecasting from hyperscalers described as "rubbish" by a source close to the situation. "And then they land huge orders on Intel," they added, meaning the cloud giants aren't very helpful at predicting their chip needs until they place significant orders for parts. ..."

The biggest hyperscalers (AWS, Google, Facebook, etc.) largely don't buy from HPE (or Dell or Lenovo) anymore. The hyperscalers (and supercomputers) get first crack at the Xeon SP rollouts; those old-school players are drifting toward second tier. If the choice is between hyperscalers willing to throw megabucks at an incrementally better AI-workload Cooper Lake and vendors looking for a deal on Xeon SP pricing, Intel is probably chasing the higher margin right now. They will burn some overall market share, but numbers-wise it is the better hand to play (along with some price cuts in other places where they were too greedy and have less traction at the moment).

At the hyperscaler level the alternatives are not just AMD solutions.

"... the hyperscalers are also looking to other CPU architectures. Amazon, Google, Microsoft, and others, continue to make noises about using AMD, Arm, Power, and RISC-V designs in their operations. ..."

For 'edge' servers, the window for the current set of leading-edge ARM server options is at least as good as it is for the AMD options. AMD isn't going to get all of the gap that Intel is leaving here.
 
Last edited:

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853

I suggest watching this video about Intel graphics, adding in the comments about the culture inside Intel, and thinking about the 10 nm fiasco and the reason behind it.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Yea, all the "real" heavy lifting is on the command line. And those guys are wimps. The really heavy lifting, done flipping switches. But those are wimps too. The really really heavy lifting is done using carving rock marks with heavy boulders.
Heavy lifting on Linux can be done without the command line. Also, I am completely baffled: what is so difficult about the command line? Once you get used to it, you want to do everything on your system through it, because it's the fastest, easiest, and most reliable way to do it.
 