
cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I should have listened to people and not gotten myself a first gen product. As fast as the M1 is, without fully functional Adobe products (Ai, Id, Pr, Ae), it is like having a nice netbook. Borrowed my friend's old 2017 MBP and it feels like a proper computer. Awful battery but I don't see myself traveling in the near future. I know it is up to Adobe but are there enough of us M1 users to put pressure on them?

What doesn’t function about those adobe products?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I've worked in my field with religious people and while yes, there are a few who can't help but be biased because of their religion, the vast majority were professional about it and didn't let it affect their interactions with other employees during work hours. The CEO is definitely religious. I read his book just to get an idea (I'm an avid reader) - while I was hoping for more of a history on Intel, it was mostly a book about his spiritual accomplishments and life.


Dang! A netbook! Hah, I remember those things. I guess I should be happy that my workflow and the apps I use regularly don't involve Adobe. :/ - Good luck.

Being religious is one thing - but starting an organization with the stated goal of CONVERTING a million people is entirely another. There is nothing more despicable than proselytizing. If people have religious beliefs, what sort of ego does it take to believe that your personal beliefs are more correct than theirs, and that you need to correct theirs? And how many current and potential employees grew up in religions with histories of people forcibly converting them? Are those people going to think kindly of such a CEO?
 

jerryk

macrumors 604
Nov 3, 2011
7,421
4,208
SF Bay Area
I should have listened to people and not gotten myself a first gen product. As fast as the M1 is, without fully functional Adobe products (Ai, Id, Pr, Ae), it is like having a nice netbook. Borrowed my friend's old 2017 MBP and it feels like a proper computer. Awful battery but I don't see myself traveling in the near future. I know it is up to Adobe but are there enough of us M1 users to put pressure on them?
Adobe has never been quick to update for new platforms. They ported Lightroom and Photoshop to the M1 as a learning experience for their developers. I suspect the other products will be M1 native soon. I would not be surprised to see something from Adobe at WWDC.
 

Mistborn15

macrumors regular
Feb 5, 2021
216
257
What doesn’t function about those adobe products?
Out of these, only Pr works to a reasonable extent. Even there, a bit of Lumetri Color and it is beachball time! For the rest of them, anything more than flat design and a couple of artboards is almost guaranteed to crash the app.

Adobe has never been quick to update for new platforms. They ported Lightroom and Photoshop to the M1 as a learning experience for their developers. I suspect the other products will be M1 native soon. I would not be surprised to see something from Adobe at WWDC.
Yes. I'm blaming neither Apple nor Adobe. Apple can't launch an M2 without an M1, and Adobe probably has other priorities than the ~1-2% of its customers with M1s. I should have waited.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
And now we see this morning that M2 entered volume production. Based on new cores. So expect improved single and multi core performance. Performance per watt will also increase. Single core improvement around 20 percent. Multi core not clear - I don’t know how many cores this thing will have.

Everyone else is skating to where the puck is, while Apple is busy moving it down the ice.

If this report is accurate, do you think they'll skip production of the larger M1s entirely? I mean, it wouldn't make sense to continue producing larger M1s if the M2 is already out. Of course, we've heard from multiple sources, and even from suggestive codes buried in macOS, that there are larger M1s. But this is now the second time we've also heard that the next node's production is going earlier than planned, so maybe Apple pulled the trigger on going to M2 early and dropped the larger M1s? Can plans just change like that? I was given to understand that making such large changes was difficult with silicon production. Of course, Apple being so huge, maybe they just bite the bullet on the cost?
 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
You think Intel’s Israeli design team wants to be lectured to about how great Jesus is and why they should convert? I don’t even think their Portland design team will put up with such a CEO. And you’d be surprised how often political affiliations and outside activities by the CEO are a factor in recruiting and continuing employment in Silicon Valley. Even if he keeps his mouth shut at the office it will become an issue. Employees who are passed over for promotions and raises will start suing claiming it’s religious discrimination. It’s going to become an issue in time.

Absolutely, I can imagine that even if the CEO keeps the two separate, externalities can affect morale if they become problematic or toxic. I was just noting that it hasn't happened *yet*, not from the get-go. So far, reports are that morale is up at Intel and they've hired some good people (though most of those are coming out of retirement, some are new). But of course what happens over time is quite different.

In the Premier League (heck, in football/soccer in general), they hire and fire coaches with alarming regularity. There is almost always a performance bump and huge optimism after they hire a new one - especially a big name. This almost always fades. How the team performs after that is how you know whether the coach is helping or whether it's more of the same ... or worse.
 

Doc_DC

macrumors newbie
Apr 27, 2021
1
0
Out of these, only Pr works to a reasonable extent. Even there, a bit of Lumetri Color and it is beachball time! For the rest of them, anything more than flat design and a couple of artboards is almost guaranteed to crash the app.


Yes. I'm blaming neither Apple nor Adobe. Apple can't launch an M2 without an M1, and Adobe probably has other priorities than the ~1-2% of its customers with M1s. I should have waited.
I believe this is why Apple launched its silicon targeting the home consumer market first (most of the prosumer-to-professional apps would take some time to migrate over, while Rosetta 2 would be fast enough for the home consumer market).

I myself picked up one of the 16GB M1 Mac minis. It runs Final Cut X and Logic Pro amazingly fast. It's literally cut the time to complete my weekly workflow down to 1/4 of what it used to be. Seriously. It's cut the time down so much that my main customers now realize that potential changes can be made later in the process while keeping the same audio/video quality. So much for that M1 speed, I guess I could charge more... Hmmm.

Anyways, I can't wait for the MacBook Pro FX (you know, the one with the M2 silicon and a 16-inch display)...
Yeah, the MacBook Pro FX...
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
If this report is accurate, do you think they'll skip production of the larger M1s entirely? I mean, it wouldn't make sense to continue producing larger M1s if the M2 is already out. Of course, we've heard from multiple sources, and even from suggestive codes buried in macOS, that there are larger M1s. But this is now the second time we've also heard that the next node's production is going earlier than planned, so maybe Apple pulled the trigger on going to M2 early and dropped the larger M1s? Can plans just change like that? I was given to understand that making such large changes was difficult with silicon production. Of course, Apple being so huge, maybe they just bite the bullet on the cost?

I have absolutely no idea what they're up to :) All I "knew" was that there was never going to be an "M1X" and that "M2" was going to be next. Maybe next year they upgrade M1 products to "big" M1 products and keep M2 derivatives at the high end. Maybe they were never going to do an M1 with more cores because it wasn't differentiated enough. Maybe today's rumors are wrong. I have no idea.
 
  • Like
Reactions: crazy dave

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
@cmaier, with your knowledge of these things, what is the likelihood that Apple could develop a hybrid core that could be run as either big or little?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
@cmaier, with your knowledge of these things, what is the likelihood that Apple could develop a hybrid core that could be run as either big or little?

In a sense, all cores are already hybrid: they run only when they have something to do, and, at least for the last 15 years or so, they are physically disconnected from the power supply when they aren't needed. But the "big"/"little" distinction that has emerged over the last half decade or so usually refers to things like how many cycles it takes to perform a calculation, whether every operation is available in a particular core, sometimes whether instructions are issued in-order or out-of-order, how many instructions can be run in parallel, etc.

It’s certainly possible to make cores that benefit from some of these tricks in an optional sense, depending on whether a core is told to run fast or slow. But I don’t think it would make a lot of sense. Why have a complicated scheduling unit that, when operating in slow mode, still requires an extra pipeline stage just to do not much of anything? Why put in all the circuitry to be able to disconnect portions of circuits instead of entire circuits from the power rails? There just doesn’t seem to be much benefit to trying it.

Putting it more concretely - we always designed cores to use as little energy as possible (well, not always; starting in the early 1990s we did), by turning stuff off, slowing clocks, etc. But even with all those tricks, these cores were never as low power as if we had designed a true heterogeneous core. (We considered doing that sort of thing back then, but the transistor count was such that we couldn't quite afford it yet.)
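
On the software side, by the way, none of this is something an app controls directly - the OS decides which cluster a thread lands on. As a rough illustration (my own sketch, not anything specific to these cores), macOS uses Grand Central Dispatch quality-of-service classes to bias placement toward the efficiency or performance cores:

```swift
import Foundation

// Rough sketch: apps don't pick a core type; they attach a QoS class to work,
// and the scheduler uses that (among other things) to decide placement.

// Latency-tolerant work (indexing, syncing) tends to land on efficiency cores.
DispatchQueue.global(qos: .utility).async {
    print("low-priority work, likely on an efficiency core")
}

// Latency-sensitive, user-visible work is favoured for the performance cores.
DispatchQueue.global(qos: .userInitiated).async {
    print("user-initiated work, likely on a performance core")
}

// Keep the process alive long enough for both blocks to run.
Thread.sleep(forTimeInterval: 1)
```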
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
But also keep in mind that cheap hardware is where the bulk of the growth has been during the pandemic, so it's not terribly surprising to see these sorts of numbers, IMO. (Edit: if there are details on how much of this is from Intel's weaker margins versus shifts in the types of sales in the market, I'd love to see them.)

So AMD's numbers confirm it: the problem is with Intel. Average selling prices and margins were up for AMD, so Intel is being forced to sell its systems more cheaply and/or is selling only the weaker, lower-margin units.

 

SlCKB0Y

macrumors 68040
Feb 25, 2012
3,431
557
Sydney, Australia
First, I agree those big data centers want efficient chips and that's important to them, but Intel really hasn't been dominant in that market to begin with. Before, it was mainframes and midranges; now I expect ARM-like processors already outnumber Intel processors in that space.
Sorry, but this is total bollocks. I've worked in commercial colocation data centres for the last 15 years, and in the 2000s everything was rack-mount Intel (in DELL, HP, IBM etc. servers), then in the 2010s everything was blade Intel; it's only in the last few years that ARM has been making any sort of real inroads there. Almost all of Google, Amazon and Azure run on Intel as well, and a lot of their servers are in these same colocation data centres as everyone else's gear, as well as in their own dedicated DCs.

You certainly don't walk around these data centres and see "mainframes". That's an absolute load of rubbish. Maybe these exist in the on-premise data centres of financial institutions or government and the like, but nowhere else.

You're working in a "manufacturing plant" and you feel super confident telling people who work in enterprise data centres every day what is going on in there? How arrogant can you be?
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Sorry, but this is total bollocks. I've worked in commercial colocation data centres for the last 15 years, and in the 2000s everything was rack-mount Intel (in DELL, HP, IBM etc. servers), then in the 2010s everything was blade Intel; it's only in the last few years that ARM has been making any sort of real inroads there. Almost all of Google, Amazon and Azure run on Intel as well, and a lot of their servers are in these same colocation data centres as everyone else's gear, as well as in their own dedicated DCs.

Precisely. All supercomputers I have ever worked with were Intel-only. It's just this year that my university added some AMD EPYC nodes.

ARM is quickly gaining market share, especially with giants like Amazon, but it will be years before it can hope to match x86 in this space. And it depends solely on whether ARM can deliver with its upcoming Neoverse chips and what Intel's and AMD's responses will be.

Part of the problem is that people spend five minutes reading stuff online and then come back saying "look, the fastest supercomputer on the planet is ARM, so ARM is doing great in the server space", while completely ignoring the fact that Fugaku is a one-of-its-kind system, built for a specific purpose, using a custom CPU that was specifically designed for it. But then when you actually look at the TOP500, like 90% of the systems are using Intel CPUs...
 
  • Like
Reactions: BigMcGuire

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Sorry, but this is total bollocks. I've worked in commercial colocation data centres for the last 15 years, and in the 2000s everything was rack-mount Intel (in DELL, HP, IBM etc. servers), then in the 2010s everything was blade Intel; it's only in the last few years that ARM has been making any sort of real inroads there. Almost all of Google, Amazon and Azure run on Intel as well, and a lot of their servers are in these same colocation data centres as everyone else's gear, as well as in their own dedicated DCs.

You certainly don't walk around these data centres and see "mainframes". That's an absolute load of rubbish. Maybe these exist in the on-premise data centres of financial institutions or government and the like, but nowhere else.

You're working in a "manufacturing plant" and you feel super confident telling people who work in enterprise data centres every day what is going on in there? How arrogant can you be?
You could be a little nicer with your correction; you go WAY too far into attack territory. But anyway, I stand corrected. If you look at my phrasing, you can see I was guessing and just stating an opinion based on what I see. (I don't work on the provider side; I'm on the other side of the equation.)

And also, FWIW, I go back to the mid-70s with computers, and we did have real "mainframes" back then sitting in the big data centers -- that was before the PC architecture was even born. (IBM 360s, 370s and lots of other kinds.)
 
  • Like
Reactions: SlCKB0Y

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Precisely. All supercomputers I have ever worked with were Intel-only. It's just this year that my university added some AMD EPYC nodes.

ARM is quickly gaining market share, especially with giants like Amazon, but it will be years before it can hope to match x86 in this space. And it depends solely on whether ARM can deliver with its upcoming Neoverse chips and what Intel's and AMD's responses will be.

Part of the problem is that people spend five minutes reading stuff online and then come back saying "look, the fastest supercomputer on the planet is ARM, so ARM is doing great in the server space", while completely ignoring the fact that Fugaku is a one-of-its-kind system, built for a specific purpose, using a custom CPU that was specifically designed for it. But then when you actually look at the TOP500, like 90% of the systems are using Intel CPUs...

In the 2000s there were a ****-ton of AMD supercomputers, using Opterons. (Just throwing that out there because, you know why.)
 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Precisely. All supercomputers I have ever worked with were Intel-only. It's just this year that my university added some AMD EPYC nodes.

ARM is quickly gaining market share, especially with giants like Amazon, but it will be years before it can hope to match x86 in this space. And it depends solely on whether ARM can deliver with its upcoming Neoverse chips and what Intel's and AMD's responses will be.

Part of the problem is that people spend five minutes reading stuff online and then come back saying "look, the fastest supercomputer on the planet is ARM, so ARM is doing great in the server space", while completely ignoring the fact that Fugaku is a one-of-its-kind system, built for a specific purpose, using a custom CPU that was specifically designed for it. But then when you actually look at the TOP500, like 90% of the systems are using Intel CPUs...
Not me, mine were all POWER. All the Intel boxen were low-end or small servers, not supercomputers. You can't call small x86 systems supercomputers; they're not even the same concept or design. Now, that doesn't mean they don't exist, just that the vast number of servers out there aren't what I'd call supercomputers.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Not me, mine were all POWER. All the Intel boxen were low-end or small servers, not supercomputers. You can't call small x86 systems supercomputers; they're not even the same concept or design. Now, that doesn't mean they don't exist, just that the vast number of servers out there aren't what I'd call supercomputers.

I think it's a bit of a different era :) My current university uses a computing cluster that is mostly Xeons and some EPYCs (which are a new addition). I also used to work with Piz Daint (which at one time was in the top 5 supercomputers of the world), which is all Sandy Bridge EP...
 

pasamio

macrumors 6502
Jan 22, 2020
356
297
The TOP500 list from November 2020 seems to have ARM in first place, POWER in 2nd and 3rd, a Chinese chip in fourth, and EPYC in fifth and seventh. Intel then fills out the rest of the spots (sixth, eighth, ninth and tenth). Looking further out to the top 100, Intel is certainly dominant, and SPARC makes an appearance in 99th.
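
A quick tally of that top ten, using just the placements described above (the ARM system at #1 is Fugaku, mentioned earlier in the thread; I'm assuming the Chinese chip at #4 is Sunway):

```swift
// Tally of the November 2020 TOP500 top ten by CPU architecture,
// using the placements listed above.
let top10: [(rank: Int, arch: String)] = [
    (1, "ARM"),
    (2, "POWER"), (3, "POWER"),
    (4, "Sunway"),
    (5, "AMD EPYC"),
    (6, "Intel"),
    (7, "AMD EPYC"),
    (8, "Intel"), (9, "Intel"), (10, "Intel")
]

let counts = Dictionary(grouping: top10, by: { $0.arch }).mapValues { $0.count }
print(counts)
// e.g. ["Intel": 4, "POWER": 2, "AMD EPYC": 2, "ARM": 1, "Sunway": 1]
// (dictionary order varies) -- Intel still takes 4 of the top 10,
// and far more of the top 100.
```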
 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
In a sense, all cores are already hybrid: they run only when they have something to do, and, at least for the last 15 years or so, they are physically disconnected from the power supply when they aren't needed. But the "big"/"little" distinction that has emerged over the last half decade or so usually refers to things like how many cycles it takes to perform a calculation, whether every operation is available in a particular core, sometimes whether instructions are issued in-order or out-of-order, how many instructions can be run in parallel, etc.

It’s certainly possible to make cores that benefit from some of these tricks in an optional sense, depending on whether a core is told to run fast or slow. But I don’t think it would make a lot of sense. Why have a complicated scheduling unit that, when operating in slow mode, still requires an extra pipeline stage just to do not much of anything? Why put in all the circuitry to be able to disconnect portions of circuits instead of entire circuits from the power rails? There just doesn’t seem to be much benefit to trying it.

Putting it more concretely - we always designed cores to use as little energy as possible (well, not always; starting in the early 1990s we did), by turning stuff off, slowing clocks, etc. But even with all those tricks, these cores were never as low power as if we had designed a true heterogeneous core. (We considered doing that sort of thing back then, but the transistor count was such that we couldn't quite afford it yet.)

I was thinking along similar lines to @Sydde, and maybe this is a dumb idea, but where I would see the advantage is if you could effectively segment, say, a single Firestorm-like core into 3 Icestorm-like cores - not like SMT, where it is all still single and shared, but truly 3 effective cores (I chose a 1-to-3 ratio since, IIRC, a Firestorm core is currently roughly 2.5-3x bigger than an Icestorm). That way you would have just, say, 5 cores and flexibility in how they're arranged (i.e. how many in "performance" mode and how many in "power saving" mode), using the same techniques they already use to determine where threads run and migrate to. The advantage is that you would be able to tailor your execution mode on the fly to what the user actually needs, resulting in better performance per watt for different tasks. I don't know if it really gets you that much in practice (your post indicates not much), or how much redundancy the bigger core would need to make it actually work, but I thought it was a neat idea.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
I think it's a bit of a different era :) My current university uses a computing cluster that is mostly Xeons and some EPYCs (which are a new addition). I also used to work with Piz Daint (which at one time was in the top 5 supercomputers of the world), which is all Sandy Bridge EP...

Mine started adding EPYC nodes to its cluster earlier than yours did, but yes, much of it is still Intel, and I have no idea if or when they'll add ARM chips. I can't remember if we have any POWER nodes or not. I don't think so, but maybe. I only used a small fraction of it.

Edit: in the future, Grace might drive some adoption given the focus on GPGPU.

PS: I don’t think it was this thread we were discussing it in, but doesn’t really matter. Did you see the Asahi Linux team got glxgears working?


I know I was more optimistic than you, but I have to admit this is way faster progress than even I thought.
 
Last edited:

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I was thinking along similar lines to @Sydde, and maybe this is a dumb idea, but where I would see the advantage is if you could effectively segment, say, a single Firestorm-like core into 3 Icestorm-like cores - not like SMT, where it is all still single and shared, but truly 3 effective cores (I chose a 1-to-3 ratio since, IIRC, a Firestorm core is currently roughly 2.5-3x bigger than an Icestorm). That way you would have just, say, 5 cores and flexibility in how they're arranged (i.e. how many in "performance" mode and how many in "power saving" mode), using the same techniques they already use to determine where threads run and migrate to. The advantage is that you would be able to tailor your execution mode on the fly to what the user actually needs, resulting in better performance per watt for different tasks. I don't know if it really gets you that much in practice (your post indicates not much), or how much redundancy the bigger core would need to make it actually work, but I thought it was a neat idea.

That would be no more performant than just having triple the number of Icestorm cores. You wouldn't improve "single-core" performance that way, only multi-core performance (and single-core performance would go way down). And since multi-core performance = single-core performance times the number of cores (more or less), you'd be lowering multi-core performance unless you increased the number of cores a lot.

The difference between Firestorm and Icestorm is a very different thing than having three-cores-in-one vs. not.
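
To put rough numbers on that (toy numbers of my own, not measurements): assume a small core delivers about a third of a big core's single-thread performance.

```swift
// Back-of-the-envelope: one big core vs. the same budget spent on three small
// cores, each assumed to run at ~1/3 the single-thread speed of the big one.
let firestormPerf = 1.0           // normalised single-thread performance
let icestormPerf  = 1.0 / 3.0     // assumed relative performance of a small core

let singleThread = (big: firestormPerf, split: icestormPerf)           // 1.0 vs ~0.33
let throughput   = (big: firestormPerf * 1, split: icestormPerf * 3)   // 1.0 vs ~1.0

print(singleThread, throughput)
// Aggregate throughput comes out about the same, while single-thread
// performance drops to roughly a third -- i.e. the split buys nothing you
// couldn't get by simply adding more small cores.
```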
 