I should have listened to people and not gotten myself a first gen product. As fast as the M1 is, without fully functional Adobe products (Ai, Id, Pr, Ae), it is like having a nice netbook. Borrowed my friend's old 2017 MBP and it feels like a proper computer. Awful battery but I don't see myself traveling in the near future. I know it is up to Adobe but are there enough of us M1 users to put pressure on them?
I've worked in my field with religious people and while yes, there are a few who can't help but be biased because of their religion, the vast majority were professional about it and didn't let it affect their interactions with other employees during work hours. The CEO is definitely religious. I read his book just to get an idea (I'm an avid reader) - while I was hoping for more of a history on Intel, it was mostly a book about his spiritual accomplishments and life.
Dang! A netbook! Hah, I remember those things. I guess I should be happy that my workflow and the apps I use regularly don't involve Adobe. :/ - Good luck.
Adobe has never been quick to update for new platforms. They ported Lightroom and Photoshop to the M1 as a learning experience for their developers. I suspect the other products will be M1 native soon. I would not be surprised to see something from Adobe at WWDC.
What doesn't function about those Adobe products?
Out of these, only Pr works to a reasonable extent. Even in that, a bit of Lumetri Color and it is beachball time! For the rest of them, anything more than flat design and a couple of artboards is almost guaranteed to crash the app.
Yes. Blaming neither Apple nor Adobe. Apple can't launch an M2 without an M1, and Adobe probably have other priorities than the ~1-2% of their customers with M1s. I should have waited.
And now we see this morning that M2 entered volume production. Based on new cores. So expect improved single and multi core performance. Performance per watt will also increase. Single core improvement around 20 percent. Multi core not clear - I don’t know how many cores this thing will have.
Everyone else is skating to where the puck is, while Apple is busy moving it down the ice.
You think Intel’s Israeli design team wants to be lectured to about how great Jesus is and why they should convert? I don’t even think their Portland design team will put up with such a CEO. And you’d be surprised how often political affiliations and outside activities by the CEO are a factor in recruiting and continuing employment in Silicon Valley. Even if he keeps his mouth shut at the office it will become an issue. Employees who are passed over for promotions and raises will start suing claiming it’s religious discrimination. It’s going to become an issue in time.
I believe that this is why Apple launched its silicon to target the home consumer market first (most of the prosumer-to-professional apps would take some time to migrate over, while Rosetta 2 would be fast enough for the home consumer market).
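As an aside, if you ever want to check whether a given process is actually going through that translation layer, macOS exposes a sysctl.proc_translated flag for exactly this (per Apple's Rosetta documentation). A minimal Swift sketch; the wrapper function name and the print line are mine, not Apple's:

```swift
import Foundation

// Minimal sketch: ask macOS whether the current process is an Intel binary
// being translated by Rosetta 2, via the "sysctl.proc_translated" flag.
// Returns false when running natively, or when the flag isn't available
// at all (e.g. on an Intel Mac or an older OS).
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    let status = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return status == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "Translated by Rosetta 2" : "Running natively")
```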
If this report is accurate, do you think they'll skip production on the larger M1s entirely? I mean, it wouldn't make sense to continue with production of larger M1s if the M2 is already out. Of course, we've heard from multiple sources, and even suggestive codes buried in macOS, that there are larger M1s. But this is now the second time we've also heard the next node's production was going earlier than planned, so maybe Apple pulled the trigger on going to M2 early too and dropped the larger M1s? Can plans just change like that? I was given to understand that making such large changes was difficult with silicon production. Of course, Apple being so huge, maybe they just bite the bullet on the cost?
@cmaier, with your knowledge of these things, what is the likelihood that Apple could develop a hybrid core that could be run as either big or little?
But also keep in mind that cheap hardware is where the bulk of the growth has been during the pandemic, so it's not terribly surprising to see these sorts of numbers, IMO. (Edit: if there are details on how much of this is from Intel's weaker margins versus shifts in the types of sales in the market, I'd love to see them.)
First, I agree those big data centers want efficient chips and that's important to them, but Intel really hasn't been dominant in that market to begin with. Before, it was mainframes and midranges; now I expect ARM-like processors already outnumber Intel processors in that space.
Sorry, but this is total bollocks. I've worked in commercial colocation data centres for the last 15 years, and in the 2000s everything was rack-mount Intel (in Dell, HP, IBM, etc. servers), then in the 2010s everything was blade Intel; it's only in the last few years that ARM has been making any sort of real inroads there. Almost all of Google, Amazon and Azure run on Intel as well, and a lot of their servers are in these same colocation data centres as everyone else's gear, as well as having their own dedicated DCs.
You could be a little nicer with your correction; you go WAY too far into attack territory, but anyway, I stand corrected. If you'll look at my phrasing, you can see I was guessing and just stating an opinion based on what I see. (I don't work on the provider side; I'm on the other side of the equation.)
You certainly don't walk around these data centres and see "mainframes". That's an absolute load of rubbish. Maybe those exist in the on-premise data centres of financial institutions or government and the like, but nowhere else.
You're working in a "manufacturing plant" and you feel super confident telling people who work in enterprise data centres every day what is going on in there? How arrogant can you be?
Precisely. All supercomputers I have ever worked with were Intel-only. It's just this year that my university added some AMD EPYC nodes.
ARM is quickly gaining market share, especially with giants like Amazon, but it will be years before it can hope to match x86 in this space. And it solely depends on whether ARM can deliver with their upcoming Neoverse chips and what Intel's and AMD's responses will be.
Part of the problem is that people spend five minutes reading stuff online and then come back saying "look, the fastest supercomputer on the planet is ARM, so ARM is doing great in the server space", while completely ignoring the fact that Fugaku is a one-of-its-kind system, built for a specific purpose, using a custom CPU that was specifically designed for it. But then when you actually look at the TOP500, something like 90% of the systems are using Intel CPUs...
Not me, mine were all POWER. All the Intel boxen were low-end or small servers and not supercomputers. You can't call small x86 systems supercomputers; it's not even the same concept or design. That doesn't mean they don't exist, just that the vast number of servers out there aren't what I'd call supercomputers.
In the 2000s there were a ****-ton of AMD supercomputers, using Opterons. (Just throwing that out there because, you know why.)
Silicon Graphics machines, I remember, took that route.
So did Cray. I seem to remember we had many of the top 10, including, I think, the top couple.
That's right! Yes, just before Cray was sold they switched to AMD. Such cool systems.
In a sense, all cores are already hybrid: they run only when they have something to do, and, at least for the last 15 years or so, they are physically disconnected from the power supply when they aren't needed. But the "big"/"little" distinction that has emerged over the last half decade or so usually refers to things like how many cycles it takes to perform a calculation, whether every operation is available in a particular core, sometimes whether instructions are issued in-order or out-of-order, how many instructions can be run in parallel, etc.
It’s certainly possible to make cores that benefit from some of these tricks in an optional sense, depending on whether a core is told to run fast or slow. But I don’t think it would make a lot of sense. Why have a complicated scheduling unit that, when operating in slow mode, still requires an extra pipeline stage just to do not much of anything? Why put in all the circuitry to be able to disconnect portions of circuits instead of entire circuits from the power rails? There just doesn’t seem to be much benefit to trying it.
Putting it more concretely: we always designed cores to use as little energy as possible (well, not always; starting in the early 1990s we did), by turning stuff off, slowing clocks, etc. But even with all those tricks, those cores were never as low-power as if we had designed a true heterogeneous core. (We considered doing that sort of thing back then, but transistor counts were such that we couldn't quite afford it yet.)
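Side note: you can already see the current big/little split from software. A minimal Swift sketch, assuming the per-cluster hw.perflevel0 / hw.perflevel1 sysctl names that newer Apple Silicon builds of macOS expose (it just prints "n/a" if they aren't there on your machine or OS):

```swift
import Foundation

// Minimal sketch, assuming the per-cluster sysctl names "hw.perflevel0.physicalcpu"
// (performance cluster) and "hw.perflevel1.physicalcpu" (efficiency cluster).
// Returns nil if the name isn't available on this machine/OS.
func clusterCoreCount(_ name: String) -> Int32? {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname(name, &value, &size, nil, 0) == 0 else { return nil }
    return value
}

let bigText = clusterCoreCount("hw.perflevel0.physicalcpu").map { String($0) } ?? "n/a"
let littleText = clusterCoreCount("hw.perflevel1.physicalcpu").map { String($0) } ?? "n/a"
print("performance cores: \(bigText), efficiency cores: \(littleText)")
```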
I think it's a bit of a different era. My current university uses a computing cluster that is mostly Xeons and some EPYCs (which are a new addition). I also used to work with Piz Daint (which at one time was in the top 5 supercomputers in the world), which is all Sandy Bridge EP...
I was thinking along similar lines to @Sydde, and maybe this is a dumb idea, but where I would see the advantage is if you could effectively segment, say, a single Firestorm-like core into 3 Icestorm-like cores - not like SMT, where it is all still single and shared, but they would truly be, effectively, 3 cores (I chose a 1-to-3 ratio since, IIRC, right now a Firestorm core is roughly 2.5-3x bigger than an Icestorm). That way you would have just, say, 5 cores and flexibility in how they are arranged (i.e. how many in "performance" mode and how many in "power saving" mode), using the same techniques they already use to determine where threads run and migrate to. The advantage is that you would be able to tailor your execution mode on the fly to what the user actually needs, resulting in better power-to-performance for different tasks. I don't know if it really gets you that much in practice (your post indicates not much), or how much redundancy the bigger core would need to make that actually work, but I thought it was a neat idea.
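To make the scheduling side concrete: as I understand it, software today never picks a Firestorm or Icestorm core directly; it hands the OS a quality-of-service hint, and the kernel decides which cluster a thread lands on (and may migrate it later). A minimal Swift sketch of that hinting; the two closures are just stand-ins for real work:

```swift
import Foundation

let group = DispatchGroup()

// Background QoS: the scheduler is free to keep this on the efficiency cluster.
DispatchQueue.global(qos: .background).async(group: group) {
    let sum = (1...1_000_000).reduce(0, +)   // stand-in for low-priority work
    print("background work finished, sum = \(sum)")
}

// User-interactive QoS: the scheduler will strongly prefer the performance cluster.
DispatchQueue.global(qos: .userInteractive).async(group: group) {
    let factorial = (1...20).reduce(1, *)    // stand-in for latency-sensitive work
    print("interactive work finished, 20! = \(factorial)")
}

group.wait()
```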