
orionquest

Suspended
Mar 16, 2022
871
791
The Great White North
1. Apple said M1 Ultra is the last member of the M1 family
2. M1 Ultra can't match the current Intel Mac Pro maximum RAM capacity (by a lot - a factor of 12!)
3. M1 Ultra can't match the current IMP on GPU expansion
4. M1 Ultra can't match the current IMP on general purpose PCIe expansion

All these things add up to Apple planning on shipping the first Apple Silicon Mac Pro with a M2 family chip. It will probably be in a new category beyond the current M1 Ultra and next-gen M2 Ultra, with its own marketing name (M2 Extreme?).
I don't think it does. I highly doubt Apple is going to create another CPU line for this market, which is tiny. Or introduce a new set of processors going forward with this product introduction, instead of building on what they have now. The M1 series was just finally fleshed out. The timing of a new introduction, to me, doesn't fit.

They will have to address those points, but I believe they will leverage what they have now with the Ultra and reveal more of its capabilities, specifically PCI expansion and GPU expansion. GPU and RAM expansion are both contentious from what we have seen from the lineup so far. If they offer either of those, it will be a privilege, with huge $$$ attached to it. Like I asked, who is going to make a GPU for the Mac market? Certainly not Nvidia, and is there enough business left for AMD to bother? The RAM ceiling Apple probably doesn't care about; maybe their data show most users never reached that max threshold, and possibly that number represents 1-3% of users. They may decide to let those users go and focus on internal PCI expansion, making GPU cards possible, with the option to turn off the built-in GPU cores and redirect that budget to performance cores instead.

Just my guess. Curious to find out.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I don't think it does. I highly doubt Apple is going to create another CPU line for this market, which is tiny. Or introduce a new set of processors going forward with this product introduction, instead of building on what they have now. The M1 series was just finally fleshed out. The timing of a new introduction, to me, doesn't fit.

If the M1 family of SoCs is done, it is going to be hard for Apple to introduce something else that uses M1 family tech but is a different SoC and still call it 'M1'.

Pretty likely whatever Apple does is going to be something "> M1". Whether that is M2 or M3 is perhaps up in the air, but M1 is basically 'done'.



They will have to address those points, but I believe they will leverage what they have now with the Ultra and reveal more of its capabilities, specifically PCI expansion and GPU expansion. GPU and RAM expansion are both contentious from what we have seen from the lineup so far. If they offer either of those, it will be a privilege, with huge $$$ attached to it.

GPU expansion is basically hung up on 3rd-party GPU drivers. Without GPU drivers there are no dGPUs, and there aren't any. Perhaps that changes at WWDC 2022, but I wouldn't hold my breath.

Apple photoshopped the UltraFusion connector out of their initial M1 Max die photos. I think there is hype now that Apple has similarly photoshopped out or grossly under-revealed something that will magically spring forth when the Mac Pro ships:

Two Maxes with an I/O provisioning die stuck in between them. Or the interposer is much bigger than Apple's photos have shown so far, with a huge set of hidden features.

That seems doubtful, since Apple appears to be so focused on 'unifying' the memory between the GPUs on the two Max dies in an 'Ultra SoC' that they are willing to throw away external, off-package I/O to keep that unified-ness at max. (Grand claims of taking out the 3090....)


Like I asked, who is going to make a GPU for the Mac market? Certainly not Nvidia, and is there enough business left for AMD to bother? The RAM ceiling Apple probably doesn't care about; maybe their data show most users never reached that max threshold, and possibly that number represents 1-3% of users. They may decide to let those users go and focus on internal PCI expansion, making GPU cards possible, with the option to turn off the built-in GPU cores and redirect that budget to performance cores instead.

Why is Apple going to make GPU cards possible if the absolute priority #1 is Perf/Watt? Apple isn't likely going to build some higher-power-consuming GPU card. Don't care about > 512-1,000 GB memory capacity, but do care about a high-power-consuming GPU? Probably not. If willing to throw the high-RAM working-set users out the door, then probably willing to throw super-high-end GPU users out the door too. (A far higher priority is probably scaling their own iGPU up to 128 cores than letting other GPU folks 'in'.)

The PCI-e expansion is about stuff that needs higher bandwidth than Thunderbolt 4 can handle and things Apple doesn't want to touch. That is mostly multiple storage drives (RAID, double-digit TB (and up) storage capacity, etc.), high-end networking > 10GbE, and A/V I/O (e.g., 8K (and up) raw video capture, legacy audio I/O cards, etc.).

Could/should that include high-TDP GPGPU/ML compute cards? That would help with some of the blowback, but it doesn't fit Apple's 'saving the planet with lower-power-consuming systems' theme. If they're not full-fledged GPU-display cards, it doesn't really disrupt Apple driving the general Metal interface to Apple GPUs only. But it wouldn't close the door on an MI200 or MI300 'compute card'.



Just my guess.

I think folks are presuming that "for another day for Mac Pro" implied 'real soon now'. That could easily be an October reveal and still hit their "around two years" timeline (e.g., in 2022).

I don't think they have to cram some M1 hack into a Mac Pro if it doesn't fit. Even more so if there is a better 'natural' fit with what is in the M2 or M3 line up.

For M3 (on TSMC N3), Apple could probably shrink the Ultra-class performance onto one die with even better Perf/Watt than the M1 Ultra SoC. The Mac Pro sliding into 2023 could pick that up (with a PCI-e I/O die between them; that probably would have lined up with early-2019 roadmaps as late 2022, pre-pandemic).
 
Last edited:
  • Like
Reactions: orionquest

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
The Graviton doesn't have the 'GPU cohort on the same memory bus' problem and tops out at 300GB/s (with always-on memory encryption).

aws-graviton3-c7g-instances.jpg


The Graviton has 32 PCI-e v5 lanes to connect to any 3rd-party GPU you want to add to the mix. The Ultra has something fewer than x8 PCI-e v4.

Graviton 3 runs at about 100W so it is lower power than M1 Ultra too. More CPU cores , more CPU core bandwidth , lower power. Amazon made different design choices and got to a different outcome. They do not have a good 'single user at a time' SoC.
For more on Graviton 2 & Graviton 3 (and AWS's ARM strategy) see:

 
  • Like
Reactions: Fomalhaut

orionquest

Suspended
Mar 16, 2022
871
791
The Great White North
Why is Apple going to make GPU cards possible if the absolute priority #1 is Perf/Watt? Apple isn't likely going to build some higher-power-consuming GPU card. Don't care about > 512-1,000 GB memory capacity, but do care about a high-power-consuming GPU? Probably not. If willing to throw the high-RAM working-set users out the door, then probably willing to throw super-high-end GPU users out the door too. (A far higher priority is probably scaling their own iGPU up to 128 cores than letting other GPU folks 'in'.)

The PCI-e expansion is about stuff that needs higher bandwidth than Thunderbolt 4 can handle and things Apple doesn't want to touch. That is mostly multiple storage drives (RAID, double-digit TB (and up) storage capacity, etc.), high-end networking > 10GbE, and A/V I/O (e.g., 8K (and up) raw video capture, legacy audio I/O cards, etc.).

Could/should that include high-TDP GPGPU/ML compute cards? That would help with some of the blowback, but it doesn't fit Apple's 'saving the planet with lower-power-consuming systems' theme. If they're not full-fledged GPU-display cards, it doesn't really disrupt Apple driving the general Metal interface to Apple GPUs only. But it wouldn't close the door on an MI200 or MI300 'compute card'.
Is Perf/Watt that much of a priority? Honestly, I haven't paid that much attention. But could they let things loose a bit more for the next MP (power-consumption-wise)? You do raise an interesting point regardless; I didn't think of it that way.


I think folks are presuming that "for another day for Mac Pro " implied 'real soon now'. That could easily be an October reveal and still hit their "around two years" timeline (.e.g, in 2022).

I don't think they have to cram some M1 hack into a Mac Pro if it doesn't fit. Even more so if there is a better 'natural' fit with what is in the M2 or M3 line up.

For M3 (on TSMC N3), Apple could probably shrink the Ultra-class performance onto one die with even better Perf/Watt than the M1 Ultra SoC. The Mac Pro sliding into 2023 could pick that up (with a PCI-e I/O die between them; that probably would have lined up with early-2019 roadmaps as late 2022, pre-pandemic).
True they could release it by the end of the year like they did for the current MacPro (like practically the last week of the year) and still fit within the transition of 2 years. But didn't they preview the current Mac Pro during WWDC prior to releasing it? Not to say they will follow the same release/reveal steps.

Anyhow, I guess thinking about it more, it could be possible to introduce a new series of chips, and one of those could land in the next Mac Pro later in the year. But I still believe they will, or might, squeeze a bit more out of the M1 series. Unless they do like everyone else: new chip updates every year like clockwork. But this is Apple; they don't tend to follow what others typically do.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Is Perf/Watt that much of a priority, honestly I haven't paid that much attention. But could they let things loose a bit more for the next MP (power consumption wise). You do raise and interesting point regardless, I didnt think of it that way.

Apple spends 2-5 mins on Perf/Watt in every single Apple Silicon keynote presentation they do. It is all over their marketing material. They spend way too much time discussing it for it not to be an internal priority.

Apple uses perf/watt to get to higher performance. If they were uncompetitive at single-threaded workloads, it would be a problem. If they didn't have the fastest iGPU, then it would be a problem. Apple is trading CPU package and RAM DIMM modularity for higher performance results. They probably aren't going to give that up just to worship at the altar of the ultimate-modularity Mac Pro.

If Apple attaches a PCI-e I/O chiplet that does x16 PCI-e v4 lanes (or maybe two x16 PCI-e v4) to the more common Mac "Max or Ultra"-like dies, then that SoC's perf/watt would go down a bit. But the CPU+GPU die wouldn't take any more of a hit than it does when using the UltraFusion connector. That would give them high reuse of that CPU+GPU die in the Studio product and only a modest 'hit' on the Mac Pro.

M1, Pro, and Max just have x4 PCI-e v4 (and some Thunderbolt with some embedded v3). That is more to keep the pin-out count on the die low than to get to 'workstation'-like general-bandwidth I/O (e.g., a single x1 link out to a 10GbE controller). To get to something worth provisioning out as at least 2-4 x8 and/or x16 slots, they'd need to take some hit. But Apple probably isn't going to try to go toe-to-toe with modern Epyc and Xeon SP and go PCI-e v5 with 80+ lanes. That is a substantially bigger power hit to get to really broad general I/O.
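To put rough numbers on that gap, here is a back-of-the-envelope sketch; the per-lane GB/s figures are the commonly cited approximations after encoding overhead, and the lane counts are the ones discussed above:

```python
# Approximate usable bandwidth per PCIe lane, in GB/s, after encoding
# overhead (128b/130b for v3+). These are ballpark published figures,
# not measured numbers.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Total one-direction bandwidth of a PCIe link, in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

print(round(link_bandwidth(4, 4), 1))   # x4 v4, the M1/Pro/Max-class link -> 7.9
print(round(link_bandwidth(4, 16), 1))  # one workstation-style x16 v4 slot -> 31.5
```

Even a single x16 v4 slot is roughly 4x the general PCIe bandwidth the laptop-class dies expose, which is the provisioning problem a Mac Pro SoC would have to solve.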



True they could release it by the end of the year like they did for the current MacPro (like practically the last week of the year) and still fit within the transition of 2 years. But didn't they preview the current Mac Pro during WWDC prior to releasing it? Not to say they will follow the same release/reveal steps.

The last two Mac Pros (2013 and 2019) did "sneak peeks" in June and released late in the year. Apple will "sneak peek" a major change about 6 months in advance, but with longer lead times than that, they usually don't. Back in April 2017 they mentioned they were working on a new Mac Pro. There was lots of buzz on these forums that they 'just gotta' show it at WWDC 2017... didn't happen. Then in April 2018 Apple said "not this year" (mainly because from Jan-April there was lots of buzz again that they 'just gotta' show it at WWDC 2018).

If the M-series Mac Pro has slid to Feb-April 2023, then October would be closer to a 6-month-lead-time "sneak peek" intro. Apple could toss out an M1 Pro in the current Mini chassis to make some folks happy at WWDC 2022, cross another Intel system off the list, and say "another day for Mac Pro" again to kick the can to the Fall.

Like the iMac Pro (2017) came substantially before the Mac Pro (2019), I suspect Apple would like to find out just how well the Studio can do before putting the final touches on how they are going to 'place' the Mac Pro in the lineup. Not a two-year gap, but somewhat into 2023.

Earlier rumors had a "Jade4C" that was something like a 4-chiplet/tile setup. I suspect that didn't work out the way they wanted it to. (E.g., there were some reports of apps not scaling over the UltraFusion connector in an Ultra to the second large cluster of GPU cores. That only gets worse if there are 3 'non-hyperlocal' GPU core clusters.) Apple pushed the Mac Pro 2013 out the door with a 'bet the farm' on then-new OpenCL and a compute GPU having high utility before the software stack could leverage it. That led to hiccups in product adoption for that Mac Pro. There is no good reason to push a new Mac Pro out the door before a decent software foundation is ready to go. Especially if they are going to be charging $4-9K for those super-monster GPU core counts and soldered-on RAM. Buy all of that, and if the software only runs on 32 GPU cores, you are probably not going to be happy.


Anyhow I guess thinking about it more it could be possible to introduce a new series of chips and one of those could land into the next Mac Pro later in the year. But I still believe they will or might squeeze a bit more out of the M1 series. Unless they do like everyone else, new chip updates every year like clockwork. But this is Apple they don't tend to follow what other typically do.

I'm a bit skeptical that M2 is going to bring anything to the table that would solve any deep problems that might have popped up in a Jade4C solution and caused it not to make the cut. WWDC 2022 may reveal that there are some large software gaps they have to backfill before they have really prepped for a credible Mac Pro replacement (and they will need a stable macOS 13, which probably won't be done by late Fall; although they'll release something 'stable enough' in the Fall, as they usually do).
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Bad news? A Docker Linux image in a virtual machine doesn't work on an Apple Silicon system?
Developing on/for a VM image instance is not a big problem.
It is more convenient to develop a container for a specific architecture on a computer with that architecture.
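A minimal sketch of why that is: a container image built for a different architecture than the host has to run under emulation (e.g., QEMU via binfmt), which is the slow path. The helper below is purely illustrative, not any Docker API:

```python
import platform

# Architecture the current machine reports: "arm64" on Apple Silicon
# macOS, "aarch64" on Arm Linux, "x86_64" on Intel/AMD.
host_arch = platform.machine()

def needs_emulation(image_arch: str) -> bool:
    """Hypothetical check: would a container image built for
    image_arch have to run emulated on this host? Emulated
    containers work, but markedly slower than native ones."""
    return image_arch != host_arch

# A native-arch image runs at full speed; a foreign one doesn't.
print(needs_emulation(host_arch))       # False: native, fast path
print(needs_emulation("some-other-isa"))  # True: emulated, slow path
```

Which is the whole appeal of developing Arm containers on an Arm machine (M-series Mac, Graviton instance) rather than cross-building from x86.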

So really the forecast is just AWS mainly pushing. If Ampere and Nvidia really deliver in 2022-2024, that is probably a low-ball estimate of share. Depends in part on where those two come in on pricing. I have no clue why they think other viable data-center ARM CPU packages won't show up until 2025. That is some lame analysis.
It seems that only AWS offers ARM-based virtual machines among the big players. I can't find any ARM-based virtual machines in GCP and Azure.

Besides Oracle Cloud (2% market share), does Ampere have any other big customers?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
It is more convenient to develop a container for a specific architecture on a computer with that architecture.

Yet another reason why Apple should not throw macOS on Intel under the bus as fast as possible. But this is a growth market for M-series systems running Arm Linux VM instances. If Apple handles both of these paths badly, they just drive more developers to Windows faster.


It seems that only AWS offers ARM-based virtual machines among the big players. I can't find any ARM-based virtual machines in GCP and Azure.

Besides Oracle Cloud (2% market share), does Ampere have any other big customers?





Tencent and Alibaba already have Arm instances doing work . It will not take 2-4 years for them to get started.




https://www.tomshardware.com/news/alibaba-unveils-128-core-server-cpu
(making progress: chips in use and about to go to volume
https://www.linleygroup.com/newsletters/newsletter_detail.php?num=6416 )

Alibaba isn’t betting the whole farm on Arm . It has been used as a trade tool .

That is one thing that may put the brakes on Arm taking more server share outside the USA.

MS Azure seems to be ramping up on using Ampere for internal infrastructure (virtual networking, storage, etc.) more so than VM instance services. Won't show up on the instances-for-rent page.

There are 'content delivery networks' (CDNs) that are potential large users also.

“ …
Our first deployment of an Arm-based CPU, designed by Ampere, was earlier this month – July 2021.
…. Using Arm, Cloudflare can now securely process over ten times as many Internet requests for every watt of power consumed, than we did for servers designed in 2013. … “

(I think some of Apple's cloud services run through Cloudflare already, so... getting to Arm with no Apple Arm server chip effort required at all.)
 
  • Like
Reactions: Xiao_Xi

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
The cloud behind the scenes seems to be moving faster than the cloud we can see. Cloud service providers seem to see the clear advantage of using ARM SOCs, but except for AWS, they seem to think developers are not ready for them. I can't find any ARM-based virtual machines in Azure, Google Cloud, Alibaba Cloud and Tencent Cloud.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
MS Azure seems to be ramping up on using Ampere for internal infrastructure (virtual networking, storage, etc.) more so than VM instance services. Won't show up on the instances-for-rent page.
It appears that the transition is speeding up. Azure now offers Ampere-based virtual machines.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
It appears that the transition is speeding up. Azure now offers Ampere-based virtual machines.

Progressing more so than speeding up. ('Eat your own dogfood' first, and then roll it out to other folks when you know what you are doing and have quantitative data behind your cost metrics.)

"...
The VMs currently in preview support Canonical Ubuntu Linux, CentOS, and Windows 11 Professional and Enterprise Edition on Arm. Support for additional operating systems including Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Debian, AlmaLinux, and Flatcar is on the way. ... "
https://azure.microsoft.com/en-us/b...chines-with-ampere-altra-armbased-processors/


Still haven't finished Red Hat or SuSE. Although not surprising they finished Windows Enterprise edition before those.

But yeah, the notion that nothing was going to happen on Arm server deployments outside of Amazon for 3-4 years... way off base. Apple is nowhere even remotely near ahead of the curve in server land.
 

darngooddesign

macrumors P6
Jul 4, 2007
18,362
10,114
Atlanta, GA
Why is Apple going to make GPU cards possible if the absolute priority #1 is Perf/Watt? Apple isn't likely going to build some higher-power-consuming GPU card. Don't care about > 512-1,000 GB memory capacity, but do care about a high-power-consuming GPU? Probably not. If willing to throw the high-RAM working-set users out the door, then probably willing to throw super-high-end GPU users out the door too. (A far higher priority is probably scaling their own iGPU up to 128 cores than letting other GPU folks 'in'.)
Apple may decide to relax on their perf/watt advantage when it comes to a workstation computer. Even if it uses more power than a theoretical equivalent Mac Studio, it will likely still use a lot less power than competing systems.
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
It would make sense for Apple to release servers for Small business/companies.

A Server that preserve your company's data & privacy locally instead of using data harvesting cloud hosting.

And the software support is right there, as MacOs can run almost anything made for Linux with little tweaking.
Err...you do realise that enterprise cloud hosting companies are not permitted by law to access their customers' data, and in any case such data is encrypted with keys held by the customer. Businesses are not storing data on "open" content hosting sites like Facebook. I'm talking about data rather than web browsing analytics, Alexa voice recording analysis etc.

For an example, have a look at https://aws.amazon.com/compliance/data-privacy-faq/

Sure, some industries may have data that is so secret that they don't want any kind of public cloud storage (even though this can be made just as secure), and might want "private clouds" on their own premises. This can often be a false sense of security because most enterprises do not have highly secure networks designed, maintained and monitored by security experts with multiple layers of protection and attack detection.

Bear in mind that even the CIA makes extensive use of public cloud hosting services - albeit with some extra layers of isolation provided by the big vendors: CIA awards multibillion C2E cloud contract to AWS, Microsoft, Google ...https://www.datacenterdynamics.com › news › cia-awards...
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
What are you talking about? Any example?


I don't think they compete with each other. People choose x64-based virtual machines in the cloud because they can't choose ARM-based virtual machines. ARM-based instances are much cheaper than x64-based instances on AWS.


Xserve predates AWS, so it made more sense then. What company today buys a server instead of using a cloud service provider?
Well, the ARM-based AWS instances are "somewhat" cheaper than x86, and you have to weigh up the real-world performance for a specific workload to determine whether it's actually a better deal.

Here's a simple example for AWS EC2 instances with 64GB RAM and 16 vCPUs (from https://instances.vantage.sh)

1649323983588.png


The Graviton2 is the cheapest for sure, but it's only about 10% less than the next cheapest AMD CPU, and the AMDs are generally a bit faster. Some of the Intel instances are faster still, but more expensive, so you need to balance these factors. Have a look at https://www.percona.com/blog/comparing-graviton-arm-performance-to-intel-and-amd-for-mysql-part-3/ for an example.

Yet for some workloads, the ARM chips are both faster *and* cheaper - https://blog.cloud-mercato.com/amazon-web-services-m5-vs-m5a-vs-m6g/

In short, you need to do your homework when choosing a server type :)
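That homework is basically "dollars per unit of work, not dollars per hour." A hedged sketch of the comparison; the hourly prices and jobs-per-hour throughputs below are illustrative placeholders, not live AWS quotes or benchmark results:

```python
# Cost-effectiveness check for a batch workload: a slightly pricier
# instance that finishes more jobs per hour can still be the better
# deal. All numbers here are made up for illustration.
instances = {
    "Graviton2 (ARM)": {"usd_per_hour": 0.616, "jobs_per_hour": 100},
    "AMD":             {"usd_per_hour": 0.688, "jobs_per_hour": 108},
    "Intel":           {"usd_per_hour": 0.768, "jobs_per_hour": 112},
}

def cost_per_job(spec: dict) -> float:
    """Dollars spent per completed job on this instance type."""
    return spec["usd_per_hour"] / spec["jobs_per_hour"]

# Rank by real cost of the work, cheapest first.
for name, spec in sorted(instances.items(), key=lambda kv: cost_per_job(kv[1])):
    print(f"{name}: ${cost_per_job(spec):.5f} per job")
```

With these placeholder numbers the ARM instance still wins on cost per job, but note how much the ~8-12% throughput edge of the x86 parts closes the ~10% price gap; with a different workload the ordering can flip.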
 
  • Like
Reactions: Xiao_Xi

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
I also have one. It works OK most of the time, but the software quality is poor. I also had a file system failure once that completely destroyed all my data: neither RAID1 nor btrfs helped; the support just shrugged and said these things can happen. I also experience regular issues with users connecting to the server. And performance is rather poor, despite using fast HDDs.
I hope you also had backup disks for your NAS. My Synology had an unrecoverable failure due to a (rare) failure of 2 disks in the 4-disk array. Fortunately, it's connected to 2 x 4TB USB backup drives and I was able to recover everything.

Just a good reminder that "RAID is not a backup" :cool:
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Err...you do realise that enterprise cloud hosting companies are not permitted by law to access their customers' data, and in any case such data is encrypted with keys held by the customer.
Issues with cloud hosting companies usually arise in international situations. Which is the default, because the vast majority of data is international. There are often at least three parties involved: the hosting company, the customer, and the entity the data is associated with. Those three are often based in different countries.

Then, there is the rule of thumb that it's impossible to secure computers against people who have physical access to them. Especially against people who are supposed to maintain those computers in the first place. It should be assumed that some employees of the cloud hosting company have unlimited access to the computers and the data, including the encryption keys when they are used to decrypt the data.

Those employees are sometimes compromised by foreign intelligence agencies or organized crime. Which country makes the background checks for them? Which country has physical access to the data center and the ability to arrest employees? If it's only the country of the host company, it's insufficient in many cases.

In many cases, the legal system itself is the threat. In particular, US cloud hosting companies cannot legally host many kinds of European data, at least in US data centers. They cannot protect the data from US intelligence and law enforcement agencies that can legally (by US laws) access it without sufficient legal safeguards (by European laws).
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
Issues with cloud hosting companies usually arise in international situations. Which is the default, because the vast majority of data is international. There are often at least three parties involved: the hosting company, the customer, and the entity the data is associated with. Those three are often based in different countries.

Then, there is the rule of thumb that it's impossible to secure computers against people who have physical access to them. Especially against people who are supposed to maintain those computers in the first place. It should be assumed that some employees of the cloud hosting company have unlimited access to the computers and the data, including the encryption keys when they are used to decrypt the data.

Those employees are sometimes compromised by foreign intelligence agencies or organized crime. Which country makes the background checks for them? Which country has physical access to the data center and the ability to arrest employees? If it's only the country of the host company, it's insufficient in many cases.

In many cases, the legal system itself is the threat. In particular, US cloud hosting companies cannot legally host many kinds of European data, at least in US data centers. They cannot protect the data from US intelligence and law enforcement agencies that can legally (by US laws) access it without sufficient legal safeguards (by European laws).
You make some good points, and are correct that "data sovereignty" is often an issue - I have clients who will only use AWS/Azure services limited to national boundaries (e.g. tied to specific AWS regions).

However, physical security is actually pretty good with a lot of control and auditing of access, so it is hard (but not impossible) for rogue actors to get at user data. As regards encryption keys, these can be "customer managed" so the hosting company has no access to them at all.

That's not to say that the possibility of back-doors or brute-force attacks by government agencies is impossible...but any hosting company risks complete ruin if they are found to permit this on their customers' data.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
US cloud hosting companies cannot legally host many kinds of European data, at least in US data centers. They cannot protect the data from US intelligence and law enforcement agencies that can legally (by US laws) access it without sufficient legal safeguards (by European laws).
I have clients who will only use AWS/Azure services limited to national boundaries (e.g. tied to specific AWS regions).
To be honest, I don't know if data hosted on a server in Europe by a US cloud service provider is secure or not. Schrems II outlawed transatlantic data flows; however, the CLOUD Act requires U.S. companies to share data with federal authorities.
 

joema2

macrumors 68000
Sep 3, 2013
1,646
866
Unfortunately, Apple wound down their Server software years ago….
Good point, and I don't see much discussion of this. While I don't think Apple is interested in the server market, it's not just a hardware problem.

You can't run a server very well without extensive performance monitoring tools. Suppose you need to determine if a task is bottlenecked on a disk. It's very useful to check the queue depth on each disk to see which ones have a backlog of I/O requests. Or maybe you suspect some hardware component is flooding the system with interrupts and slowing it down.

On Windows, monitoring disk queue depth, processor interrupts/sec, and many other things has been possible with the built-in Performance Monitor since NT 3.1 in 1993. A vast number of parameters can be monitored on a local OR remote machine, graphed, logged to a file, and played back for later analysis. It's built in; you don't have to buy anything or install some 3rd-party component.

Since 2019 there's an improved Performance Monitor on Windows, and the old one is still there: https://techcommunity.microsoft.com...w-performance-monitor-for-windows/ba-p/957991

On MacOS, there is Activity Monitor which is toy-like in its capability. Formerly there were command-line Dtrace tools but that now requires booting with System Integrity Protection disabled, and many of those tools no longer work properly.

MacOS has Instruments but that is intended for developers not end users. To my knowledge there is nothing remotely like Windows Performance Monitor available on MacOS. I don't see how you could run a serious server installation without something like that.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
TL;DW - Nobody knows nothing... ;^p


Isn't this the same guy who absolutely guaranteed (something like "I'll shave something off if it doesn't happen") that the MBP 14"/16" would ship at WWDC 2021?

The M1 Max die still doesn't have support for proper interrupt handling for more than 2 chips. So four? Probably not.

Something next generation. Could be. But also could be not "soon" either. (just like last year).
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Apple may decide to relax on their perf/watt advantage when it comes to a workstation computer. Even if it uses more power than a theoretical equivalent Mac Studio, it will likely still use a lot less power than competing systems.

Relax to add support for more general-purpose I/O (e.g., provision out two x16 PCI-e v4 bundles)? Maybe. Add some secondary RAM via DIMMs (e.g., for an optional file cache and/or memory backing store handled by the OS/file system)? Could be.

But toss out the CPU-AMX, GPU, and/or NPU core designs to burn more power on 'user random' RAM configurations? Probably not.

Apple could 'slide in' some I/O chiplet/tile on an augmented "UltraFusion" connection in a future version. The overall package Perf/Watt would slide from what Apple's plain "Max" variant die would be. Ultra is a small backslide from Max (UltraFusion is a relatively lightweight Perf/Watt hit, but it is something). Sliding a limited amount more would be expected if they scale up to a 4-die cluster. Apple will accept some loss there. But it is very unlikely they are going to go up to the level of losses that AMD takes on their 4+ chiplet designs.

There is a pretty decent chance that Apple will do a modified Max-like design that scales up to a 4-die cluster rather than capping out at 2, and that they will leverage a very high amount of design reuse across the upper end of the SoC range (augment the "Max-like" chiplet/tile with more I/O than laptops need, but keep basically the same building blocks as the laptops). Amazon's Graviton 3 uses chiplets for memory and PCIe. Apple could do some 'add-ons' for the Mac Pro but leave the basic core of the baseline die design the same: primary memory still soldered-on LPDDR, and colocated CPU/GPU/NPU/etc. cores using that soldered-on RAM for unified memory passing.

As long as Apple is pulling around a die built with the GPU as the main focus and CPU cores sprinkled on top, they won't have a mainstream cloud-services server SoC competitor. The "Max" die is anchored around the GPU; you need a basically different fabric and different speeds/feeds to do more scaled-up, intense CPU-focused work.

If you want to rent out "macOS" as a cloud service, then Apple has a great match for "server processors". But for running generic Linux cloud server workloads? Not really. Ampere Computing has competitive parts. Amazon has competitive parts. Nvidia will have parts more skewed to HPC cloud workloads. Apple's won't be "a lot less power" than those systems at all.

Additionally, if AMD's Bergamo solution is decently tuned for some secure cloud loads, Apple may not beat them either (e.g., AMD removes SMT and AVX-512, tunes the cache hierarchy for web loads, tunes for virtualization overhead, etc.). It depends on how far AMD 'forks' the Zen 4c cores off the basic Zen 4 design. (AMD has potentially many more SoC products to weave a second core design into than Apple does; Apple only makes a small handful of Mac systems.)
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
Well, the ARM-based AWS instances are "somewhat" cheaper than x86, and you have to weigh up the real-world performance for your specific workload to determine whether they're actually a better deal.

Here's a simple example for AWS EC2 instances with 64GB RAM and 16 vCPUs (from https://instances.vantage.sh)

...

The Graviton2 is the cheapest for sure, but it's only about 10% less than the next-cheapest AMD CPU, and the AMDs are generally a bit faster. Some of the Intel instances are faster still, but more expensive, so you need to balance these factors. Have a look at https://www.percona.com/blog/comparing-graviton-arm-performance-to-intel-and-amd-for-mysql-part-3/ for an example.

Yet for some workloads, the ARM chips are both faster *and* cheaper - https://blog.cloud-mercato.com/amazon-web-services-m5-vs-m5a-vs-m6g/

In short, you need to do your homework when choosing a server type :)
One thing to be aware of when comparing Graviton vCPU prices with Intel: on Graviton, each vCPU represents one physical core; on Intel, it represents a hyperthread on that core (two per core).
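That distinction matters for price comparisons. A minimal sketch of normalizing hourly price to physical cores instead of vCPUs (the prices and instance classes below are made up for illustration, not real AWS quotes):

```python
def price_per_physical_core(hourly_price, vcpus, vcpus_per_core):
    """Normalize an instance's hourly price to its physical core count.
    Graviton: 1 vCPU = 1 physical core; Intel with hyperthreading:
    2 vCPUs = 1 physical core."""
    physical_cores = vcpus / vcpus_per_core
    return hourly_price / physical_cores

# Hypothetical 16-vCPU instances:
graviton = price_per_physical_core(0.616, 16, vcpus_per_core=1)  # 16 cores
intel    = price_per_physical_core(0.768, 16, vcpus_per_core=2)  # 8 cores

print(f"Graviton: ${graviton:.4f}/core-hour, Intel: ${intel:.4f}/core-hour")
```

Per physical core, the gap is much larger than the headline per-vCPU prices suggest, which is exactly the point being made here.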
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
One thing to be aware of when comparing Graviton vCPU prices with Intel: on Graviton, each vCPU represents one physical core; on Intel, it represents a hyperthread on that core (two per core).
That's a very good point to remember. Most benchmarks compare the same number of vCPUs, which means the Intel machines may actually be doing better than they appear, because the comparison is essentially "half an Intel core" vs. a whole ARM core.

It doesn't really matter much in practice, though, because the VMs are priced according to vCPUs (and other parameters, of course), and at the end of the day users are calculating "how fast for what cost", i.e. throughput per dollar.
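The throughput-per-dollar calculation is simple once you have your own benchmark numbers. A sketch with made-up requests/sec and hourly prices (run your actual workload to get real figures):

```python
# name: (requests/sec from your own benchmark, $/hour) -- illustrative only
instances = {
    "graviton": (9_000, 0.616),
    "amd":      (10_000, 0.688),
    "intel":    (10_500, 0.768),
}

for name, (rps, price) in instances.items():
    print(f"{name}: {rps / price:,.0f} requests per dollar-hour")

best = max(instances, key=lambda n: instances[n][0] / instances[n][1])
print("best value:", best)
```

With these numbers the slower-but-cheaper instance wins on value even though it loses on raw speed, which is why the raw benchmark ranking alone can mislead.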
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
The Graviton2 is the cheapest for sure, but it's only about 10% less than the next cheapest AMD CPU, and the AMDs are generally a bit faster. Some of the Intel instances are faster still, but more expensive, so you need to balance these factors.
It seems that you can squeeze more out of AMD- and Intel-based vCPUs than Graviton-based ones. So unless you will actually squeeze every last drop of performance out of the AMD or Intel vCPUs, you would choose the Graviton-based vCPUs because they are cheaper.

[Attached chart: vCPU-comparison.png]

 
  • Love
Reactions: Fomalhaut