
dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
Maybe I'm just showing my age here, but a hand-built Windows machine without drive bays just seems odd to me. And I've noticed, too, that the number of PCI-E slots on motherboards is dropping. The modern enthusiast machine seems to be a giant PCI-E GPU, possibly even mounted sideways, some very funky cooling solutions, and NVMe SSDs.

(... then again, if motherboards still had floppy controllers, my main Windows machine probably still would have one of those lovely 3.5" Mitsumi combo card reader/floppy drive things. It's a Windows machine with a giant case, not a 12-inch MacBook, so what's the downside to keeping another drive type or two around?)

There's still a plethora of motherboards out there with multiple PCI-E slots. Most have two long (x16/x8) slots and 2-3 x1 slots. As far as drive bays go, these motherboards also have multiple M.2 slots, and the cases themselves usually have SSD mounts on the backside of the motherboard tray. The only time you really see the number of PCI-E slots dropping is when you're looking at smaller form factor motherboards that simply do not have the room for more than the one x16 slot.

I have two custom built PCs right now - one for gaming, the other as a media server and smart home control hub. Both of them support a minimum of two M.2 drives onboard, and have a total of five PCI-E slots. Both cases also have a cage for dual HDDs, which are hidden from sight at the bottom of the case. Neither case has bays for optical drives, but when was the last time anyone truly needed one of those?
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
Haha! Maybe? When was the last time you hand-built a PC? NVMe drives have been around for over a decade. I've used them in my last two builds (a new one about every three years).
I last built a Windows PC in January 2017. Thought I might be able to get ten years out of it before the Great Big Windows 11 insult.

And it has one NVMe, along with a Blu-ray drive, a bunch of 2.5" SATA SSDs (at the time I got them, the price was much cheaper than an additional NVMe), and I even threw a random 3.5" hard drive from an older system in there. Sadly no floppy drive...

Now the new systems seem to be NVMe-only.
 
Last edited:

Internaut

macrumors 65816
Some apps literally won't run in Windows ARM.
My use case is actually virtualisation on Intel, for an OpenStack lab on Linux. The secondhand ThinkPad Yoga, with dual boot to Ubuntu and Windows 11, is perfect. Too much technical mumbo jumbo? That’s edge cases for you. As an added bonus, Office on Windows is slightly better for a corporate Windows/Office environment.

But I do reiterate it’s fine to have both Mac and Wintel. I just prefer Mac most of the time. It gives me 99% of the benefits of Mac/Win/Linux given my usage. I think it will be a few years before I’m 100% ARM.
 
  • Like
Reactions: MRMSFC

iJest

Suspended
Jul 27, 2023
186
223
My use case is actually virtualisation on Intel, for an OpenStack lab on Linux. The secondhand ThinkPad Yoga, with dual boot to Ubuntu and Windows 11, is perfect. Too much technical mumbo jumbo? That’s edge cases for you. As an added bonus, Office on Windows is slightly better for a corporate Windows/Office environment.

But I do reiterate it’s fine to have both Mac and Wintel. I just prefer Mac most of the time. It gives me 99% of the benefits of Mac/Win/Linux given my usage. I think it will be a few years before I’m 100% ARM.
I have a mini PC next to my Mac set up for running the Windows software that simply won't launch in Windows ARM via Parallels. So I have a solution, but I just wish I could do literally everything on my Mac.
 
  • Like
Reactions: Internaut

tubuliferous

macrumors member
Jul 13, 2011
78
81
Battery life is better, but not enough of a game changer compared to your typical ultrabooks with U-series chips (which aren't terrible), and for most users it doesn't really justify the disadvantages.
I've often wondered about this. Since ARM also has better heat characteristics, what's to keep Apple or another company from massively scaling up the number of ARM cores at the cost of more robust cooling requirements and/or lower battery life? I suppose Nvidia has already gone there with its Grace CPU, which has 144 cores (and which it claims smokes Intel Ice Lake Xeon chips).

Perhaps a radical increase in performance on ARM would drive consumers to adopt hardware with imperfect backwards compatibility, thus encouraging efforts to achieve a more comprehensive backwards compatibility.
 
  • Like
Reactions: ArkSingularity

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
I've often wondered about this. Since ARM also has better heat characteristics, what's to keep Apple or another company from massively scaling up the number of ARM cores at the cost of more robust cooling requirements and/or lower battery life? I suppose Nvidia has already gone there with its Grace CPU, which has 144 cores (and which it claims smokes Intel Ice Lake Xeon chips).

Perhaps a radical increase in performance on ARM would drive consumers to adopt hardware with imperfect backwards compatibility, thus encouraging efforts to achieve a more comprehensive backwards compatibility.
That's what Ampere and others are doing in the datacenter sphere.

Big public cloud, e.g. Office 365, is the perfect scenario for this: few dependencies and if you can cut down your power and/or hardware bill by half (or even 10%), on the scale of Office 365, that'll get a project to port the whole stack to ARM approved yesterday.
 
  • Like
Reactions: tubuliferous

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
I've often wondered about this. Since ARM also has better heat characteristics, what's to keep Apple or another company from massively scaling up the number of ARM cores at the cost of more robust cooling requirements and/or lower battery life? I suppose Nvidia has already gone there with its Grace CPU, which has 144 cores (and which it claims smokes Intel Ice Lake Xeon chips).

Perhaps a radical increase in performance on ARM would drive consumers to adopt hardware with imperfect backwards compatibility, thus encouraging efforts to achieve a more comprehensive backwards compatibility.
The server world is very much already going in this direction. For most cloud services, mid-tier single threaded performance (which ARM can easily provide) is perfectly sufficient. Multithreaded performance (and the ability to handle sustained workloads that scale well across a massive number of cores) is much more important.

ARM can do this extremely well, so a lot of chip manufacturers and designers have already started experimenting with it. I think ARM is going to be huge in the server world in the years to come (only time will tell, I suppose). Who knows, it might eventually even trickle down to some HEDT workflows on workstations, but I think that software compatibility will have to improve massively before that sort of thing can happen on a widespread scale.
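A toy sketch of what "scales well across a massive number of cores" means in practice: an embarrassingly parallel job split over a process pool. The workload and chunk counts below are made up purely for illustration, not tied to any particular chip.

```python
# Toy illustration of the "many mid-tier cores" argument: a CPU-bound job
# partitioned across a process pool gives the same answer with any worker
# count, while wall time shrinks roughly with the number of cores.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def parallel_count(limit, workers):
    """Split [0, limit) into one chunk per worker and sum the results."""
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # cover any remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    # Identical result with 1 or 8 workers; only the wall time differs.
    print(parallel_count(50_000, 1), parallel_count(50_000, 8))
```

The catch, as the post says, is that this only pays off for workloads that really do partition this cleanly, which cloud services often do and desktop software often does not.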
 
  • Like
Reactions: tubuliferous

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
There's still a plethora of motherboards out there with multiple PCI-E slots. Most have two long (x16/x8) slots and 2-3 x1 slots. As far as drive bays go, these motherboards also have multiple M.2 slots, and the cases themselves usually have SSD mounts on the backside of the motherboard tray. The only time you really see the number of PCI-E slots dropping is when you're looking at smaller form factor motherboards that simply do not have the room for more than the one x16 slot.

I have two custom built PCs right now - one for gaming, the other as a media server and smart home control hub. Both of them support a minimum of two M.2 drives onboard, and have a total of five PCI-E slots. Both cases also have a cage for dual HDDs, which are hidden from sight at the bottom of the case. Neither case has bays for optical drives, but when was the last time anyone truly needed one of those?
Oops, I hadn't seen that...

The motherboards I was mostly looking at were full-ATX boards with 10GbE. (Yes, for the same reason I bought my iMac with 10GbE, I would want the next Windows box to have 10GbE, even though I suspect Microsoft and Apple will drop OS support for both machines before I can find an affordable 10GbE copper switch.) Big pricy boards, and my recollection is that they were mostly missing x1 slots and had something like 4+ NVMe slots. I think you are right, though: lower-end, more reasonable boards (which are still 2-3x the price of a comparable board 15 years ago) tend to have a little more PCIe, especially of the x1 kind.

I've actually been using optical drives and my 15-20 year old stacks of burnable media more in the last few months, but that's due to my growing interest in vintage Macs (a third vintage Mac may be arriving today! and one that will be very mildly annoying to network) so I don't think it counts. :)

But the thing is, if you want a computer based on what you "truly need", well, that's what Macs are for. :) Although some people would argue you 'truly need' a USB-A port on a laptop...
 

dmccloud

macrumors 68040
Sep 7, 2009
3,138
1,899
Anchorage, AK
I've often wondered about this. Since ARM also has better heat characteristics, what's to keep Apple or another company from massively scaling up the number of ARM cores at the cost of more robust cooling requirements and/or lower battery life? I suppose Nvidia has already gone there with its Grace CPU, which has 144 cores (and which it claims smokes Intel Ice Lake Xeon chips).

Perhaps a radical increase in performance on ARM would drive consumers to adopt hardware with imperfect backwards compatibility, thus encouraging efforts to achieve a more comprehensive backwards compatibility.
On the datacenter side of things, Amazon has already done this with their Graviton2 SoC (custom-built ARM64-based silicon) and EC2. Companies such as Epic Games, Discovery (now Warner Bros. Discovery) and DirecTV Stream are using EC2 and Graviton exclusively. The advantage of ARM over x86 in a datacenter environment is twofold. First is the upfront energy savings over x86. Second, because ARM draws less wattage, it generates less heat, which makes it easier and cheaper to keep the datacenters cool as a whole.
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
"I've often wondered about this. Since ARM also has better heat characteristics, what's to keep Apple or another company from massively scaling up the number of ARM cores at the cost of more robust cooling requirements and/or lower battery life?"

Nothing, though there are diminishing returns, since in a laptop the biggest power consumption can come from the display.

The base Apple Silicon chips are already more CPU than the vast majority of users need (I'd argue that they should put 16 GB of RAM in the base models). I don't think there are many people who need more power than an M2 or M3 Max in a laptop.

They may also want bragging rights for Geekbench Multicore.

I have an M1 Pro MacBook Pro, but I suspect I could run just fine with 8 efficiency cores and 2 or 4 performance cores. I had to get this much CPU power if I wanted the 16-inch screen, though.

Intel does have a bunch of efficiency cores now, and they appear to want to claim power efficiency while still posting high multicore scores. BTW, has anyone noticed that 14th gen is basically 13th gen with boosted clocks?
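The diminishing-returns point above can be made concrete with Amdahl's law: if any fraction of the work is serial, piling on cores hits a hard ceiling. A quick sketch, assuming an illustrative 5% serial fraction (the number is made up for the example):

```python
# Amdahl's law: speedup from N cores when a fraction s of the work is serial.
# With s = 0.05 the speedup can never exceed 1/s = 20x, no matter the core count.
def amdahl_speedup(cores, serial_fraction):
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

s = 0.05  # illustrative assumption, not a measured workload
for n in (2, 8, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(n, s):6.2f}x")
```

Going from 64 to 1024 cores barely moves the needle here, which is why display power and single-threaded snappiness matter more to most laptop users than raw core counts.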
 

darngooddesign

macrumors P6
Jul 4, 2007
18,362
10,114
Atlanta, GA
I've often wondered about this. Since ARM also has better heat characteristics, what's to keep Apple or another company from massively scaling up the number of ARM cores at the cost of more robust cooling requirements and/or lower battery life? I suppose Nvidia has already gone there with its Grace CPU, which has 144 cores (and which it claims smokes Intel Ice Lake Xeon chips).

Perhaps a radical increase in performance on ARM would drive consumers to adopt hardware with imperfect backwards compatibility, thus encouraging efforts to achieve a more comprehensive backwards compatibility.
Apple had this opportunity with the new Mac Pro… but they didn’t take it.
 

960design

macrumors 68040
Apr 17, 2012
3,795
1,674
Destin, FL
Now the new systems seem to be NVMe-only.
They are so cheap and insanely fast (I think read speeds are something like 5,150 MB/s).
I put a couple of 2TB M.2s in my build for about $100 each:
There is a HUGE load time difference... it is nearly instantaneous.

Screenshot 2023-08-09 at 9.58.37 AM.png
 

pshufd

macrumors G4
Oct 24, 2013
10,145
14,572
New Hampshire
They are so cheap and insanely fast (I think read speeds are something like 5,150 MB/s).
I put a couple of 2TB M.2s in my build for about $100 each:
There is a HUGE load time difference... it is nearly instantaneous.

View attachment 2243707

I received a 4 TB Crucial P3 (3,000 MB/s) NVMe yesterday and copied the stuff over from my 2 TB SATA3 SSD. The 4 TB drive was $190. These are on my Studio, and the read speeds on the NVMe are a lot quicker than on the SATA3. I'm giving the SATA3 to my daughter for her gaming system.
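For a rough sense of why the jump feels so big, here's idealized copy-time math at ballpark interface speeds. The throughput figures are common spec-sheet numbers, and real copies rarely sustain peak speed, so treat these as lower bounds:

```python
# Idealized time to move a given amount of data at a sustained throughput,
# ignoring filesystem overhead, small files, and thermal throttling.
def copy_minutes(size_gb, mb_per_s):
    return size_gb * 1000 / mb_per_s / 60

for label, speed in [("SATA3 SSD (~550 MB/s)", 550),
                     ("PCIe 3.0 NVMe (~3,000 MB/s)", 3000),
                     ("PCIe 4.0 NVMe (~5,000 MB/s)", 5000)]:
    print(f"2 TB over {label}: ~{copy_minutes(2000, speed):.0f} min")
```

Roughly an hour over SATA3 versus tens of minutes over NVMe, before any real-world overhead.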
 
  • Like
Reactions: 960design

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
Perhaps a radical increase in performance on ARM would drive consumers to adopt hardware with imperfect backwards compatibility, thus encouraging efforts to achieve a more comprehensive backwards compatibility.
I believe things are going this way. Microsoft has their own ARM dev kit, and there are plenty of other companies eyeing this market.

Hell, it wasn’t without precedent even before Apple switched. I recall having to run games for Windows 98 in “compatibility mode” on Vista because the NT kernel was different from the DOS-based one. Another non-Apple example would be the plethora of emulators for various video game consoles.

The big hurdle is performance. Currently only Apple has ARM CPUs that can compete with x86, with the other players at a disadvantage. And if high-performance ARM CPUs can’t be cost-competitive with x86 offerings, then they’re gonna have a hard time pushing adoption.
 

ChrisA

macrumors G5
Jan 5, 2006
12,917
2,169
Redondo Beach, California
I guess I didn't realize how much of a compatibility issue using an M1 Mac would really be. When my Intel refurb Mac burned up 60 days in, I figured I should grab the most updated model of the Mac Studio. And that was a mistake, it appears.

So I ran into a couple of issues lately with some old Windows software that I wanted to use again. To be fair, I didn't think I'd want to use that software when I bought this computer earlier in the year. But now I do. Anyway.

Is there any possible way to force these programs to run on Windows 11 in the Parallels software?

Or am I screwed?

It was a really great thing to have Intel Macs that could run any software imaginable. Sigh.
Buy any cheap Windows 11 computer, run it with no monitor or keyboard, and "share" the display to your new Mac. I see that Newegg has used mini-PCs for under $200. As long as you are not doing games, this works well. If you are worried about lag, use a wired 10GbE connection, but again, as long as this is not for games, even a GOOD WiFi connection is OK.

I run Windows (on VMware) on an old 2014 Mini and remote-share the display to my new M2 Pro mini; it works well enough for some things.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Apple had this opportunity with the new MacPro…but they didn’t.

Where do you see that opportunity? Massively increasing the number of cores was never in the cards with Apple's SoC design. That would require designing a completely new family of chips, which would mean exorbitant costs. And what about the GPU? No, the Mac Pro as we got it was certainly disappointing, but this type of expectation was always a pipe dream.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Perhaps a radical increase in performance on ARM would drive consumers to adopt hardware with imperfect backwards compatibility, thus encouraging efforts to achieve a more comprehensive backwards compatibility.

How do you plan to achieve such a radical increase? By scaling up the cores? x86 can play that game too (and they do; see Zen 4c). The current incentive for ARM in the server space is not performance but cost (Graviton is cheaper to rent than x86 instances) or a custom feature stack (like Nvidia Grace/Hopper with its ultra-wide CPU/GPU interface).
 

foo2

macrumors 6502
Oct 26, 2007
499
280
Buy any cheap Windows 11 computer, run it with no monitor or keyboard and "share" the display to your new Mac. I see that Newegg was a used mini-PC for under $200. As long as you are not doing games, this works well. If you are worried about lag, use a 10 GBE wired connection, but again, as long as this is not for games even a GOOD WiFi connection is OK.

I run Windows (on VM Ware) on an old 2014 Mini and remote share the display to my new M2 Pro mini, works well enough for some things.
For standard office things, though not games, even slow Wi-Fi is more than fast enough for everyday, constant use. I routinely RDP into machines on connections far slower than Wi-Fi, and it works perfectly well. No one should worry about connection speeds when considering this (again, for normal office use, not games).
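The arithmetic behind that is worth spelling out: raw 1080p at 30 fps would be over a gigabit per second, but RDP sends compressed deltas of changed regions, so office use typically needs only a few Mbit/s. The compression ratio below is a rough assumption for a mostly static screen, not a measurement:

```python
# Raw vs. typical-compressed bandwidth for a remote desktop session.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 30
raw_mbps = width * height * bits_per_pixel * fps / 1e6
print(f"Uncompressed 1080p@30: {raw_mbps:,.0f} Mbit/s")

# RDP only transmits changed regions, heavily compressed; for static office
# screens an effective ratio in the hundreds is a plausible assumption.
assumed_ratio = 300
print(f"With ~{assumed_ratio}x effective compression: "
      f"{raw_mbps / assumed_ratio:.1f} Mbit/s")
```

A few Mbit/s is well within even a mediocre Wi-Fi link, which is why the experience only falls apart for fast-moving content like games or video.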
 

darngooddesign

macrumors P6
Jul 4, 2007
18,362
10,114
Atlanta, GA
How do you plan to achieve such a radical increase? By scaling up the cores? X86 can play that game too (and they do - see Zen 4c). The current incentive for ARM in server space is not performance but cost (Graviton is cheaper to rent than x86 instances) or custom feature stack (like Nvidia Grace/Hopper with its ultra-wide CPU/GPU interface).
I thought Apple's approach was going to be having two or more high-bandwidth slots in the Mac Pro which could accept additional Ultras. It might not be as fast as a single big-ass SoC, but it would be a lot faster than a single Ultra.
 
Last edited:

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
I thought Apple's approach was going to be having two or more high-bandwidth slots in the Mac Pro which could accept additional Ultras. It might not be as fast as a single big-ass SoC, but it would be a lot faster than a single Ultra.
The issue with that is that it runs into the reason multi-CPU workstations became less popular: making two or more CPUs work together is just not as effective as building one big one.

It also flies in the face of Apple’s unified memory architecture, which they heavily rely on to get performance.

I have doubts that having multiple Ultras on a backplane would significantly increase performance, at least to the level people are expecting. And on top of all that, macOS only supports up to 64 CPUs; two Ultras would go beyond that limit.

Honestly, I think a better strategy would be some sort of PCIe-compatible interface allowing a connection to a board with more graphics cores. The main complaint people seem to have is with the graphical performance of Apple Silicon anyway.

The new Mac Pro may have disappointed people who weren’t paying attention to Apple Silicon, but I think anyone who’s taken a hard look at what Apple Silicon is saw this coming.
RAM was never going to be replaceable, there’s no external GPU support, and the Max chip only has one interconnect.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
None of us have the slightest need to get a new computer every month...
I was wondering why that beer-of-the-month-club subscription I signed up for late one night was so expensive, until I realized it was PC-of-the-month.
 
  • Like
Reactions: VivienM