Whatever. You are completely missing the point. But maybe that is the point.

Apple silicon is great, but on desktop hardware Intel and AMD are still faster. You can talk fans and thermals all you want, but as long as the CPU can maintain a higher clock and be faster than Apple silicon, the rest is just irrelevant.

Factor in faster GPUs with ray tracing and it is not much of a competition.

Again, I love Apple silicon in general, and on mobile specifically, but you have to call it as it is.
Not having read up on it much: does RT have any benefits outside the gaming world (including game development)?
 
I live in a cold part of the country, but the weather in some parts of the US this past summer was very hot. On the west coast, combine that with drought and more expensive hydro, and my question is: why deal with lower efficiency when you don't have to?
What do you mean? My Apple systems are actually faster at 85% of the work, and for the remaining 15% they're just a few minutes slower. So on those very hot days, why would I use that system to heat up my office versus the quiet and cool MacBook Pro?
 
I have limited my use of my 12th gen Core i9 and 3080 Ti because my electric bill has been increasing. Also, in the summer I have had to stop using that system, as my AC couldn't keep my office cool.

That seems drastic. Have you tried scaling down the power instead?

For example, in the UEFI BIOS I can limit the CPU by power or temperature. I don't need a 30K Cinebench R23 score with PBO in a mini-ITX build, so I decided to run eco mode on the 5950X by capping temperature at 64°C, and it still scores 23K in Cinebench R23 with just a single-fan air cooler, so similar performance to an M1 Ultra.

For the GPU, my Nvidia laptop has Fn-Q hotkey profiles for 40 W, 70 W, 80 W and 100 W, but I prefer 70 W since it's the sweet spot for performance per watt. You can also use the command-line 'nvidia-smi' tool to set a power limit. For AMD desktop GPUs, it's done through the software control panel that's part of the driver.
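If you'd rather script it than use the hotkeys, here's a rough sketch in Python that just shells out to nvidia-smi (the helper names are mine, purely for illustration; assumes nvidia-smi is on your PATH and you have admin/root rights, and the allowed range depends on the card):

# Rough sketch: query and cap the NVIDIA GPU power limit via nvidia-smi.
# Needs admin/root rights; the allowed min/max range depends on the board.
import subprocess

def current_power_limit_watts(gpu_index: int = 0) -> float:
    """Read the currently enforced power limit for one GPU, in watts."""
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.limit", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return float(out)

def set_power_limit_watts(watts: int, gpu_index: int = 0) -> None:
    """Apply a new power limit with nvidia-smi -pl."""
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
        check=True,
    )

if __name__ == "__main__":
    print(f"Current limit: {current_power_limit_watts():.0f} W")
    set_power_limit_watts(70)   # my 70 W sweet spot; pick your own value
    print(f"New limit: {current_power_limit_watts():.0f} W")

Same idea as the hotkey profiles, just scriptable; on some laptops the driver won't let you change the limit at all, in which case the vendor profiles are the only option.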
 
That seems drastic. Have you tried scaling down the power instead?

For example, in the UEFI BIOS I can limit the CPU by power or temperature. I don't need a 30K Cinebench R23 score with PBO in a mini-ITX build, so I decided to run eco mode on the 5950X by capping temperature at 64°C, and it still scores 23K in Cinebench R23 with just a single-fan air cooler, so similar performance to an M1 Ultra.

For the GPU, my Nvidia laptop has Fn-Q hotkey profiles for 40 W, 70 W, 80 W and 100 W, but I prefer 70 W since it's the sweet spot for performance per watt. You can also use the command-line 'nvidia-smi' tool to set a power limit. For AMD desktop GPUs, it's done through the software control panel that's part of the driver.
I’m not going to spend hours with settings. This is an advantage of my workflow containing so many systems. My macs are picking up the slack. 0 time wasted in bios for this issue.
 
I’m not going to spend hours with settings. This is an advantage of my workflow containing so many systems. My macs are picking up the slack. 0 time wasted in bios for this issue.

That's my approach - mixed systems to run different workloads. Move workloads to the best platform for that workload.
 
The way I see it, in the PC business Intel lost when most professional gamers moved from Intel towers to AMD towers! Anyone following Intel saw the drop in market share handed over to AMD over the last few years! That is the real war going on, and it's what Apple's ARM chips are really up against. Intel looks like a dinosaur today, with no real plan beyond putting more power toward its stale chip design.
 
Not having read up on it much: does RT have any benefits outside the gaming world (including game development)?

I would say that the major area of interest for hardware RT currently is 3D rendering. RT in games is still mostly a gimmick. It will surely change gradually, but RT is still too expensive to be used to its full potential in real time.

There is very little doubt in my head that Apple's hardware RT is coming. They were one of the first industry players to roll out a standard RT API which is clearly designed with hardware support in mind, and they are investing major resources into Blender — a senseless move unless their upcoming hardware will dramatically increase RT performance.
 
I’m not going to spend hours with settings. This is an advantage of my workflow containing so many systems. My macs are picking up the slack. 0 time wasted in bios for this issue.

If it's taking hours then it's not for you. Took me a few minutes the first time. Agree on distributed systems, so if I'm just browsing I don't use my desktop but grab the ChromeOS tablet, or the M1 MacBook if the tablet is charging.
 
I’m not going to spend hours with settings.
Minutes, not hours.

Plus, it's my opinion that it is in everyone's best interest to customize the settings to ensure the computer operates optimally. What is optimal is different for everyone; for me, power efficiency and cool running are the major takeaways.

I spent maybe 5 to 10 minutes configuring the BIOS to have my desktop run the way I want it to.

Other options are Intel's XTU or AMD's Ryzen Master.
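If you'd rather stay out of the BIOS entirely, on Windows you can also cap the maximum processor state from the command line with powercfg (the same "Maximum processor state" slider from the power plan settings). A rough sketch, with 85% as a purely illustrative value; run it from an elevated prompt:

# Rough sketch: cap "Maximum processor state" on Windows via powercfg.
# Dropping it below 100% effectively disables turbo boost, which cuts heat
# and power draw at some cost in peak performance. Needs admin rights.
import subprocess

def cap_max_processor_state(percent: int) -> None:
    subprocess.run(
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
         "SUB_PROCESSOR", "PROCTHROTTLEMAX", str(percent)],
        check=True,
    )  # limit while plugged in
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

if __name__ == "__main__":
    cap_max_processor_state(85)   # example value; tune to your own needs

It's coarser than XTU or Ryzen Master, but it takes seconds and is trivial to undo by setting the value back to 100.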
 
That's how I feel about Nvidia.

I think Intel and AMD's paths are converging. I'm really surprised (and a bit disappointed) at how hot Ryzen 9 runs. It's been reported that Raptor Lake chips will not run hotter even though they'll be running faster, so Intel has made some progress.

I'm not doubting Apple, and competition is good for the consumer, but it's clear that both Intel and AMD are pushing hard at making their CPUs faster, regardless of power consumption or heat.

For Apple, the M series really shines in the laptop space.
Apple has a habit of running the Intel Macs close to about 100C before kicking the fans up to full blast. The thermal junction limit (the temp at which the processor is supposed to fry itself) is typically around 105-110C, and Apple gets dangerously close. It seems to work out for them somehow; I haven't heard of very many processor failures on them. Some of the M2 Macs run even hotter, at 108C before throttling.

I was always a bit surprised that Apple took this approach though. The processors physically use more power when they get this hot (the internal resistance changes and they literally consume more power to perform the same workloads when operating at these kinds of temps). I assume Apple knows what they are doing, but in my limited understanding, it still doesn't make a whole lot of sense from an efficiency standpoint.
I don’t know if AMD’s approach/mindset is the same with Ryzen 7000 as Apple’s, however, the high temp on the new Ryzen series is a bit misleading. See here:


Additionally, at least for Ryzen 7000, having the CPU find its thermal max immediately appears to provide the benefit of a very consistent performance — which seems like a more than fair tradeoff. See here:


Nonetheless, it has caused quite a stir, with several enthusiasts echoing @ArkSingularity's response and triggering constant 'warnings' by influencers that “this is by design and okay.” To that I'd add that the component temperature chasing is driven by product marketing and extreme-overclocking obsession as well (i.e., needless for the vast majority of computer users).

But as already discussed in the other thread, power draw inflation has reached such ridiculous figures on the desktop that Apple will also have to review their TDPs if they want to compete at the high end.
Even though I don't have any need for a computer in that tier (i.e., Mac Pro), it is the area that still holds legitimate intrigue. Efficiency is a great pursuit; however, production work has other (equally important) factors to weigh, and is knowingly willing to make greater sacrifices in certain areas.
 
Even though I don't have any need for a computer in that tier (i.e., Mac Pro), it is the area that still holds legitimate intrigue. Efficiency is a great pursuit; however, production work has other (equally important) factors to weigh, and is knowingly willing to make greater sacrifices in certain areas.

Efficiency and performance don't have to be contradictory. We are used to looking at it this way since that's how the big industry players do it: push above the optimal efficiency point to outgun your competitor. The thing is, with how much more efficient Apple's hardware is, they could probably do the same and still sit well below industry-average power consumption. So there should be enough wiggle room for them to crank up the power while still claiming industry-leading efficiency, by a wide margin.
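Back-of-the-envelope, using the textbook approximation that dynamic power scales with frequency times voltage squared, and that voltage has to rise roughly in step with frequency near the top of the curve (real silicon is messier, so treat this as a toy model only):

# Toy model of why pushing past the efficiency sweet spot is so expensive.
# Assumption: dynamic power ~ f * V^2, and near the top of the V/f curve
# voltage rises roughly with frequency, so power grows roughly with clock
# cubed while performance grows only roughly linearly with clock.
def relative_power(clock_ratio: float) -> float:
    voltage_ratio = clock_ratio          # crude assumption: V scales with f
    return clock_ratio * voltage_ratio ** 2

for boost in (1.0, 1.1, 1.2, 1.3):
    print(f"+{(boost - 1) * 100:.0f}% clock -> "
          f"+{(relative_power(boost) - 1) * 100:.0f}% power")
# Prints roughly: +10% clock -> +33% power, +20% -> +73%, +30% -> +120%.

Which is exactly why a chip that starts from a much lower power baseline has so much room to crank clocks before it even reaches its competitors' power draw.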
 
I take a different view on performance. My 12th gen Intel 3080 Ti desktop is faster on benchmarks and on paper than my M1 Mac mini. Yet when I do video editing, my Mac mini blows it out of the water thanks to everything else the M1 SoC has. Someone mentioned it above; it's similar to what I do: "does this computer perform my work faster?" I don't take the "ooohhhh high numbers" approach.
Yep, the problem with benchmarks is that while they do provide a mostly fair method of comparison, they are far from comprehensive (i.e., not a great gauge of perceived performance).

As for noise, my desktop PC is whisper quiet: I have an i7-11700K with an RTX 2060, using an air cooler with Noctua fans.

Cooling the new CPUs is getting harder, so more fans may be needed, and of course that means more noise.
My desktop is quiet too. I have a 12th gen i9 and a 3080 Ti with Noctua as well. In fact I prefer the fan noise to my Mac Studio, which is quieter but has a severely irritating high pitch. Good thing I have my headphones on 95% of the time, because the performance of the Mac Studio is amazing!
The benefit of larger fans. Unfortunately, something most Apple designs can’t accommodate — but that seems an okay trade-off thus far with Apple Silicon.

Large fans are why I want to do my next build in the Fractal Design Torrent (Compact). Additionally, more Noctua versions of graphics cards, or similar concepts, would be nice. I don't need an eye-catching, curvy RGB shroud with tiny fans.
 
Yep, the problem with benchmarks is that while they do provide a mostly fair method of comparison, they are far from comprehensive (i.e., not a great gauge of perceived performance).



The benefit of larger fans. Unfortunately, something most Apple designs can’t accommodate — but that seems an okay trade-off thus far with Apple Silicon.

Large fans are why I want to do my next build in the Fractal Design Torrent (Compact). Additionally, more Noctua versions of graphics cards, or similar concepts, would be nice. I don't need an eye-catching, curvy RGB shroud with tiny fans.
I guess I am the only one here on team water loop.
 
I guess I am the only one here on team water loop.

I went with larger fans on my quiet desktop. My case has plenty of room for water cooling, but I didn't want to deal with it. A non-K CPU, a 75-watt GPU, and lots of fans keep it cool and quiet, at least below 50% CPU utilization.
 
I guess I am the only one here on team water loop.
I toyed with the idea of a water loop, but overall I felt air cooling is a simpler and safer approach. With a pump failure, at worst you damage your system; at the very least your machine is out of action until repaired. Air cooled, if you lose a fan you can still use it. And leaks can mean a short circuit.

I'm generally risk averse, so air cooled machines make the most sense.
 
If it's taking hours then it's not for you. Took me a few minutes the first time. Agree on distributed systems, so if I'm just browsing I don't use my desktop but grab the ChromeOS tablet, or the M1 MacBook if the tablet is charging.

Minutes, not hours.

Plus, it's my opinion that it is in everyone's best interest to customize the settings to ensure the computer operates optimally. What is optimal is different for everyone; for me, power efficiency and cool running are the major takeaways.

I spent maybe 5 to 10 minutes configuring the BIOS to have my desktop run the way I want it to.

Other options are Intel's XTU or AMD's Ryzen Master.
It won't take me minutes. I have never done it before. I don't want to cause an issue. What if I make settings too low and it causes performance issues? Well back to BIOS to tweak it!

It is not worth it. I use my systems for work. Any minute I spend playing around in the BIOS instead of working is a minute I'm not getting paid.
 
I want industry-leading efficiency in an M1/M2 MacBook, so I turn on Low Power Mode in Settings. At most, the M2 CPU package peaks at 7.5 watts, and even at 7.5 watts it still delivers very good CPU performance.



1664763361953.png
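For anyone who wants to verify the package power themselves, macOS ships a powermetrics tool (needs sudo). A rough sketch that grabs one sample and prints the power lines; the exact output wording varies a bit between macOS versions, so treat the parsing as an assumption:

# Rough sketch: sample Apple Silicon CPU/package power via macOS powermetrics.
# powermetrics is built into macOS but requires root; output wording differs
# between macOS versions, so we just print any line reporting milliwatts.
import subprocess

def sample_cpu_power(samples: int = 1, interval_ms: int = 1000) -> None:
    out = subprocess.run(
        ["sudo", "powermetrics", "--samplers", "cpu_power",
         "-n", str(samples), "-i", str(interval_ms)],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Power" in line and "mW" in line:
            print(line.strip())

if __name__ == "__main__":
    sample_cpu_power()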
 
I want industry-leading efficiency in an M1/M2 MacBook, so I turn on Low Power Mode in Settings. At most, the M2 CPU package peaks at 7.5 watts, and even at 7.5 watts it still delivers very good CPU performance.



View attachment 2086660
How does that compare with regular mode? I know I can look it up but for the interest of this thread some easily visible side-by-side might be helpful (at the very least it’s interesting).
 
This has been on my mind for a while. How does Apple follow up its success of the M1, or does it have to? I think their main bread and butter for Macs is the laptop sector, and this falls right into Apple's strengths with its ARM processors. The issue is they could be viewed as falling behind if they roll out an M2 Pro/Max/Ultra that can't compete with the latest offerings from Intel and AMD.

Performance wise we're seeing significant gains from Intel and AMD. I could spend hours and hours going through each benchmark, but Cinebench R23 is one that is generally accepted. The Intel 13th gen numbers fall into the unverified category, so take them with a grain of salt. Also, these numbers are not written in stone; I can run Cinebench 10 times and get slightly different results, so keep that in mind.

View attachment 2083674
The ONLY *interesting* benchmarks in this space are single-threaded benchmarks. Cinebench multicore and similar only tell us that Intel and AMD have made a *business choice* to place some large number of cores on what they call HEDT chips; they tell us nothing interesting about technology. Maybe you're angry that Apple doesn't also offer a chip with 32 P-cores and no GPU (or whatever), but that's not a technology complaint, it's a business complaint.

Even then Cinebench is uninteresting insofar as it is a component of a large program whose programmers have unclear incentives. Does Maxon *sell* enough copies of Cinema 4D (as opposed to benchmark downloads) to care much about Apple Silicon-specific optimizations (as opposed to, eg, AVX-512 optimizations)? We know, for example, that Cinebench is at R23, while Cinema 4D is already at Release 26; this strongly suggests that EVEN IF Cinema 4D is being rapidly optimized for Apple Silicon, that won't necessarily translate to improvements in Cinebench until Cinebench itself is updated. And why should that be a priority for Maxon?

As for the basic point, there's a *massive* space of options for making Apple's particular cores faster. I don't believe there is any sort of "slowdown", whether because of engineers supposedly leaving, or because of TSMC supposedly slowing down, or because there are supposedly no good ideas left.
Rather, look at the situation the way Apple does, as a company that cares nothing about creating premature hype (unlike Intel) or constant churn. What good does it do them or their customers to waste scarce engineering resources on new chips at the high end that are just *slightly* better?
From the outside it's clear that Apple already have multiple somewhat independent groups that work together very well, but on separate schedules. So, for example, the GPU or NPU or ISP may take a big jump, then "stagnate" for two or even three years as what was delivered gets minor improvements, perhaps a few tweaks and bug fixes, but the serious work goes into the next version. You also see this pattern in the patent record.

So why be surprised at the same thing in the cores? Both the A15 and A16 cores are big leaps forward in energy efficiency, along with small improvements in other aspects. Good, necessary work, but not overly interesting for the desktop (meaning there's no strong incentive to go to the expense of moving them to the desktop). Meanwhile, I imagine, a separate team is working on the next cores for the desktop. This team will doubtless roll in the good energy saving ideas adopted by the A15 and A16, but is not simply refining the A16.
I suspect, going further (because Apple is extremely efficient this way) that some aspects of what we will see in the new desktop chips (for example a variety of page sizes, some substantial changes to virtual machines, and even a new cache protocol) were test-run in these phone chips precisely because if problems are discovered on the phone, it's no big deal – it's not like anyone using a phone is engaged in fancy virtual machine tricks or requires a sophisticated cache protocol!

In other words, all this weeping and wailing is simply the impatience of three-year-olds.
A new desktop SoC is coming. It will be spectacular along all dimensions. BUT
even if it is ready within Apple (unclear) it's surely designed targeting N3 (because duh! why target the old node when we all know 2023 and 2024 are N3 years?) so there's the requirement of flowing through N3 before it can be shipped to users. TSMC tells us N3 is in volume now. But there's a long path from a wafer going into the factory to that wafer coming out, and then having to flow through PCB and then system manufacturing...
If Gurman is correct that there will be no October event, I think that also tells us that there will be no new desktops in October. When the new desktops do ship (and who knows when that will be in terms of what Apple is trying to achieve; is it important to hit Christmas? Are there advantages in delaying till January?), I expect there WILL be a big event, simply because I expect the (single-threaded) performance boost to be massive, and for Apple to want to make a big deal about it.
 
How does that compare with regular mode? I know I can look it up but for the interest of this thread some easily visible side-by-side might be helpful (at the very least it’s interesting).
~1880 single core and ~8710 multi-core according to Mactracker and:

Efficiency and performance don't have to be contradictory. We are used to looking at it this way since that's how the big industry players do it: push above the optimal efficiency point to outgun your competitor. The thing is, with how much more efficient Apple's hardware is, they could probably do the same and still sit well below industry-average power consumption. So there should be enough wiggle room for them to crank up the power while still claiming industry-leading efficiency, by a wide margin.
True, though they don't scale equally. Similarly, the best value is in the mid-range models. Nonetheless, there are users who can still make the entry and top-spec configs worthwhile. Of course, most user needs fall within the middle range.

For me, on the Windows desktop side, I'm willing to allow/enable the auto OC/boost, but I don't feel it's worth the hassle of spending hours trial-and-erroring each little increment of each parameter. And I did get a reminder within the past year trying to undervolt an 11700K: no matter how I went about it, the system always had at least one hard crash (i.e., reboot to the BIOS menu) within 48 hours under higher load, even when adjusting as little as a -0.05 V offset.

It’s why I can (at least somewhat) support:
It won't take me minutes. [...] What if I make settings too low and it causes performance issues? Well back to BIOS to tweak it!

It is not worth it. I use my systems for work. Any minute I spend playing around in the BIOS instead of working is a minute I'm not getting paid.
 
I want industry-leading efficiency in an M1/M2 MacBook, so I turn on Low Power Mode in Settings. At most, the M2 CPU package peaks at 7.5 watts, and even at 7.5 watts it still delivers very good CPU performance.



View attachment 2086660

Runs like poop though compared to old AMD 4650U. All running Raze 1.5.0 port of Duke Nukem 3D.

M1 Macbook Air full power mode (Duke Nukem 3D 35.7 fps)
Screen Shot 2022-10-02 at 8.07.57 PM - Copy.png


M1 Macbook Air low power mode (Duke Nukem 3D 26.2 fps)
Screen Shot 2022-10-02 at 8.12.21 PM - Copy.png


AMD 4650U battery efficient mode ~5.5W (Duke Nukem 3D 96.4 fps)
Duke Nukem 3D_ Atomic Edition 10_2_2022 8_33_01 PM - Copy.png

Screenshot 2022-10-02 210459 - Copy.png
 
Runs like poop though compared to old AMD 4650U. All running Raze 1.5.0 port of Duke Nukem 3D.

M1 Macbook Air full power mode (Duke Nukem 3D 35.7 fps)
View attachment 2086727

M1 Macbook Air low power mode (Duke Nukem 3D 26.2 fps)
View attachment 2086729

AMD 4650U battery efficient mode ~5.5W (Duke Nukem 3D 96.4 fps)
View attachment 2086731
View attachment 2086758
Why are you comparing an x86 Windows-native game, which runs through Rosetta on the M1 but natively on the AMD chip?

Also, I compared the CPU in LPM, not the GPU. I know the M1 GPU is weak compared to AMD U chips and the M2.

Let me do the comparison for you: the AMD 4650U scores 1111 single-core and 5652 multi-core in normal mode, so in Low Power Mode the M2 CPU is still faster than the 4650U while using less power.

 