
Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
This is just Apple getting started. Imagine the beast of a desktop CPU they will have for the iMac and the Mac Pro. They won't be restricted to low wattage, but they will still be low wattage compared to comparable Intel and AMD CPUs.
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,296
How about we start discussing real-world usage?


This needs to be confirmed by the community, to see if he's running the free version, which doesn't use GPU acceleration, and whether the settings are done properly to utilize CUDA. There have been other comparisons where the person didn't set up the test environment properly, made a video claiming the MacBook Pro is faster, and then had to put out an update retracting it.

 
  • Like
Reactions: g75d3

JimmyjamesEU

Suspended
Jun 28, 2018
397
426
This needs to be confirmed by the community, to see if he's running the free version, which doesn't use GPU acceleration, and whether the settings are done properly to utilize CUDA. There have been other comparisons where the person didn't set up the test environment properly, made a video claiming the MacBook Pro is faster, and then had to put out an update retracting it.
Could you give me a reference to these fake comparisons?

He is a professional editor and tested with both Premiere and Resolve (if you watched the video), and to my knowledge there isn't a free version of Premiere.
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,296
Could you give me a reference to these fake comparisons?

He is a professional editor and tested with both Premiere and Resolve (if you watched the video), and to my knowledge there isn't a free version of Premiere.

Look back above. The free version of DaVinci Resolve doesn't utilize the GPU. Premiere Pro has a reputation for being a resource hog, so that's not surprising.
 

Surne

macrumors member
Sep 27, 2020
76
57

Attachments

  • Screenshot 2021-10-28 210003.png

Frankied22

macrumors 68000
Nov 24, 2010
1,788
594
Apple has just turned the whole PC market on its ear. It has proved that you can have both performance and energy efficiency at the same time. A question for you: when you were doing video rendering with DaVinci Resolve on both systems, did you hear fan noise, and how loud was it?

By the way, congrats on your new MacBook Pro M1 Max.

I heard no fans on the Mac. Also, I am playing LoL right now with a friend, with the Mac plugged into a 144 Hz monitor and the built-in screen open too, and it's getting about 130 fps at 1440p. I am using Discord, streaming through Discord, and have other programs open, and the fans are NOT EVEN ON.
 

JimmyjamesEU

Suspended
Jun 28, 2018
397
426
Look back above. The free version of DaVinci Resolve doesn't utilize the GPU. Premiere Pro has a reputation for being a resource hog, so that's not surprising.
I just now saw the video you referenced above, thanks.

From watching it, it referred to Blender, not Premiere or Resolve.

In the conclusion he stated that the Mac doesn’t have gpu support in Blender.

You complained that people were rigging their Resolve benchmark tests by not enabling GPU acceleration on the PC. You then posted a video about an entirely different application, which has GPU acceleration on PC but not on Mac.

I genuinely don’t know what to say about this.
 

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
This needs to be confirmed by the community, to see if he's running the free version, which doesn't use GPU acceleration, and whether the settings are done properly to utilize CUDA. There have been other comparisons where the person didn't set up the test environment properly, made a video claiming the MacBook Pro is faster, and then had to put out an update retracting it.

Are you kidding me? That guy does video editing for a living and you think he is using a free version? You are a laugh riot. Maybe try harder. I know it is hard for you to accept. It is too bad you want to stay in denial land.
 
Last edited:
  • Like
Reactions: EPO75 and Romain_H

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
This needs to be confirmed by the community, to see if he's running the free version, which doesn't use GPU acceleration, and whether the settings are done properly to utilize CUDA. There have been other comparisons where the person didn't set up the test environment properly, made a video claiming the MacBook Pro is faster, and then had to put out an update retracting it.

Watch this:

 

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
Since you brought up Blender: this is an M1 Pro versus an AMD Ryzen 9 5900HS. Not even the M1 Max, just the M1 Pro.

 

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
Some people just gotta keep moving the goalposts whenever a macOS product performs well...?!?
But this is even more fundamental: this is about Apple Silicon vs. Intel and AMD. macOS just happens to go along for the ride.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
This is just Apple getting started. Imagine the beast of a desktop CPU they will have for the iMac and the Mac Pro. They won't be restricted to low wattage, but they will still be low wattage compared to comparable Intel and AMD CPUs.
The existing "open" PC architecture is full of bottlenecks.

The RAM to CPU bottleneck results in the CPU having to crank up to ridiculous frequencies to compensate.

The PCIe bottleneck results in dGPUs having to use ridiculously high-bandwidth, expensive, and power-hungry VRAM to compensate.

Apple's approach is a departure from this architecture, which frankly is quite refreshing. You get performance and energy savings at the same time, resulting in longer battery life for mobile applications and less cooling and noise for desktop solutions.
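
To put rough numbers on it (the bandwidth figures below are ballpark assumptions, not measurements), here's what moving a 1 GiB working set looks like across each link:

```python
# Back-of-the-envelope transfer times; bandwidths are assumed, not measured.
GiB = 1024**3

links_gbps = {
    "PCIe 4.0 x16 (CPU RAM -> dGPU VRAM)": 32,   # ~32 GB/s theoretical
    "GDDR6 VRAM (dGPU local reads)":       448,  # RTX 3070-class, assumed
    "M1 Max unified memory":               400,  # Apple's quoted figure
}

working_set = 1 * GiB
for name, gbps in links_gbps.items():
    ms = working_set / (gbps * 1e9) * 1000
    print(f"{name}: {ms:.1f} ms to move 1 GiB")
```

Even with generous assumptions, the PCIe hop is an order of magnitude slower than either memory pool.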

Can't wait to see what the 27" iMac and Mac Pro replacement will look like.
 

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
The existing "open" PC architecture is full of bottlenecks.

The RAM to CPU bottleneck results in the CPU having to crank up to ridiculous frequencies to compensate.

The PCIe bottleneck results in dGPUs having to use ridiculously high-bandwidth, expensive, and power-hungry VRAM to compensate.

Apple's approach is a departure from this architecture, which frankly is quite refreshing. You get performance and energy savings at the same time, resulting in longer battery life for mobile applications and less cooling and noise for desktop solutions.

Can't wait to see what the 27" iMac and Mac Pro replacement will look like.
The only downside with Apple's architecture is upgradeability. If you think about it, what Apple has managed to accomplish is to simplify the architecture: wire everything directly to the CPU. And having 32-channel RAM is unheard of in a laptop, let alone in PC desktops. You need to go to something like the AMD EPYC processors for that, and those use 225 watts of power.

With M1 Pro and Max you can get peak performance running on battery alone.

And Apple is just getting started.
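
For a rough sanity check on that 32-channel figure (assuming 16-bit LPDDR5-6400 channels, which is how the M1 Max is usually described):

```python
# Peak bandwidth = channels * (bus width in bytes) * transfer rate.
# Assumed configs: M1 Max as 32 x 16-bit LPDDR5-6400 channels,
# a typical desktop as 2 x 64-bit DDR4-3200 channels.
def peak_gbps(channels, bits, transfers_per_s):
    return channels * (bits / 8) * transfers_per_s / 1e9

print(peak_gbps(32, 16, 6400e6))  # ~409.6 GB/s, close to Apple's ~400 GB/s claim
print(peak_gbps(2, 64, 3200e6))   # ~51.2 GB/s for a dual-channel DDR4 desktop
```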
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
The only downside with Apple's architecture is upgradeability.
You have a point there, but frankly, IMHO, with technology advancing at its current pace, by the time I'm ready to upgrade my supposedly upgradeable computer, the upgrade probably wouldn't buy much in terms of speed. It probably makes more sense to replace the entire widget and recycle or re-purpose the old hardware.
 
  • Like
Reactions: Taz Mangus

JouniS

macrumors 6502a
Nov 22, 2020
638
399
The existing "open" PC architecture is full of bottlenecks.

The RAM to CPU bottleneck results in the CPU having to crank up to ridiculous frequencies to compensate.

The PCIe bottleneck results in dGPUs having to use ridiculously high bandwidth, expensive and power hungry VRAMs to compensate.
Intel and Apple Silicon both have the same RAM to CPU bottleneck. There are some small quantitative differences, but you can safely ignore them most of the time, even if you are writing performance-critical software. The high frequencies of Intel CPUs are unrelated to the bottleneck. Intel simply does it because it's possible and beneficial in some consumer applications.

The primary effect of the PCIe bottleneck is the programming model, where the CPU and the GPU are effectively separate computers connected over a fast network. That's not a real problem in many applications, because they have to scale to multiple systems anyway. Other applications would benefit from the simplicity and performance of shared memory. The bottleneck doesn't increase the need for memory bandwidth, but it may increase the amount of memory needed.
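
In code, the difference looks roughly like this toy model, where a buffer copy stands in for the PCIe hop (the function names and the "kernel" are invented for illustration, not any real API):

```python
# Toy model: "VRAM" is a private copy of the host buffer; the copies
# stand in for PCIe transfers. Nothing here is a real GPU API.

def compute(buf: bytearray) -> bytearray:
    # Stand-in for a kernel: brighten every byte in place.
    for i, b in enumerate(buf):
        buf[i] = min(b + 10, 255)
    return buf

def run_on_dgpu(host: bytearray) -> bytearray:
    vram = bytearray(host)   # host -> device copy over "PCIe"
    compute(vram)            # kernel runs against private VRAM
    host[:] = vram           # device -> host copy back
    return host

def run_on_uma(host: bytearray) -> bytearray:
    compute(host)            # kernel reads/writes the same RAM; no staging copies
    return host

assert run_on_dgpu(bytearray(1024)) == run_on_uma(bytearray(1024))
```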

You have a point there, but frankly, IMHO, with technology advancing at its current pace, by the time I'm ready to upgrade my supposedly upgradeable computer, the upgrade probably wouldn't buy much in terms of speed. It probably makes more sense to replace the entire widget and recycle or re-purpose the old hardware.
Upgradeability is more about providing cost-effective devices to many different use cases rather than upgrading systems that are already in use. Tight integration makes sense when you are an average user or close to it. A modular architecture quickly becomes more competitive when one user requires a lot of RAM, another needs several high-end GPUs, and yet another needs a lot of local disk space.
 
  • Like
Reactions: Surne and kvic

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Intel and Apple Silicon both have the same RAM to CPU bottleneck. There are some small quantitative differences, but you can safely ignore them most of the time, even if you are writing performance-critical software. The high frequencies of Intel CPUs are unrelated to the bottleneck. Intel simply does it because it's possible and beneficial in some consumer applications.
I agree both have bottlenecks. Intel's bandwidth is a lot smaller compared to Apple Silicon's, and Intel's problem is that their CPUs have to cater to a lot of variations, so they cannot reduce the bottlenecks as freely as Apple can. Apple can go wild, limited only by physics and cost. Increasing CPU clock frequency only benefits workloads that fit into the L1, L2 and L3 caches. Once you need more, you're hit with the RAM-to-CPU (i.e. past the L3 cache) bottleneck. When a 3.2 GHz processor can process more data than a CPU running at 5 GHz, it shows there is a bottleneck with the 5 GHz CPU.

Engineering is always about trade-offs.
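
A toy illustration of that point (the numbers are made up for the example): once the working set streams from RAM, runtime is data volume over bandwidth, and clock speed barely enters into it.

```python
# Memory-bound pass over 10 GB: runtime is set by bandwidth, not clock.
data_bytes = 10e9

def streaming_time_s(bandwidth_gbps):
    return data_bytes / (bandwidth_gbps * 1e9)

print(streaming_time_s(400))  # hypothetical 3.2 GHz chip, 400 GB/s: 0.025 s
print(streaming_time_s(50))   # hypothetical 5 GHz chip,    50 GB/s: 0.200 s
```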

The primary effect of the PCIe bottleneck is the programming model where the CPU and the GPU are effectively separate computers connected over a fast network. That's not a real problem in many applications, because they have to scale to multiple systems anyway. Other applications would benefit from the simplicity and performance of shared memory. The bottleneck doesn't increase the need for memory bandwidth, but it may increase the need for the amount of memory.
Programming models can be abstracted from the underlying implementations. APIs can be written to mask away the RAM-to-PCIe transfer, but that doesn't mean the bottleneck is not there. Take this hypothetical scenario: say we load a 10 MB texture that the GPU needs. In the UMA case, it's loaded into RAM and the SLC already has a copy of it, so the GPU doesn't have to wait and can access it directly. The higher Apple Silicon's RAM bandwidth goes, the better the performance. And again, Apple's GPU doesn't need to clock as high to maintain the same performance, just as in the CPU case. With the M1 Max, Apple's GPU has access to more than 40 GB of data to process.

In the case of a dGPU with VRAM, the transfer is limited by PCIe's roughly 32 GB/s ceiling. So to mitigate this, the VRAM has to run as fast as possible to keep the GPU fed. PCIe is great for expansion, but not so good for extreme-bandwidth data transfers.
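
Sticking with that hypothetical 10 MB texture and the ~32 GB/s PCIe assumption:

```python
# The 10 MB texture from the scenario above, crossing PCIe vs. already
# resident in unified memory. PCIe figure is the ~32 GB/s assumed above.
texture_bytes = 10e6
pcie_gbps = 32

copy_ms = texture_bytes / (pcie_gbps * 1e9) * 1000
print(f"PCIe copy: {copy_ms:.3f} ms per texture")  # ~0.31 ms each
print(f"60 fps frame budget: {1000/60:.1f} ms")    # ~16.7 ms
# A few dozen uncached textures per frame would eat the whole budget;
# with UMA that copy step simply doesn't exist.
```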

Upgradeability is more about providing cost-effective devices to many different use cases rather than upgrading systems that are already in use. Tight integration makes sense when you are an average user or close to it. A modular architecture quickly becomes more competitive when one user requires a lot of RAM, another needs several high-end GPUs, and yet another needs a lot of local disk space.
Which is why Apple doesn't go into every market there is. They choose to play in the markets where they think they can make a profit. Apple most likely decided that the market they want to play in doesn't value upgradeability. I don't see anything wrong with that.

We still have not seen what Apple's AS Mac Pro looks like, but again, Apple is playing in a niche segment of the market here. Definitely not in the enthusiast market.
 
  • Like
Reactions: Taz Mangus

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
You have a point there, but frankly, IMHO, with technology advancing at its current pace, by the time I'm ready to upgrade my supposedly upgradeable computer, the upgrade probably wouldn't buy much in terms of speed. It probably makes more sense to replace the entire widget and recycle or re-purpose the old hardware.

Buy Mn Max-powered Mac minis with 10Gb Ethernet; every time you upgrade to a new unit, your personal render farm increases in size...?!? ;^p
 
  • Haha
Reactions: quarkysg

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Programming models can be abstracted from the underlying implementations. APIs can be written to mask away the RAM-to-PCIe transfer, but that doesn't mean the bottleneck is not there.
That's exactly what I meant. All good abstractions are leaky. Everything can be abstracted away, but often you have to understand how things really work beneath several abstraction layers if you want to make informed decisions, especially if you care about performance.

In the case of a dGPU with VRAM, the transfer is limited by PCIe's roughly 32 GB/s ceiling. So to mitigate this, the VRAM has to run as fast as possible to keep the GPU fed. PCIe is great for expansion, but not so good for extreme-bandwidth data transfers.
Faster VRAM doesn't help if the bottleneck is getting the data to the GPU in the first place. More VRAM may help if it allows you to keep the data there for later use.
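
Put differently (the bandwidths here are illustrative): for streamed data, the chain runs at the speed of its slowest link, so raising VRAM speed above the PCIe rate buys nothing.

```python
# Effective throughput of a chain of links is the minimum of its stages.
def chain_gbps(*stages):
    return min(stages)

print(chain_gbps(32, 448))  # PCIe 32 GB/s + VRAM 448 GB/s -> 32 GB/s effective
print(chain_gbps(32, 900))  # doubling the VRAM speed      -> still 32 GB/s
```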

We still have not seen what Apple's AS Mac Pro looks like, but again, Apple is playing in a niche segment of the market here. Definitely not in the enthusiast market.
The workstation market is just a collection of tiny niches. If Apple takes the easy "4x M1 Max" approach, the new Mac Pro may be underwhelming to many current users. Not enough GPU power, not enough RAM, and so on. It may also lose some of the conceptual simplicity of a true unified memory architecture if accessing the RAM behind another chiplet is slower than accessing local memory.
 