
Bodie CI5

macrumors 6502
Apr 22, 2014
255
144
How so? Their first mainstream Mac capable of gaming was released in late 2020
How so? I remember my brother played both WoW and Halo on his PPC PowerBook for a while.

As a muck around, I ended up playing WoW and Deus Ex on my 2014 iMac for a couple of months. Granted, at 5K it wasn’t the best, but it could be done.
 

jujoje

macrumors regular
May 17, 2009
247
288
Why wouldn’t Apple build their own 3D renderer…? When you consider their ‘long game’ approach to most things, it seems logical that they would put some effort into doing something that takes advantage of and shows off what their GPU can really do. Besides, it is even more conspicuous in its absence considering their focus on AR/VR as a vital part of Apple's future. Heck, even AMD has a rendering engine, and they're coming from a position where they were looking for loose change between the couch cushions to pay rent!

As I was wittering on about somewhere else in the forums, I would love Apple to release a reference Metal / GPU renderer to see what we could get performance-wise out of something coded to the hardware (perhaps replacing the iffy Metal Storm delegate that Xcode and Preview use). I suspect, or perhaps hope, that we'll be seeing a lot more at WWDC with regards to pro apps and rendering.

Apple's AR already has a lot of cool rendering technology: real-time object occlusion, ambient occlusion, motion blur, etc. They do seem to have some weird notions sometimes; over on the developer forums someone was asking for displacement in AR and the Apple developers were asking why you'd want that.

AMD Pro Render is... not great. I applaud and appreciate the effort, and their MaterialX library is really nice, but as a renderer it's underwhelming. I think it helps AMD work through how to implement ray tracing / path tracing, but it isn't particularly robust.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Why wouldn’t Apple build their own 3D renderer…?

They kind of did, in a way. Metal RT is geared towards production renderers and significantly simplifies development.
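Just to illustrate what I mean (a rough sketch only, nothing like a full production renderer, and the single triangle is a placeholder): with Metal RT you can have an acceleration structure built and ready for ray queries in a few dozen lines of Swift, and the actual intersection then happens in a Metal shader.

```swift
import Metal

// Rough sketch only (not Apple's renderer): building a ray-tracing acceleration
// structure for a single placeholder triangle with Metal RT. Assumes a GPU that
// reports support for Metal ray tracing.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("Metal unavailable") }

// One triangle, stored as three SIMD3<Float> vertices.
var vertices: [SIMD3<Float>] = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
let vertexBuffer = device.makeBuffer(bytes: &vertices,
                                     length: MemoryLayout<SIMD3<Float>>.stride * vertices.count,
                                     options: [])!

let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
geometry.vertexBuffer = vertexBuffer
geometry.vertexStride = MemoryLayout<SIMD3<Float>>.stride
geometry.triangleCount = 1

let descriptor = MTLPrimitiveAccelerationStructureDescriptor()
descriptor.geometryDescriptors = [geometry]

// Ask Metal how big the structure and its build scratch space need to be, then build on the GPU.
let sizes = device.accelerationStructureSizes(descriptor: descriptor)
let accel = device.makeAccelerationStructure(size: sizes.accelerationStructureSize)!
let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize, options: .storageModePrivate)!

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeAccelerationStructureCommandEncoder()!
encoder.build(accelerationStructure: accel, descriptor: descriptor,
              scratchBuffer: scratch, scratchBufferOffset: 0)
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
// A compute or fragment shader can now intersect rays against `accel` using an
// `intersector` in Metal Shading Language.
```

Metal handles the BVH build and scheduling for you, which is most of the "simplifies development" part.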


How so? I remember my brother played both WoW and Halo on his PPC PowerBook for a while.

As a muck around, I ended up playing WoW and Deus Ex on my 2014 iMac for a couple of months. Granted, at 5K it wasn’t the best, but it could be done.

Sure, back in the PPC days it was a bit different, since the most powerful GPU in the world only used around 50 watts of power and could fit in a laptop. Things have been a bit different over the last 15 years, though. Apple has always prioritized having good GPU performance on their low end, even making some unpopular decisions (such as sticking with slower CPUs for a while so that they could use faster Nvidia iGPUs), but third-party offerings just weren’t that good. Baseline Intel Macs were absolutely capable of running some simpler indie games or titles like Civilization as long as you were OK with reduced quality, but that’s about it.

M1 was a tremendous improvement that boosted the gaming capability of these consumer Macs, catapulting them into an entirely different category. Even more than a year after its release, the base M1 is at least 50% faster than any other low-power GPU on the market.
 

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,199
M1 was a tremendous improvement that boosted the gaming capability of these consumer Macs, catapulting them into an entirely different category. Even more than a year after its release, the base M1 is at least 50% faster than any other low-power GPU on the market.
And let's wait for the M2 in the fall... everyone expects the GPU to have around a 40-50% improvement.
So if Apple can keep this up from generation to generation, at least until the M4, the performance will be incredible for ultrabooks as well. I mean, can we expect the M3 to have 2x the M1's GPU, I wonder? And at what power consumption? Can Apple keep the wattage around the same as the M1, or will it be doubled?
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,677
And let's wait for the M2 in the fall... everyone expects the GPU to have around a 40-50% improvement.
So if Apple can keep this up from generation to generation, at least until the M4, the performance will be incredible for ultrabooks as well. I mean, can we expect the M3 to have 2x the M1's GPU, I wonder? And at what power consumption? Can Apple keep the wattage around the same as the M1, or will it be doubled?

I think Apple still has some tricks up their sleeve to improve the efficiency of their GPUs, but the laws of physics still apply… I would expect the base M-series to be equivalent to whatever the contemporary Nvidia entry-to-midrange graphics is, e.g. the M2 should be in the ballpark of the 3050 (at least the lower-TDP versions).
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
If you go to gfxbench.com you get other results. There M1 Ultra is faster than RTX 3080 and close to 3090. Also Anandtech explained last time reviewing M1 Max/Pro that M1 seems to be CPU bound in games and can't use all the memory bandwidth to feed the GPU.

[Attachment: GFXBench comparison screenshot]
GFXBench has lower DX11 scores than Vulkan. At this point I don't understand why they insist on defaulting to DX11 instead of Vulkan on Windows (or fix their DX12 renderer and tests). That, and there is no way the 3080 Ti is faster than the 3090....
 
  • Like
Reactions: Homy

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
If I remember correctly, folks familiar with circuit design claimed that it can be approached differently, resulting in different properties. Maybe someone like @cmaier can comment on this.
I’m pretty convinced Apple simply chose the knee of the curve. Increasing voltage further to increase clock speed has diminishing returns past the point they chose - in other words, they picked the most energy-efficient point. Why they didn’t ramp up more for the big boxes is a marketing decision, I guess.

The maximum speed at a given voltage is a physical characteristic of the design, and there is a maximum voltage that works before you fry the chip, so everything that everyone said was right.
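To put rough numbers on that knee (these figures are made up for illustration, not Apple's actual voltage/frequency table): dynamic power goes roughly as C·V²·f, so once the frequency gains per volt flatten out, every extra 100 MHz gets very expensive.

```swift
import Foundation

// Made-up operating points, not Apple's real numbers. Dynamic power scales roughly as
// P = C * V^2 * f; capacitance is set to 1 since only the relative comparison matters.
let capacitance = 1.0

let operatingPoints: [(volts: Double, gigahertz: Double)] = [
    (0.85, 3.0),   // near the efficiency knee
    (1.00, 3.4),   // ~13% more clock for ~18% more voltage
    (1.15, 3.6),   // only ~6% more clock on top of that
]

for point in operatingPoints {
    let relativePower = capacitance * point.volts * point.volts * point.gigahertz
    let perfPerWatt = point.gigahertz / relativePower
    print(String(format: "%.2f V, %.1f GHz -> relative power %.2f, perf per watt %.2f",
                 point.volts, point.gigahertz, relativePower, perfPerWatt))
}
// Clock speed climbs slowly while power climbs fast: that's the diminishing-returns
// region past the knee.
```

The perf-per-watt column is the thing that falls off a cliff past the knee, which is presumably the number Apple cares about most.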
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I’m pretty convinced Apple simply chose the knee of the curve. Increasing voltage further to increase clock speed has diminishing returns past the point they chose - in other words, they picked the most energy-efficient point. Why they didn’t ramp up more for the big boxes is a marketing decision, I guess.

The maximum speed at a given voltage is a physical characteristic of the design, and there is a maximum voltage that works before you fry the chip, so everything that everyone said was right.
I've always wondered: why do CPUs run at such a low voltage but pull so many amps? Wouldn't it be better to use a higher voltage to lower the amperage needed?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I've always wondered: why do CPUs run at such a low voltage but pull so many amps? Wouldn't it be better to use a higher voltage to lower the amperage needed?
It doesn’t work that way, mostly because transistors don’t behave like resistors, but like amplifiers.

You increase the voltage in order to increase the current. You increase the current because current is, by definition, the rate at which electrical charge moves. So the more current, the faster you can charge and discharge the transistor drains.
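To put toy numbers on it (purely hypothetical values, not measured from any real chip): the time to charge a gate's load capacitance through the supply voltage is roughly t = C·V/I, so more drive current means faster switching, and the per-gate currents are tiny - it's the billions of gates that add up to the amps you see at the package.

```swift
import Foundation

// Toy numbers, hypothetical: time to swing a gate's load capacitance through the
// supply voltage is roughly t = C * V / I.
let loadCapacitance = 1e-15   // 1 fF per gate, a made-up figure

// Hypothetical drive currents at two supply voltages; transistor drive current
// rises faster than linearly with voltage above the threshold.
let drivePoints: [(volts: Double, amps: Double)] = [
    (0.8, 40e-6),   // 40 µA
    (1.0, 80e-6),   // 80 µA
]

for drive in drivePoints {
    let switchTime = loadCapacitance * drive.volts / drive.amps
    print(String(format: "%.1f V supply: ~%.1f ps to charge the node",
                 drive.volts, switchTime * 1e12))
}
// Each switching event moves a tiny charge, but billions of gates toggling billions
// of times per second is why the total package current adds up to many amps at ~1 V.
```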
 
  • Like
Reactions: philosophree

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
It doesn’t work that way, mostly because transistors don’t behave like resistors, but like amplifiers.

You increase the voltage in order to increase the current. You increase the current because current is, by definition, the rate at which electrical charge moves. So the more current, the faster you can charge and discharge the transistor drains.
Interesting, thanks for the explanation!
 
  • Like
Reactions: dgdosen

jeanlain

macrumors 68020
Mar 14, 2009
2,461
954
Why wouldn’t Apple build their own 3D renderer…? When you consider their ‘long game’ approach to most things, it seems logical that they would put some effort into doing something that takes advantage of and shows off what their GPU can really do.
Who is going to use a renderer that only works on Macs?
Unity runs decently on macOS, but most Unity games aren't ported anyway...
 

perplx

macrumors member
Aug 2, 2013
65
203
I’m pretty convinced Apple simply chose the knee of the curve. Increasing voltage further to increase clock speed has diminishing returns past the point they chose - in other words, they picked the most energy-efficient point. Why they didn’t ramp up more for the big boxes is a marketing decision, I guess.

The maximum speed at a given voltage is a physical characteristic of the design, and there is a maximum voltage that works before you fry the chip, so everything that everyone said was right.

I think it's a decision to have an extremely simple CPU lineup: every M1 from the iPad Air to the M1 Ultra has the same single-core speed, and they just add CPU and GPU cores going up.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I think it's a decision to have an extremely simple CPU lineup: every M1 from the iPad Air to the M1 Ultra has the same single-core speed, and they just add CPU and GPU cores going up.

Well, I guess, but changing the voltage (or binning the processors by performance) doesn’t complicate the CPU lineup. It’s the exact same chips.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
Well, I guess, but changing the voltage (or binning the processors by performance) doesn’t complicate the CPU lineup. It’s the exact same chips.
It would complicate the product lineup a bit, but it would also provide options for differentiation and creating tiers, so in itself that might even out, or even be in favour of offering a range of frequencies. However, and this is what I think clinches the deal for Apple, it would also mean having to cater to a range of power draws and cooling requirements design-wise.

And moving beyond the power-draw "knee" would also cost them in efficiency, which is arguably their strongest sales argument (along with design). A product like the MacBook Air (which has sold really well) is compelling in large part because it is performant, fanless, and has great battery life. Compromising the last two factors for an increase in performance would arguably make the product less attractive to its target demographic. The MacBook Pros (supply-chain sources anticipated 2 million produced in Q1 2022) are likewise appreciated for their combination of performance, battery life and low operational noise. Would they sell better with the power draw, battery life and noise levels of their old Intel versions? Would they be better products for their users? I very much doubt it. I even doubt that most owners of the new Studio systems (no battery concerns) would be happy to trade quiet operation for a bit more performance.

So for Apple, trying to create product tiers based on processor frequencies would not only create issues with form factors, but also seems unlikely to be a successful business approach.

I don't find their choice strange at all, really, and I'm strongly in favour of it as a user.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Nonsense.
The Nvidia GeForce 4 Ti and ATI Radeon 9700 Pro, the fastest GPUs of 2001-2002, consumed around 40-50 watts. The PowerBook G4 had the ATI Radeon 9000, which was probably the fastest (relative to contemporary standards) graphics in a Mac notebook ever, and it consumed less than 30W.

GPUs as we know them are fairly new devices. They really took off in the early 2000s, and as 3D gaming became a huge industry, GPUs got faster, bigger and hotter. The GeForce 6800 Ultra (2004) was 90W, the 9800 GTX (2008) was 140W, etc.
 

jjcs

Cancelled
Oct 18, 2021
317
153
The Nvidia GeForce 4 Ti and ATI Radeon 9700 Pro, the fastest GPUs of 2001-2002, consumed around 40-50 watts. The PowerBook G4 had the ATI Radeon 9000, which was probably the fastest (relative to contemporary standards) graphics in a Mac notebook ever, and it consumed less than 30W.

Your frame of reference is consumer-grade low-end hardware, not "the most powerful GPU" at the time, especially if we're discussing hardware transform and lighting. The huge gaming market just made the technology cheaper.

Power consumption for consumer-grade stuff was limited by the bus.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Your frame of reference is consumer-grade low-end hardware, not "the most powerful GPU" at the time, especially if we're discussing hardware transform and lighting. The huge gaming market just made the technology cheaper.

Power consumption for consumer-grade stuff was limited by the bus.

OK. What was the non-“low-end” GPU hardware of 2002?
 