
Confused-User

macrumors 6502a
Oct 14, 2014
854
988
In many office jobs, people connect cheap laptops to multiple cheap monitors. They don't need fancy computers, and even the MBA is high-end from their perspective. They just need to have several documents visible at once.
I'm skeptical about this claim that multi-monitor setups are common, but I have no data. Do you? I have only ever seen them be common in a few verticals, like trading, but that's not data, that's anecdote.
Tight integration means making compromises, and the same compromise never works for everyone. Multi-monitor support is a low-end requirement, but it's particularly expensive for Apple, due to their insistence on very high resolutions and minimizing power consumption.
I dunno how expensive in die area it would be to support a third display, but apparently it's been too expensive so far. I'd love to see Apple support a third screen with the base M3, but I'm not holding my breath.
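A quick back-of-the-envelope shows what each extra display costs just in scan-out traffic (the resolutions and refresh rates below are illustrative assumptions, not Apple's actual display-engine specs):

def scanout_gbps(width: int, height: int, hz: int, bytes_per_pixel: int = 4) -> float:
    # Raw framebuffer bandwidth needed just to refresh one display, in GB/s.
    return width * height * hz * bytes_per_pixel / 1e9

# Apple-style high-DPI panel vs. a cheap office monitor (assumed specs):
print(scanout_gbps(5120, 2880, 60))  # ~3.5 GB/s for a 5K/60 display
print(scanout_gbps(1920, 1080, 60))  # ~0.5 GB/s for a 1080p/60 display

And that's before compositing; each supported display also presumably needs its own display-engine block on the die, which is where the cost would live.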
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
For what it's worth, I've seen users here who have developed games state that macOS isn't really even a secondary priority, because of the low ROI.

Which makes sense, since Apple's apparent strategy for games is to make porting easier (a la GPTK).

Porting is always going to be more expensive than multiplatform development from the start, simply because when you develop for multiple platforms up front you are less likely to make choices that will be problematic later on. But that requires careful planning and a lot of expertise, so barely anyone is doing it.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
In many office jobs, people connect cheap laptops to multiple cheap monitors. They don't need fancy computers, and even the MBA is high-end from their perspective. They just need to have several documents visible at once.

I think Apple intends this niche to be covered by the Mac mini.
 
  • Like
Reactions: Xiao_Xi

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
I think Apple intends this niche to be covered by the Mac mini.
I'm no expert on business strategy, but that seems like a boneheaded decision (if true).

A laptop is far more flexible than a small-form-factor computer. Sure, one could argue that it's just as portable as a laptop, but it's tethered to an outlet and has no built-in screen.

Anecdotally, people at my job carry around PC laptops specifically because they can access files, schematics, and emails without being tied to one spot. And when at their desk, they plug into a dock (something which, to my understanding, is common in the workplace today). And I can't imagine Apple is any different in that regard.
 

dugbug

macrumors 68000
Aug 23, 2008
1,929
2,147
Somewhere in Florida
I'm no expert on business strategy, but that seems like a boneheaded decision (if true).

A laptop is far more flexible than a small-form-factor computer. Sure, one could argue that it's just as portable as a laptop, but it's tethered to an outlet and has no built-in screen.

Anecdotally, people at my job carry around PC laptops specifically because they can access files, schematics, and emails without being tied to one spot. And when at their desk, they plug into a dock (something which, to my understanding, is common in the workplace today). And I can't imagine Apple is any different in that regard.

Well, this is only a limitation of the MacBook Air. They should address it, IMHO.

-d
 
  • Like
Reactions: MRMSFC

caribbeanblue

macrumors regular
May 14, 2020
138
132
Don't expect too much as long as it's an SoC. The M1 and M2 already prove that the performance gains, especially for the GPU, are quite low, as the M2 Ultra is only at RTX 3060 Ti level. They are developing a new chip with TSMC 3DFabric, but that won't be available until 2025.
In the attachment is how the M1 and M2 compare to the latest RDNA3 iGPU from AMD.
The performance you see in games that are Rosetta-dependent, and whose API calls are translated through two different layers, is not representative of Apple's GPU performance. The M2 Ultra can actually be 2x faster than the M1 Ultra, due to the resolution of some kind of bottleneck in its design (it seems to be most noticeable in 3D animation software), and this is seen not only on the Ultra but across the entire M-series lineup. The M2 Ultra's Blender performance looks to be about on par with an RTX 3090 with OptiX on. What was your point again?
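As a rough illustration of why stacked translation layers hurt multiplicatively (the overhead factors below are made-up assumptions, not measured Rosetta or API-translation costs):

def effective_fps(native_fps: float, layer_overheads: list[float]) -> float:
    # Each translation layer multiplies frame time by (1 + its overhead).
    fps = native_fps
    for overhead in layer_overheads:
        fps /= 1 + overhead
    return fps

# Assumed 20% x86->ARM cost and 25% API-translation cost:
print(effective_fps(100.0, [0.20, 0.25]))  # ~66.7 fps left from a native 100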

Credit:
1st image: Geekerwan
2nd image: Linus Tech Tips
3rd image: Techgage
 

Attachments

  • a.jpeg
  • b.jpeg
  • Blender 2.90 Classroom render-time chart (Cycles, CUDA and OptiX, NVIDIA GeForce RTX 3090)
Last edited:
  • Haha
Reactions: sunny5

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
In the attachment is how the M1 and M2 compare to the latest RDNA3 iGPU from AMD.
The performance you see in games that are Rosetta-dependent, and whose API calls are translated through two different layers, is not representative of Apple's GPU performance. The M2 Ultra can actually be 2x faster than the M1 Ultra, due to the resolution of some kind of bottleneck in its design (it seems to be most noticeable in 3D animation software), and this is seen not only on the Ultra but across the entire M-series lineup. The M2 Ultra's Blender performance looks to be about on par with an RTX 3090 with OptiX on. What was your point again?
[Image: benchmark bar chart, Screenshot 2023-10-08]

lol, you are only showing the poor performance of Apple Silicon, and the M2 Ultra is already much slower than the RTX 4090.

What is your point again?
 

jeanlain

macrumors 68020
Mar 14, 2009
2,461
954
lol, you are only showing the poor performance of Apple Silicon, and the M2 Ultra is already much slower than the RTX 4090.

What is your point again?
Can you give your source or is it too much to ask? You keep asserting stuff and posting bar charts that mean nothing since we don't know the test conditions. It's hard to take you seriously.
 

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
I know TFLOPs don't translate into fps, but the Z1 has like 3 times the TFLOPs of the M2. Something went wrong with the benchmark.
I don't think it is at all clear that the Z1 Extreme can maintain its max TFLOPs in games or benchmarks. It also depends on whether or not the workload is memory-limited; the Z1 has like half the memory bandwidth of the M2, so that is going to limit it.
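A roofline-style sketch makes the point concrete. The peak-TFLOP numbers below are rough public figures, and the bandwidth ratio just follows the claim above (Z1 at about half the M2's ~100GB/s), so treat it as illustration only:

def attainable_tflops(peak_tflops: float, bw_gbs: float, flops_per_byte: float) -> float:
    # Throughput is capped by the lower of the compute roof and the bandwidth roof.
    bandwidth_roof = bw_gbs * flops_per_byte / 1000.0  # GB/s * FLOP/byte -> TFLOPs
    return min(peak_tflops, bandwidth_roof)

# name, assumed peak TFLOPs, assumed bandwidth in GB/s
for name, peak, bw in [("Z1 Extreme", 8.6, 50.0), ("M2", 3.6, 100.0)]:
    # At a modest 20 FLOPs/byte, both chips sit on the bandwidth roof:
    print(name, attainable_tflops(peak, bw, flops_per_byte=20.0))  # Z1: 1.0, M2: 2.0

Once both are bandwidth-bound, the chip with more bandwidth wins regardless of its paper TFLOPs.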
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I don't think it is at all clear that the Z1 Extreme can maintain its max TFLOPs in games or benchmarks. It also depends on whether or not the implementation is Memory limited or not, the Z1 has like half of the memory bandwidth of the M2 so that is going to limit it.
The RDNA3 TFLOPs figure assumes you are able to dual-issue FP32 across the SIMDs. Most games/benchmarks don't, so you are leaving like "half the performance" on the table (so to speak).
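For concreteness, here's the arithmetic behind that (12 CUs and a ~2.8GHz boost are approximate Z1 Extreme figures; the per-CU layout is standard RDNA3):

def rdna3_tflops(cus: int, clock_ghz: float, dual_issue: bool) -> float:
    lanes = cus * 64  # two SIMD32 units per CU = 64 FP32 lanes
    flops_per_lane_per_clock = 2 * (2 if dual_issue else 1)  # FMA counts as 2 FLOPs
    return lanes * flops_per_lane_per_clock * clock_ghz / 1000

print(rdna3_tflops(12, 2.8, dual_issue=True))   # ~8.6 TFLOPs, the marketing number
print(rdna3_tflops(12, 2.8, dual_issue=False))  # ~4.3 TFLOPs when code can't dual-issue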
 
  • Like
Reactions: name99 and bcortens

playtech1

macrumors 6502a
Oct 10, 2014
695
889
I'm skeptical about this claim that multi-monitor setups are common, but I have no data. Do you? I have only ever seen them be common in a few verticals, like trading, but that's not data, that's anecdote.

I dunno how expensive in die area it would be to support a third display, but apparently it's been too expensive so far. I'd love to see Apple support a third screen with the base M3, but I'm not holding my breath.
My anecdote would be that, peering into the many glass office blocks in London, a dual-monitor setup seems to be the norm (as it is in my office, which is not a finance or trading business).

Not that any of these offices use Macs, of course; typically HPs or Dells, with the odd Lenovo.
 
  • Like
Reactions: MRMSFC

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Yeah, it's difficult to see the 'A/Si family' expanding in any other direction in the near term. It's interesting, if this was the intent, that they wanted it known just after WWDC.

In the talk, Srouji describes how the Watch SoC was done at one of the Israeli R&D centers. The Vision Pro has the R1 chip. You don't need much speculation to see there are 'cousins' of the M-series sequence, or that there is more than just the plain M1 (Pro, Max, Ultra). He also said these chips get into the hands of 'millions and millions'. An "Extreme" is not going to get into the hands of 'millions and millions' at all.

Apple has 'A', 'M', 'R', and 'S' series of SoCs. That is an extended family; no need to make up something else. And there's a pretty good chance Apple won't expand much from that breadth at all. [Again, the video talks about hiring selectively, and for stuff that makes Apple products unique, coupled with Apple's re-use of chips across systems: 'M' in both Macs and iPads; 'A' in phones, iPads, and the Apple TV; 'S' in the Watch and HomePods; etc. The 'R' is probably only temporarily an outlier (if the boondoggle Apple Car showed up, good chance it would get tossed into it).]
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Well, their newer patents do describe quad-chip arrangements, so who knows…

Patents are pretty far from the concrete issues he was talking about for tape-out testing.

And the packaging doesn't have to be the most ginormous packaging possible going forward, either.

Apple has filed 'Car' patents too, but I wouldn't advise anyone to hold their breath waiting on one of those to arrive either.

P.S. He also makes a comment in the video to the effect that "Large Language Models become smaller language models" over time. Most of the commentary chasing the "Extreme" SoC is trying to choose mega-sized memory footprints. That comment suggests that Apple is not chasing extremely huge data-footprint tasks.

Somewhat doubtful Apple is chasing the 'biggest package ever made by anybody' crown.
 
Last edited:
  • Like
Reactions: Chuckeee

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
LPDDR5X should allow up to 1TB with an Mn Extreme configuration (sixteen 64GB packages), and implementing in-line ECC should still allow up to 960GB...
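Those numbers are consistent with in-line ECC reserving 1/16 of each package for check bits (the exact ratio is an assumption here; schemes vary):

packages = 16
gb_per_package = 64
raw_gb = packages * gb_per_package  # 1024 GB raw, i.e. the 1TB figure
usable_gb = raw_gb * 15 // 16       # 960 GB left after the ECC reservation
print(raw_gb, usable_gb)            # 1024 960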
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
Can you give your source or is it too much to ask? You keep asserting stuff and posting bar charts that mean nothing since we don't know the test conditions. It's hard to take you seriously.
This is from Cinebench R24, and it runs on both CPU and GPU. It's great proof of why AS is slow.
 

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
I thought we didn't like Cinebench because it wasn't optimized for Apple Silicon?
The Redshift GPU renderer (which powers the Cinebench GPU test) appears to be optimized almost exclusively for Nvidia. My point is that while sunny5 posted this benchmark chart to try to show how bad ASi Macs are at GPU compute, the Radeon cards should (based on theoretical compute capability) rank far higher than they do, so this benchmark isn't actually capturing the capabilities of the GPUs under test.

If your workload involves the Redshift renderer, then yes, this is a useful benchmark; otherwise, not so much.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
And the RTX 4080 is 50 TFLOPs compared to 30 on the M2 Ultra, but it seems to get a similar score to it on GFXBench, so I can believe it.
For traditional 3D rasterization, Apple's TBDR GPU architecture has an inherent advantage over Nvidia's tiled IMR: much less overdraw. A lot of the RTX 4080's higher theoretical performance is being consumed drawing pixels you don't end up seeing.

This varies by scene. For example, scenes with a lot of transparent or translucent surfaces tend to favor brute force, since a TBDR GPU's efficiency gains are greatest when fully opaque polygons block out everything behind them.
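A toy model of the overdraw gap (purely illustrative, not how either GPU actually schedules work): an IMR shades every fragment that survives the depth test in draw order, while a TBDR resolves visibility per tile first and shades each opaque pixel once.

import random

def shaded_fragment_counts(depths_per_pixel):
    imr = 0
    for depths in depths_per_pixel:  # fragments arrive in submission order
        nearest = float("inf")
        for d in depths:
            if d < nearest:          # passes the depth test, so it gets shaded
                nearest = d
                imr += 1
    tbdr = len(depths_per_pixel)     # TBDR: one shade per opaque pixel
    return imr, tbdr

random.seed(1)
# 100k pixels, each covered by 4 opaque fragments in random depth order:
pixels = [[random.random() for _ in range(4)] for _ in range(100_000)]
imr, tbdr = shaded_fragment_counts(pixels)
print(imr / tbdr)  # ~2.08x shading work for the IMR at depth complexity 4

With front-to-back sorting or a depth prepass, an IMR can claw much of this back, which is part of why the gap varies so much by engine and scene.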
 