Wouldn't you mean 3 displays? M1/M2 technically support 2 displays already right?
One external display only. Sure, if you also count the internal display then it's two, but I meant external ones.
> BTW, has it been mentioned that since the A17 has external display support, the M3 is likely to support two displays?

Very interesting. I hadn't thought about the implications of the A17 driving an external monitor - the iPads already do, but those are Mx iPads.
But it amuses me to no end that some random poster in an Internet forum thinks they are a lot smarter than Apple's SoC architects and designers tho.
> Only smarter than the current sub-par architects/designers, not all the good ones that left Apple... ;^p

If only Apple had some money, or a product that was attention grabbing. I'm sure they'd be able to get some good chip designers.
> Very interesting. I hadn't thought about the implications of the A17 driving an external monitor - the iPads already do, but those are Mx iPads.
Why do you think this has any implication for the M3 though? While Apple will surely reuse various blocks from the A17, I don't know why you'd think the inclusion of external display support on the A17 says anything about the M3. Perhaps it's the other way around - they've taken the display subsystem from the M2, with its support for external displays, and grafted it into the A17?
> One external display only. Sure, if you also count the internal display then it's two, but I meant external ones.

I only mentioned it because I thought the M1/M2 Mini support two displays, with the obvious caveat of them having no internal one. I can see how that would be pedantic.
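As an aside, you can verify the built-in vs. external split on any Mac with a couple of CoreGraphics calls; a minimal sketch, nothing chip-specific assumed:

```swift
import CoreGraphics

// Ask CoreGraphics how many displays are active, then fetch their IDs.
var count: UInt32 = 0
CGGetActiveDisplayList(0, nil, &count)                  // first call: count only
var ids = [CGDirectDisplayID](repeating: 0, count: Int(count))
CGGetActiveDisplayList(count, &ids, &count)             // second call: fill the list

// CGDisplayIsBuiltin returns non-zero for the internal panel.
let external = ids.filter { CGDisplayIsBuiltin($0) == 0 }
print("Active displays: \(count), external: \(external.count)")
```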
> Whatever Apple may do, be assured their $200B iPhone business always comes first.

I don't think it's a stretch to think Apple's engineers attempted to make an architecture that could be used for both purposes, and downclocked the phone version.
So... was Srouji hinting at the 4x Max (Extreme) at the 13m50s mark?
Well, their newer patents do describe quad-chip arrangements, so who knows…
> Don't expect too much as long as it's an SoC. M1/M2 already prove that the performance gains, especially for the GPU, are quite low, as the M2 Ultra is only at RTX 3060 Ti level. They are developing a new chip with TSMC 3DFabric, but that won't be available until 2025.

This is a good comparison that is not talked about much: the M2 Ultra against an RTX 3060 Ti. The M2 Pro and Max would have to be in the 20xx series or lower. While Apple has done a good job with its chips, the graphics are still nothing compared to the current generation of dedicated GPUs. They are very focused on the media side, and less on the gaming or 3D side.
I don't think it is a good comparison, because it depends on what software you're comparing. In Blender, for example, in GPU compute capability (excluding OptiX) the M2 Ultra is pretty close to a 4080, not a 3060 Ti. Apple is very focused on the 3D rendering side and has really put a lot of effort into optimizations for that market. Apple seems to work more closely with Blender than they do with game companies, for example. We'll have to see how the M3 Ultra performs with its hardware RT acceleration, but I don't doubt that it will pull Apple forward further.
It really, really depends on optimization. Even before Apple switched exclusively to their own silicon for GPUs, games ran better on Windows via Boot Camp than they did running natively in macOS. Unless/until Apple gets truly serious about ensuring games are just as well optimized for macOS as they are for Windows, they are always going to perform worse. This, however, does not mean that the problem lies with Apple's GPU cores themselves.
I would also argue Apple isn't really aiming at the top-flight discrete GPUs, which consume more than twice as much power as a whole M2 Ultra does (never mind just its GPU portion). I would like Apple to build a quad-chip part or move to a tile-based chip packaging technology, but as much as I might want this, I don't know if the market for top-of-the-line discrete GPUs is big enough to get Apple to build something like that.
I don't want to go back to discrete GPUs and CPUs on separate packages, because that will fragment optimization targets and introduce other inefficiencies (related to unified vs. non-unified memory). If Apple wanted to, they could build an absolutely massive tiled SoC with a monstrous GPU and CPU (still in the SoC package); they just don't seem to want to.
> I think, and correct me if I am wrong, that Apple is focused on having an overall optimized solution in place, rather than a power-hungry GPU and a weak CPU. I was thinking about the idea of a separate GPU, but your point is correct: it would split up the resources and then become a bit of a mess if they had Gx, Gx Pro, Gx Max, Gx Ultra chips and then every combo of how those would work in a system.

I agree that that appears to be their current approach. I think they could build a different type of tiled architecture, though: they could split their chips into compute tiles, an IO/SRAM tile, and GPU tiles, and then mix and match the tiles to create SoCs with huge GPUs and small CPUs, or huge CPUs and small GPUs. However, I just don't think they have any interest in this kind of approach, as the primary markets that need the extreme options are perceived by Apple as too small to bother with.
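Purely to illustrate the mix-and-match idea (the tile types and configurations below are hypothetical, not anything Apple has announced), a quick Swift sketch:

```swift
// Hypothetical tile catalog - illustrative only, not an actual Apple design.
enum Tile {
    case compute(pCores: Int, eCores: Int)
    case gpu(cores: Int)
    case ioSRAM(memoryChannels: Int)
}

// An SoC is just a bag of tiles; "specs" sums up what a given mix provides.
struct SoC {
    let name: String
    let tiles: [Tile]

    var specs: (cpu: Int, gpu: Int, channels: Int) {
        tiles.reduce((0, 0, 0)) { acc, tile in
            switch tile {
            case .compute(let p, let e): return (acc.0 + p + e, acc.1, acc.2)
            case .gpu(let cores):        return (acc.0, acc.1 + cores, acc.2)
            case .ioSRAM(let ch):        return (acc.0, acc.1, acc.2 + ch)
            }
        }
    }
}

// Same catalog, two very different machines: GPU-heavy vs. CPU-heavy.
let renderBox = SoC(name: "GPU-heavy", tiles: [.compute(pCores: 8, eCores: 4),
                                               .gpu(cores: 76), .gpu(cores: 76),
                                               .ioSRAM(memoryChannels: 16)])
let buildBox  = SoC(name: "CPU-heavy", tiles: [.compute(pCores: 16, eCores: 4),
                                               .compute(pCores: 16, eCores: 4),
                                               .gpu(cores: 38),
                                               .ioSRAM(memoryChannels: 16)])
print(renderBox.specs, buildBox.specs)
```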
What Apple showed off on the new A17 Pro was hardware ray tracing, albeit at a lower frame rate (it is a mobile chip, after all). Maybe this will be something that is part of the M3 line.
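If you want to see what's exposed today, Metal will tell you whether the ray tracing API is available on a given device; a minimal sketch (note that supportsRaytracing reports API availability, which on current M1/M2 runs on compute shaders rather than dedicated RT hardware):

```swift
import Metal

// Query the default GPU and ask about ray tracing support.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device found")
}
print("GPU: \(device.name)")
print("Ray tracing API supported: \(device.supportsRaytracing)")
// Family checks give a rough generation: .apple8 covers the A15/M2 class.
print("Apple8 family: \(device.supportsFamily(.apple8))")
```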
> This is a good comparison that is not talked about much: the M2 Ultra against an RTX 3060 Ti. The M2 Pro and Max would have to be in the 20xx series or lower. While Apple has done a good job with its chips, the graphics are still nothing compared to the current generation of dedicated GPUs. They are very focused on the media side, and less on the gaming or 3D side.
However, what would give a good view of this is the game-dev side: what are their thoughts on Apple Silicon, and is it something viable to use?
Apple can create all the tools and hardware it wants, but if the developers go a different route, then it is not a good solution.
> They also run their GPUs at very low frequencies of just 1.3-1.4 GHz, even on systems that could handle the extra heat.

Which is in line with the 20/30 series "base" clocks.
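For a sense of scale, the usual back-of-envelope FP32 math shows why the clock matters; the 76-core and 128-ALU figures below are the commonly reported M2 Ultra numbers, so treat this as an estimate:

```swift
// Theoretical FP32 throughput: ALUs × 2 ops per cycle (FMA) × clock.
func tflops(gpuCores: Int, alusPerCore: Int, clockGHz: Double) -> Double {
    Double(gpuCores * alusPerCore) * 2.0 * clockGHz / 1_000.0
}

// M2 Ultra: 76 GPU cores × 128 FP32 ALUs at ~1.4 GHz
print(tflops(gpuCores: 76, alusPerCore: 128, clockGHz: 1.4))  // ≈ 27.2 TFLOPS
// Doubling the clock would double this on paper - at a steep power cost.
```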
> Multi-chip technology also has disadvantages (e.g. power consumption). Apple needs to find a way to balance economic and logistic factors against the nature of the products they ship. E.g. it probably doesn't make much business sense for them to build a faster dedicated desktop chip, since most of their business is mobile. But they are exploring ways to build more performant chips. One patent (published some time ago) mentions stacking two dies, potentially using different node sizes, to implement different components. E.g. compute could go on a 3nm die, and most of the uncore, including memory controllers and caches, could be implemented on a cheaper 5nm die. That could be a nice solution that would let them optimally use the higher density of the new nodes while managing their higher cost.

Yeah - I was thinking they could build multi-chip just for desktops and keep their monolithic approach for mobile, but I don't think the desktop market is perceived to be big enough to warrant such an approach.
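To make the cost trade-off concrete, a toy model; the wafer prices and die areas below are invented placeholders, and it ignores yield and edge loss entirely:

```swift
// Toy die-cost model for the stacking idea. All numbers are assumptions.
let waferArea = 70_685.0  // mm² of a 300 mm wafer (π × 150²)

func dieCost(dieAreaMM2: Double, waferPrice: Double) -> Double {
    waferPrice / (waferArea / dieAreaMM2)  // ignores edge loss and defect yield
}

// Monolithic: everything on an (assumed) $20k 3nm wafer.
let monolithic = dieCost(dieAreaMM2: 150, waferPrice: 20_000)
// Stacked: compute shrinks onto 3nm; uncore stays on an (assumed) $13k 5nm wafer.
let stacked = dieCost(dieAreaMM2: 90, waferPrice: 20_000)
            + dieCost(dieAreaMM2: 80, waferPrice: 13_000)
print(monolithic, stacked)  // with these placeholders, the split comes out cheaper
```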
> But many developers will probably choose to use DX12 instead, simply because that's where the money is. And that's the core of the problem IMO.

For what it's worth, I've seen users here who have developed games state that macOS isn't really even a secondary priority because of the low ROI.
> The lack of 3rd display support on the current M1/M2 is down to silicon area priorities: Apple determined that this class of chips doesn't need to waste die space on a 3rd display buffer, since these chips mostly go into iPads and MBAs, where you can guess close to 100% of users never attach that many displays.

In many office jobs, people connect cheap laptops to multiple cheap monitors. They don't need fancy computers, and even the MBA is high-end from their perspective. They just need to have several documents visible at once.