That's why I am saying that we should get rid of these labels altogether and instead focus on the performance and implementation characteristics of the processors themselves.

The implementation does materially impact performance; it's hiding your head in the sand to say that it doesn't. There is overlap between the two, and there is more overlap with the latest process technology and fabrication levels than in the past, but there will continue to be a predictable gap at the extremes.

Integrated is an implementation characteristic so it isn't going away.
 
...
How would you even know? Apple never used NUMA architecture in their computers. At any rate, these are things that can be tweaked in.

Not true on the "never". The Mac Pro 2009-2012 used CPU packages with the memory controller built into the package. The dual-package models required 'remote' memory access through QPI to get to the 'other' set of DIMM slots. And yes, there were gaps between macOS and Windows/Linux on dual-package systems from that era. Huge? No. Was macOS the slowest of the pack on large-footprint, highly parallel jobs? Yes.

Frankly, I don't see any way for them to support unified memory on a Mac Pro class computer without adopting some variant of NUMA.

You don't really need unified memory as much as a flat, shared, virtual address space. Some data doesn't really need to move if you have decent, coherent caching. And some data doesn't really need to hop through system RAM on its way from the storage drive to VRAM (a PCI-e drive can do a DMA move to VRAM, or if the drive is too "dumb", a CPU-driven DMA move to VRAM). Read-only textures... it isn't that hard (AMD had a pro product at some point with an SSD attached).

Making a SoC powerful enough is not feasible (at least not on higher performance levels). Maintaining the modular design while also keeping unified memory is not feasible. This is why I believe we will see modules that contain CPU + GPU clusters, with shared on-board memory. These devices will be organized into "local groups" of some sort and you will have APIs to discover such groups and schedule work on them. Transferring data between groups is going to be slower.

If you need 10x as many GPU cores you shouldn't be saddled with having to take 10x as many CPU cores as well. That artificial coupling is quite similar to the "painted into a corner" situation Apple got into with the Mac Pro 2013. Unnecessarily coupling things that don't need to be coupled is dubious; the scaling benefit for GPU cores is very different than for general-purpose CPU code.


As opposed to artificial tight couplings, Apple should be looking at a roadmap with CXL or CCIX as they merge in, along with adopting PCI-e v5 or v6. That's what other folks are doing.

[Image: Neoverse-crop-10.png]






Apple already has elements of such an API with their Metal Peer Groups for multi-GPU systems.

Which is entirely between homogeneous AMD GPUs.
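For reference, this is roughly how that existing mechanism surfaces today. A minimal sketch (an assumed helper function, not Apple sample code) that buckets a Mac's GPUs by Metal peer group; only devices sharing a group can exchange data without bouncing through system memory, and today those groups are made up of homogeneous AMD cards:

```swift
import Metal

// Bucket the system's GPUs by Metal peer group.
// A peerGroupID of 0 means the device is not part of any peer group.
func peerGroups() -> [UInt64: [MTLDevice]] {
    var groups: [UInt64: [MTLDevice]] = [:]
    for device in MTLCopyAllDevices() where device.peerGroupID != 0 {
        groups[device.peerGroupID, default: []].append(device)
    }
    return groups
}

for (id, devices) in peerGroups() {
    print("Peer group \(id): \(devices.map(\.name))")
}
```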



How will the Mac Pro talk to GPU boards?

Why do you think Apple developed MPX modules in the first place? They are free to implement whatever interface they see fit. Also check out how they implement Infinity Link connections — future Apple Silicon Mac Pro could use something similar to provide fast data transfer between modules.

First, Apple didn't implement Infinity Fabric links any more than Dell or HP did. It is an AMD implementation. AMD did a semi-custom Fabric width for them that stays inside the parameters of the Fabric.

Second, all data movement between MPX modules and the host system travels via PCI-e. Apple's MPX implementation is entirely synergistic with PCI-e. You can actually put a standard PCI-e card into an MPX bay, in the same "slot" the MPX module fits into. MPX adds a second connector to provision data for Thunderbolt and to get out of the Molex power-cable game. That's MPX.


Of course they would need more robust I/O. Why do you assume that they won't have it? This is not something that was important for the mobile SoC, but it is essential with the Macs.

The one-port-wonder MacBook (where Apple stripped away almost every port). Rumors of a port-less future iPhone. The slot-less Mac Pro 2013. The four-port MBP 13" with one pair of ports provisioned at a different bandwidth than the other pair. ...
... It is Apple. They have a track record of blowing it on I/O in the pro space.

Begrudgingly, more robust workstation-level I/O will probably get pulled into doing a decent Mac Pro SoC, but quite likely Apple is going to move as many systems as it can to the "iGPU only and no more than 4-5 ports" side. The Mac mini and iMac level of port I/O and optional discrete graphics probably will come some time later in 2021.


We already pretty much know that Apple is working on a custom I/O controller that provides better memory isolation to connected devices (it's in the video linked by the OP), and we know that Apple Silicon Macs will support high-performance external devices such as SCSI controllers.

IOMMU has nothing to do with this. They need PCI-e controllers. More than a couple of memory controllers feeding DIMM slots. More than two USB ports. Etc.




You don't need dedicated GPU memory to have a high-performance GPU. As Apple's graphics guy Gokhan Avkarogullari says "bandwidth is the function of ALU power". You need a certain amount of bandwidth to increase your GPU performance. And there are multiple ways of getting there. You can use faster system RAM, more caches, memory compression, etc. LPDDR5 already offers over 100 GB/s of bandwidth — this is competitive with GDDR5. For higher-end applications, Apple will need to use something faster or utilize more channels. Since they control the platform, there are many paths they can take here.

The leading-edge AMD and Nvidia GPUs all use GDDR6 (and GDDR6X), so LPDDR5 is showing up behind the curve. Apple isn't going to match the ALU power in their top-end systems because they don't match the bandwidth.
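For a rough sense of the gap being argued here, a back-of-the-envelope comparison; the bus widths and data rates below are generic assumptions (a 128-bit LPDDR5-6400 package versus a 256-bit GDDR6 card at 14 Gbps per pin), not figures for any particular product:

```swift
// Peak theoretical bandwidth = data rate per pin * bus width in bits / 8 (bytes)
let lpddr5 = 6.4e9 * 128.0 / 8.0 / 1e9   // 128-bit LPDDR5-6400       ≈ 102 GB/s
let gddr6  = 14.0e9 * 256.0 / 8.0 / 1e9  // 256-bit GDDR6 at 14 Gbps  ≈ 448 GB/s
print(lpddr5, gddr6)                     // 102.4 448.0
```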

And let's not forget that Apple GPUs need significantly less bandwidth to provide the same level of performance.

Not on everything. Apple has some vertex-removal and tiling tricks that lower bandwidth, but if transcoding 10K REDRAW they don't have anything special. Finite element analysis? Nope. Quantum chemistry modeling? Nope.


Why pay for features that don't make any sense? No current GPU has good support for FP64 anyway,

Chuckle. You know how many systems on the Top500 list do not support FP64? Zero.
How many GPUs in Nvidia's pro lineup don't support FP64? Zero.
How many GPUs in AMD's pro lineup don't support FP64? Zero.

Metal doesn't support FP64. No Apple GPU has FP64 support (which makes the absence in Metal not surprising at all). Apple's Cupertino Kool-Aid about how Metal is a complete replacement for OpenCL says you don't "need" FP64. The gamer tech-porn crowd doesn't need FP64. But it is used every day by some pro users.

not that it's needed. If you need more precision — roll your own extended precision data structure. It's still going to be faster than native FP64 on most hardware

That is nonsense. Just more Kool-Aid: a software-implemented data type being faster than fixed-function logic.



and you can optimize it for your need. Memory controllers — why have a separate one on the GPU if you could have a multi-channel one on the SoC level? Things like multi-monitor support can be easily added depending on the platform needs.

Large-resolution, multiple-monitor support requires bandwidth. The SoC has to be able to walk and chew gum at the same time. If you put 4-5 heavy bandwidth consumers on a single SoC, you eventually run out of bandwidth. It will work great in "one at a time" synthetic benchmarks, but if you're trying to do lots of work it will probably throttle.
 
Been using Macs so long, we seem to have gone full circle back to the PPC days?

Not a bad thing, as my iMac G3 233 wiped the floor with my Quadra 840AV.
 
So, it's official: no dGPUs.

At least, not in the traditional sense that we're used to with 68K, PowerPC, and Intel Macs. Apple is likely going to expand the Afterburner family for the Mac Pro to add co-processors that focus on specific sets of graphical tasks. But it sounds like, so long as software is truly optimized for Apple Silicon and Metal, we'll have performance on all Mac models that rivals said traditional dGPU-based Macs, and not just on the ones fortunate enough not to be stuck with only an Intel iGPU.

For now. There is nothing in that video that says that will be the case forever.

The DTK had no possibility of a third-party GPU (nothing inside, and no Thunderbolt, so no external one possible either).
The first Mac that transitions is very likely in the same boat internally. Half the current Mac lineup has no dGPU. Developers really need to optimize their code for Apple's GPU to perform well. By taking away distractions that don't matter, folks should get more work done, sooner rather than later.

So if the vast majority of the first systems don't have it, why would Apple spend tons of time on it at WWDC 2020? There will be WWDC presentations in 2021 and 2022 (and after).

P.S. Third-party GPU drivers are probably going to take much longer to do.

No, that video all but said "here is our implementation of GPUs with Apple Silicon in all Apple Silicon Macs". And if you watch the other videos that go into more depth about Metal performance optimization on Apple Silicon (Mac) GPUs, it further directly implies that they're not doing dGPUs, at least not in the traditional sense. But we keep thinking about it in terms of current (and previous) Macs. If developers optimize for Apple Silicon, we may get so much performance out of their GPUs that a discrete GPU might be moot.

They may change course and have some form of dGPU implementation, but that won't be for a long while, if ever. I suspect this is their strategy for the ARM64 architecture. If the advancement of graphics on the iPad in the past ten years of its existence has proven anything, it's that they'll do just fine!

There is a rumor of a dGPU code-named "Lifuka". Whether this is a separate package or an optional GPU chipset that is part of the Apple Silicon SoC remains to be seen. I would expect any future ASi Mac Pro to have separate PCIe boards with some kind of dGPU or accelerator like the Afterburner.
Lifuka, as I understood it, wasn't the codename of a supposed dGPU, but rather the codename of whatever GPU is launching with the 24" Apple Silicon iMac. Mind you, that came in a batch of rumors that all didn't seem terribly likely. I do believe that the graphics performance of whatever GPU they toss into the Apple Silicon replacement for the Intel 21.5" iMac will be enough to make customers not feel like they're getting a downgrade on that system (at least as far as non-Vega options are concerned).

Been using Macs so long, we seem to have gone full circle back to the PPC days?

Not a bad thing, as my iMac G3 233 wiped the floor with my Quadra 840AV.

The only way in which this is a return to anything that came before is that, in ditching Intel, we are seeing the end of native booting of other x86 operating systems, the end of Boot Camp, the end of x86 OS virtualization, the end of Hackintosh clones, and are going back to Apple using a closed architecture that will see its OS only run on the processors it releases in its hardware.

For all other intents and purposes, this is not going full circle back to anything other than the last time we had a processor architecture transition.
 
I have no problem with that, as I've never used Windows once in almost 30 years of using Macs.

I really enjoy the stability and ease of use of iOS and am quite happy to use a closed system; if it ain't broke, don't fix it. The last time I had a problem with my Mac I had to type in a load of archaic Unix commands, not what I expect from a Mac, whereas on iOS you just do a reboot and voilà!

IMO, the future looks bright.
 
Not true on the "never". The Mac Pro 2009-2012 used CPU packages with the memory controller built into the package. The dual-package models required 'remote' memory access through QPI to get to the 'other' set of DIMM slots. And yes, there were gaps between macOS and Windows/Linux on dual-package systems from that era. Huge? No. Was macOS the slowest of the pack on large-footprint, highly parallel jobs? Yes.

You are right, I forgot about the multi-CPU Mac Pro. Still, that is not the kind of system I am describing.


You don't really need unified memory as much as a flat, shared, virtual address space. Some data doesn't really need to move if you have decent, coherent caching. And some data doesn't really need to hop through system RAM on its way from the storage drive to VRAM (a PCI-e drive can do a DMA move to VRAM, or if the drive is too "dumb", a CPU-driven DMA move to VRAM). Read-only textures... it isn't that hard (AMD had a pro product at some point with an SSD attached).

And now you've made the programming model much more complicated... should the driver guess what your intended resource use is, or are you adding 100 different "usage" hints that programmers will ignore anyway?

I can certainly imagine a Metal extension that would allow you to define Metal resources on top of custom memory mappings. This would be exclusive to Apple Silicon and would allow the kinds of applications you describe. Still, it has to be explicit, not implicit.
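For what it's worth, Metal already ships a primitive in this spirit: makeBuffer(bytesNoCopy:) wraps an existing page-aligned virtual-memory region in a buffer without copying it. A minimal sketch under assumed conditions (hypothetical file path, error handling omitted, mapping kept alive for the buffer's lifetime):

```swift
import Metal
import Foundation

let device = MTLCreateSystemDefaultDevice()!

// Map a (hypothetical) asset file into the address space; both the pointer
// and the length handed to bytesNoCopy must be page-aligned.
let fd = open("/tmp/textures.bin", O_RDONLY)
let length = 16 * 1024 * 1024
let mapping = mmap(nil, length, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0)!

// Wrap the mapping in a Metal buffer without copying it.
let buffer = device.makeBuffer(bytesNoCopy: mapping,
                               length: length,
                               options: .storageModeShared,
                               deallocator: { ptr, len in _ = munmap(ptr, len) })
```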


As opposed to artificial tight couplings, Apple should be looking at a roadmap with CXL or CCIX as they merge in, along with adopting PCI-e v5 or v6. That's what other folks are doing.

Maybe what Apple has in mind for Mac Pro is something similar. Or maybe it's quite different. Let's not forget that other folks are targeting affordable and scalable server platforms. Apple doesn't need that much scalability and they don't have to care that much about affordability. They can definitely afford to make their designs more expensive — and still stay with a very healthy margin.

First, Apple didn't implement Infinity Fabric links any more than Dell or HP did. It is an AMD implementation. AMD did a semi-custom Fabric width for them that stays inside the parameters of the Fabric.

I am talking about the physical side connector between the modules that provides the Infinity Fabric link. I can imagine multiple modules connected via similar bridges (in addition to PCI-e) to speed up data transfers.


The one-port-wonder MacBook (where Apple stripped away almost every port). Rumors of a port-less future iPhone. The slot-less Mac Pro 2013. The four-port MBP 13" with one pair of ports provisioned at a different bandwidth than the other pair. ...
... It is Apple. They have a track record of blowing it on I/O in the pro space.

We are talking about professional systems and you bring up the MacBook and the iPhone... Yes, the 13" Pro has some limitations on its Thunderbolt — mainly because of Intel CPU and controller limitations. The MBP line still offers more connectivity in terms of external PCI-e lanes than any other laptop. And while the Mac Pro is not an I/O monster — it doesn't have to be. It has plenty of I/O for its intended purpose.



IOMMU has nothing to do with this. They need PCI-e controllers. More than a couple of memory controllers feeding DIMM slots. More than two USB ports. Etc.

Read between the lines. What they are saying is that they have their own controllers, which are superior to Intel's Thunderbolt controllers. And of course there will be PCI-e. How do you implement Thunderbolt without PCI-e?


The leading-edge AMD and Nvidia GPUs all use GDDR6 (and GDDR6X), so LPDDR5 is showing up behind the curve. Apple isn't going to match the ALU power in their top-end systems because they don't match the bandwidth.

Who says that their top end will use LPDDR5? There are other options out there if you need high bandwidth. My point is that a lower-end Apple Silicon SoC with quad-channel LPDDR5 shouldn't have much issue reaching the performance levels of a mid-range mobile graphics solution.

Not on everything. Apple has some vertex-removal and tiling tricks that lower bandwidth, but if transcoding 10K REDRAW they don't have anything special. Finite element analysis? Nope. Quantum chemistry modeling? Nope.

In terms of performance-per-watt their GPUs are not slower than anything else. For anything video related they will most likely end up faster, since they don't have to copy the data over the narrow bus to start processing it. The iPad Pro already illustrates this. I am not familiar with the other kinds of GPGPU processing you mention, so I don't know what the requirements are. If you need raw bandwidth, then yes, the first generation of Apple Silicon won't fit the bill. Neither will any Intel or AMD GPU in the current mobile line.
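A tiny sketch of that "no copy over the bus" point, using nothing beyond standard Metal: on a unified-memory system a .storageModeShared buffer is directly visible to both the CPU and the GPU, whereas a discrete card typically wants the data blitted into a .storageModePrivate (VRAM-resident) buffer first:

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let count = 1_000_000

// One allocation, visible to both CPU and GPU on a unified-memory system.
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU fills it in place; no staging copy or PCI-e transfer is needed
// before a compute kernel can read the same pages.
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }
```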

Chuckle. You know how many systems on the Top500 list do not support FP64? Zero.
How many GPUs in Nvidia's pro lineup don't support FP64? Zero.
How many GPUs in AMD's pro lineup don't support FP64? Zero.

Man, you are all over the place. Who cares about Top500 systems in this context? We are talking about general applications. Yes, professional GPUs support FP64 (usually at the rate of 1/2 or 1/4 of FP32). But consumer GPUs don't have any meaningful double precision support. Why would Apple need to compete in that area? If you use GPGPU workflows that rely on double precision you are most likely running it on a beefy cluster anyway. This is not Apple's niche. For their system, it makes much more sense to have a consistent programming model. Their decision to cut out double precision from GPUs completely makes perfect sense to me. Focus on the common case. Instead of sacrificing valuable die space to implement a feature that only a few people will ever need, add more single-precision compute units.

That is nonsense. Just more Kool-Aid: a software-implemented data type being faster than fixed-function logic.

On GPUs like Ampere or Navi where double precision operations are 16 or 64 times slower, yes, it does.
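For context, the technique being argued about is the well-known double-float ("df64") trick: a value stored as an unevaluated sum of two single-precision numbers. A minimal sketch of just the addition, written in Swift for readability (on a GPU the same arithmetic would live in the shader); whether it actually beats hardware FP64 depends entirely on how badly the hardware FP64 rate is cut:

```swift
// Double-float: value = hi + lo, with |lo| much smaller than |hi|.
struct DoubleFloat {
    var hi: Float
    var lo: Float

    init(_ x: Float) { hi = x; lo = 0 }
    init(hi: Float, lo: Float) { self.hi = hi; self.lo = lo }

    static func + (a: DoubleFloat, b: DoubleFloat) -> DoubleFloat {
        // Knuth's TwoSum: s is the rounded sum, e the exact rounding error.
        let s = a.hi + b.hi
        let v = s - a.hi
        var e = (a.hi - (s - v)) + (b.hi - v)
        e += a.lo + b.lo
        // Renormalize so the result is again a (hi, lo) pair.
        let hi = s + e
        let lo = e - (hi - s)
        return DoubleFloat(hi: hi, lo: lo)
    }

    var asDouble: Double { Double(hi) + Double(lo) }
}
```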
 
I am talking about the physical side connector between the modules that provides the Infinity Fabric link. I can imagine multiple modules connected via similar bridges (in addition to PCI-e) to speed up data transfers.
I may be wrong, but the side connector seems unable to connect different modules. It just connects two parts of a single module, and I have no idea why it even exists, since I don't see why one would remove that connector. Was there not enough space on the PCB?
[Image: big-MW732AMA.jpg]


EDIT: I guess I was wrong.

[Image: 7w5cxzohtnv41.jpg]
 
The four-port MBP 13" with one pair of ports provisioned at a different bandwidth than the other pair. ...
... It is Apple.

No, that's Intel. The SoC in that generation, made by Intel, didn't have enough PCI-e lanes. Most PC laptops offer one, maybe two, Thunderbolt ports. So your argument makes absolutely no sense.
 
This is most likely about security issues with the Thunderbolt interface. If I understand it correctly, Intel chips' IOMMU is flawed, which allows DMA exploits such as Thunderclap. Apple has most likely designed their own controllers that block these attacks.
.....

Apple has been using Intel's VT-d (IOMMU) to do DMA memory protection for a long while.

"... Intel Virtualization Technology for Directed I/O (VT-d) is a mechanism by which the host can place restrictions on DMA from peripherals
...
We’ve used VT-d to protect the kernel since OS X Mountain Lion in 2012
..."

2012 is likely before any "Mac Apple Silicon" ever went to the drawing board. Apple Silicon has an IOMMU because AMD (and then Intel) beat them to including it in the CPU package (and IBM decades before them).

Second, Thunderclap isn't just about Thunderbolt ports.


"...
Thunderclap – comprising PCIe slots and external Thunderbolt 2/3, USB-C interfaces.

Unfortunately, the researchers point out, it turns out that IOMMUs aren’t as effective as system designers have assumed for a complex web of reasons:

The software side of peripheral DMA interfaces is not implemented by carefully hardened kernel system-call code tested by decades of malicious attacks and fuzzing, but by thousands of device drivers that have been designed around historic mutual trust, hardware convenience, and performance maximization.
..."

Thunderclap is really about how much you "trust" a hostile peripheral (Thunderbolt or standard PCI-e card) once you've plugged it into your system. Apple moving to completely get rid of kernel extensions is the real deep-seated solution to Thunderclap. The Apple Silicon IOMMU isn't going to do it by itself either.

The Apple Silicon "read-only kernel code after loading" mode probably also helps somewhat after you have let some hostile peripheral into the limited address space.


Apple System Extensions, where kernel extensions and hardware interactions are kicked out of the full span of the kernel, are the path to limited-trust virtual memory sharing.


"...
A System Extension runs in userspace.

Like other apps, it has to follow the rules of the System Security Policy.

Unlike other apps, System Extensions are granted special privileges to do special jobs. ... "

How Apple would give a system extension up in "user space" full access to the hardware, without a major performance penalty and with limited overlap with the kernel, but without an IOMMU, would be kind of curious. Apple doesn't explicitly mention it in this talk, but it is likely part of the mechanisms used to implement this.
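To make the deployment side of that concrete, here is a small sketch of the host-app half of the story using the SystemExtensions framework; the bundle identifier is hypothetical, and the driver extension itself (the DriverKit dext that actually touches hardware from userspace) is a separate target not shown here:

```swift
import Foundation
import SystemExtensions

// The host app asks macOS to activate a bundled driver extension; the dext then
// runs outside the kernel with only the privileges the system grants it.
final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate() {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: "com.example.pcie-card-driver",  // hypothetical
            queue: .main)
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties)
                 -> OSSystemExtensionRequest.ReplacementAction { .replace }

    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {}

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {}

    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {}
}
```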

Closing up Thunderclap also means the peripheral code (drivers and/or on-device firmware) needs to be stepped up to the new limited-trust context (e.g., prepare to be virtualized; be viable in PCI-SIG Single Root I/O Virtualization, SR-IOV, mode, which is part of the PCI-e standard, independent of CPU instruction sets).

The move to System Extensions started in 2019 (with macOS 10.15), before macOS 11 arrived. They are present on both the x86-64 and ARM ports of macOS 11. Intel's VT-d implementation being deficient isn't really the core issue. Apple may make their IOMMU faster. (Intel's VT-x and VT-d got feature expansions and bug fixes over time; they also evolved at different rates. But at this point they are relatively mature.)
 
No, that's Intel. The SoC in that generation, made by Intel, didn't have enough PCI-e lanes. Most PC laptops offer one, maybe two, Thunderbolt ports. So your argument makes absolutely no sense.

Not all of the SoCs in Intel's lineup that generation had that limitation. The MBP 15" didn't. This is about design-option limitations Apple chose to put on themselves. They didn't have to (e.g., thin out the four-port MBP 13" even more and jump on board with the butterfly keyboard). Intel didn't "have to" either. That chip had a max of 12 PCI-e lanes.


Intel could have done a pin-out with another 4 lanes off the CPU and 4 fewer off the PCH embedded into the package, if Apple had asked very early in the design process and laid some money on the table to make it happen, repurposing the SATA pins/pads that Apple was never going to use.


If it costs substantively more money and/or isn't on the Captain Ahab quest for the thinnest laptop, Apple tends to throw laptop I/O under the bus.
 
Not all of the SoCs in Intel's lineup that generation had that limitation. The MBP 15" didn't.

So? We are talking about the 13" MBP here. The SoC that goes in it is not the same.

This is about design-option limitations Apple chose to put on themselves.

Wrong again!

They didn't have to (e.g., thin out the four-port MBP 13" even more and jump on board with the butterfly keyboard).

Changing subjects again. What does the keyboard have to do with Thunderbolt PCI-e bandwidth?

Intel didn't "have to" either. That chip had a max of 12 PCI-e lanes.

Like I already said, that was solely on Intel and had nothing to do with Apple.

If it costs substantively more money and/or isn't on the Captain Ahab quest for the thinnest laptop, Apple tends to throw laptop I/O under the bus.

More incorrect assertions. Can you point me to a competitor's laptop in that same size class, of that same generation (model year), that had the same number of Thunderbolt ports and x4 PCI-e going to both controllers?

I'll save you the trouble.

Do you know how many non-Apple laptops have 4 Thunderbolt ports? Zero.
Many of the laptops in that list have an x2 link going to their one Thunderbolt port, never mind only one side of the 4 on the Apple laptop!!
Furthermore, having fewer PCI-e lanes on one side doesn't gimp I/O unless you connect external PCI-e devices like eGPUs or NVMe drives. Regardless, Apple laptops have more Thunderbolt connectivity and bandwidth than any other laptops on the market. Even that one model from a long time ago still has more of it than the most recent non-Apple laptops.

Your entire argument is utter nonsense!
 
Mac Pro is not something to make money off. Mac Pro is a symbol. It doesn't matter how many people actually use a Mac Pro — it is there to reinforce the "Apple is for Pros" brand. It means prestige. Abandon that and you are feeding the "Macs are just gadgets for hipsters with too much disposable income". It's all a psychological thing.

If it is such a psychological thing, how come Apple increased the entry-level pricing of the Mac Pro by 100% (from about $3K to about $6K)? That shift is not to make less profit. If Apple wanted to "give away" Mac Pros at cost, why crank up the price so high? Far more likely, the price took a large jump because Apple has applied a "low volume tax" to the Mac Pro. That 'tax' is applied primarily so that it doesn't lose money.

To the folks who wanted a relatively straightforward "box with slots" similar to the 4-6 core, 3-4 slot Mac Pro models of old, the new Mac Pro was the "hipster with too much money" product. The Mac Pro was released alongside the Pro Display XDR. The XDR stand cost $999. $999. One of the primary broad reaction memes out of the WWDC session was "Holy cow! Who is going to pay $999 for a freaking monitor stand?" Close behind were practicing very high-end colorists reacting to the notion that they should throw out their reference monitors for the XDR because the range rendering was essentially the same.

If anything was fanning the widespread "too much disposable income" flames, it is Apple's highest-end "Pro" products, not the rest of the lineup. Apple walked away from a substantial number of classic Mac Pro users with the Mac Pro 2013. Apple walked that back partially with the Mac Pro 2019, but still left folks in the $2.5-4K range in an abandoned state.


Apple is out to make money on the XDR and the Mac Pro. All Apple is perhaps not doing is putting some requirement on them to bend the top-line revenue number up substantially. However, if they don't sell enough, Apple will kill them. Apple doesn't necessarily need the Mac Pro 2019 (witness the 6 years with the Mac Pro 2013, during which Mac unit sales and revenue grew just fine; a beefed-up, up-to-date iMac Pro could sit at the top of the food chain just fine).



I think Apple realizes this very much, otherwise they wouldn't go through the trouble of building the new pro hardware.

Err, not really. If you look at the April 2017 pow-wow, Apple laid it out relatively straightforwardly. First, the iMac had made more penetration into the "Pro" space, and they had put a higher priority on doing a literal desktop iMac Pro (that would be their desktop solution).

Second, many of the entirely self-imposed restrictions on the Mac Pro 2013 didn't pan out. A substantial number of people wanted the ever "bigger", ever hotter, single high-VRAM-capacity GPU cards, and they wanted to move to newer cards without chucking the underlying infrastructure on each move. People had lots of non-GPU cards they wanted to put in, and Thunderbolt was a limitation for those in the x8-x16-lane and up range. Also, packing the CPU in very close proximity to the dGPU runs into thermal problems as you crank both higher (and GPUs are generally getting hotter faster than CPUs, so the load isn't even, which means you'll probably get more 'backwards' thermal bleed than you'd want versus if they both were radiating at the same rate, which would push heat flow "out" as opposed to "across").

There is a market of folks with money to spend here, and Apple wanted to sell to them (and that would be easier than the more generic 'box with slots' market, since there is some macOS skew in places on the software side).

At any rate, they have more than enough money to both subsidize the Mac Pro development and use approaches that other companies might consider not economically viable,

Unless the Mac Pro is paying for itself, it is likely dead. Apple partially walked away from the Mac Pro before. At each point they came back, 2013 and 2019, they cranked up the price. If Apple were primarily relying on other products to pay the freight, they wouldn't need to make that move (since it would be a subsidized product after all; just "give it" away).

and they have scalable CPU, GPU and cache technology (their memory level parallelism in particular is off the charts) to build powerful chips. The last bit of the puzzle they need is a fast interconnect technology, and given the fact that Apple has been aggressively hiring interconnect engineers for a while, it's almost certain that an Apple Silicon Mac Pro is in the pipeline.

There is probably one in the pipeline. But it is probably not the "King Kong" kind of SoC that beats everything in AMD's and Intel's workstation (single-socket server) lineups.

With Apple sizably scaling up their AI/ML subsystem, they already needed serious interconnect and bandwidth throughput improvements, even without the Mac Pro, just within the 5nm and 3nm on-die implementations.
 
but if transcoding 10K REDRAW they don't have anything special.
This is just a massive SIMD action, right? (I haven't googled 10K REDRAW.) If Apple has heard from their customers that this is important (over the two-year transition time), I'm thinking it would be relatively low effort to implement. Then again... Apple could simply concede this level of performance. Folks that need it could always just buy a dumb, fast system to transcode REDRAW into an FCPX-happy format, but still work from their Mac.
 
Apple has been using Intel's VT-d (IOMMU) to do DMA memory protection for a long while.

[...]

Apple moving to completely get rid of kernel extensions is the real deep-seated solution to Thunderclap. The Apple Silicon IOMMU isn't going to do it by itself either.

The things you say are factually correct but they don't change the overall point. Facts are: driver models on Intel and Apple Silicon Macs are identical, yet it seems that external devices on Intel Macs are not isolated properly and suffer from vulnerabilities as a result. I do not know the exact reason, but it's obvious that it has something to do with hardware. Based on what Apple is saying, we can conclude that they have designed in-house controllers for external interfaces.

Not all of the SoCs in Intel's lineup that generation had that limitation. The MBP 15" didn't.

Again a factually correct statement that obfuscates the truth. The MBP 15" used a different SKU, with more PCI-e lanes. There might have also been a difference in the associated chipset; I didn't look it up. Again, as @raknor says, Apple's implementation from 2016 is superior to virtually every other laptop even in 2020.

If it is such a psychological thing, how come Apple increased the entry-level pricing of the Mac Pro by 100% (from about $3K to about $6K)? That shift is not to make less profit. If Apple wanted to "give away" Mac Pros at cost, why crank up the price so high?

Who says anything about giving stuff away? I never claimed that it's in their best interest to hand these things to anyone who happens to come along. What I am saying is very simple: Mac Pro has to be a part of Apple's lineup, and it has to be powerful and head-turning. Because it's a symbol. High price is also part of the branding. The current Mac Pro has its buyers of course, but it's still a very niche machine. Which is exactly my point. It doesn't matter much whether it sells well or not. The value of the Mac Pro to Apple is not the revenue it generates but the fact that one can say: "look at that shiny workstation! I can't afford it and I don't need it, but Apple sure builds some nice stuff for $$$ pros".
 
but if transcoding 10K REDRAW they don't have anything special.

What is 10K Red Raw? Which Red camera can shoot 10K? Are you just making up random gibberish as you go along?

Show me which model here can shoot 10K..
 
Mac Pro has to be a part of Apple's lineup, and it has to be powerful and head-turning. Because it's a symbol. High price is also part of the branding. The current Mac Pro has its buyers of course, but it's still a very niche machine. Which is exactly my point. It doesn't matter much whether it sells well or not. The value of the Mac Pro to Apple is not the revenue it generates but the fact that one can say: "look at that shiny workstation! I can't afford it and I don't need it, but Apple sure builds some nice stuff for $$$ pros".

I agree it's important, and it can be considered part of their marketing, but I think it is more than merely a symbol. The people that buy Mac Pros are influencers--heavyweights within the "creatives" community. And having those people continue to use Macs contributes to the health and breadth of the Mac ecosystem. I.e., if it were purely a symbol, it wouldn't matter if people actually bought it. But it is important that pros do buy it for it to have its full intended non-monetary benefit for Apple.

Yes, professional GPUs support FP64 (usually at the rate of 1/2 or 1/4 of FP32). But consumer GPUs don't have any meaningful double precision support. Why would Apple need to compete in that area? If you use GPGPU workflows that rely on double precision you are most likely running it on a beefy cluster anyway.
Are you saying FP64 GPUs wouldn't be available even on a hypothetical AS Mac Pro? While I grant it is a niche case, I think scientists who would like to use Mac Pros to do development work for double-precision GPGPU workflows would benefit from such capability.
 
I agree it's important, and it can be considered part of their marketing, but I think it is more than merely a symbol. The people that buy Mac Pros are influencers--heavyweights within the "creatives" community. And having those people continue to use Macs contributes to the health and breadth of the Mac ecosystem. I.e., if it were purely a symbol, it wouldn't matter if people actually bought it. But it is important that pros do buy it for it to have its full intended non-monetary benefit for Apple.

I agree completely, that’s exactly what I meant.


Are you saying FP64 GPUs wouldn't be available even on a hypothetical AS Mac Pro? While I grant it is a niche case, I think scientists who would like to use Mac Pros to do development work for double-precision GPGPU workflows would benefit from such capability.

Metal doesn’t have double precision support, even for GPUs that do support it, that much is a fact. I think that is evidence for them not planning to introduce hardware FP64 support any time soon.

I do not know how common applications that extensively rely on FP64 are. I think that only offering FP32 in hardware plus a double-like library type makes sense for GPUs. One problem with current hardware FP64 is that it is fast on some hardware and unbearably slow on other hardware. Using software extended precision gives you predictable performance on all hardware.

And besides, if you are a scientist running particle simulations on a GPU, you are not using a Mac. You are using a large supercomputer.
 
Mac Pro has to be a part of Apple's lineup, and it has to be powerful and head-turning. Because it's a symbol. High price is also part of the branding. The current Mac Pro has its buyers of course, but it's still a very niche machine. Which is exactly my point. It doesn't matter much whether it sells well or not. The value of the Mac Pro to Apple is not the revenue it generates but the fact that one can say: "look at that shiny workstation! I can't afford it and I don't need it, but Apple sure builds some nice stuff for $$$ pros".
I agree it's important, and it can be considered part of their marketing, but I think it is more than merely a symbol. The people that buy Mac Pros are influencers--heavyweights within the "creatives" community. And having those people continue to use Macs contributes to the health and breadth of the Mac ecosystem. I.e., if it were purely a symbol, it wouldn't matter if people actually bought it. But it is important that pros do buy it for it to have its full intended non-monetary benefit for Apple.
I doubt the importance of the Mac Pro. It exists because Apple figures they can make some money in this area. The high price is not branding, the high price is “We are NOT going to sell many of these, so they will be priced accordingly.” The price likely simply comes down to the R&D expenditure divided by the number they expect to sell over the model lifetime. :)

Apple’s Mac sales are VERY likely driven more by the iPhone/iPad than any brand of Mac (Will I be able to share my bookmarks between my phone and this Mac?). Maybe back when the Mac was broadly an unknown quantity, there might have been some educational value in “Hey, everyone, the Mac can be really powerful, too!” But when today’s customers’ needs are being met, and in most cases exceeded, by the iPad line, there are very few sales, very VERY few sales, that will be driven by the existence of the very powerful, yet non-mobile, Mac Pro.
 
And besides, if you are a scientist running particle simulations on a GPU, you are not using a Mac. You are using a large supercomputer.
I meant that you might want to use a Mac workstation for development work prior to deployment. I.e., before you try deploying your N-particle simulation to the supercomputer, you might want to prototype your code on a workstation (using fewer particles), rather than doing initial development work on the supercomputer itself.

Then again, if you're a scientist running simulations on GPGPUs, most likely you still won't be using a Mac—because if you're in that category, you're probably using NVIDIA/CUDA.
 
Am I correct or wrong to say that once a Mac with Apple silicon is bought, the user cannot put in more RAM or upgrade it during the lifetime of that system? Will they have to buy a new Mac if they want larger storage or more RAM?
 
Am I correct or wrong to say that once a Mac with Apple silicon is bought, the user cannot put in more RAM or upgrade it during the lifetime of that system? Will they have to buy a new Mac if they want larger storage or more RAM?

Apple has not released any hardware details of the upcoming Macs. That said, I am fairly sure that Apple Silicon Laptops will have soldered-on RAM, just like the current ones.
 
Apple has not released any hardware details of the upcoming Macs. That said, I am fairly sure that Apple Silicon Laptops will have soldered-on RAM, just like the current ones.
I didn’t want to give the post a thumbs up, because I felt that it would indicate that I liked the statement. I don’t really, but I wholeheartedly agree. Prepare to be fleeced on memory and storage. My hope is that as customers we will be compensated by having memory performance that is much higher than would be possible with socketed DRAM. I can’t really see why the SSD couldn’t be socketed though.
 
I still haven't seen just how many cores are needed in Apple Silicon to approximate the same GPU experience as the Radeon Pro 5700 XT in the high-end iMac. The full-size card has 2560 cores. I know the iMac is using the mobile edition; still, it makes one question the simplicity of the graphics-related cores in the iPad Pro's ARM chip compared to the high-end iMac's processor and GPU. I am not being negative here, just questioning what it takes to be equivalent. 🙂

How fast are Apple’s new ARM Mac chips? It’s hard to tell - The Verge

But are Apple’s ARM chips actually powerful enough now to replace the likes of Intel and AMD? That’s still an open question — because at Apple’s 2020 Worldwide Developers Conference (WWDC), the company shied away from giving us any definitive answers.

The Metal score of the A14 is impressive. It's 137% higher than the A12 and 72% higher than the A13, according to Geekbench results. That's a lot more than the 30% higher GPU performance that Apple stated at WWDC.

A12 5307, A13 7308, A14 12571

Could that mean the A14X and A14Z will also be much faster?

A12X 10860, A14X 25725
A12Z 11665, A14Z 27632

The A12 with 4 GPU cores scores 5307. The A12Z with 8 GPU cores scores 11665. 4 extra cores means a 120% performance increase. An A14Z Mac SoC with 24 GPU cores could score 87876 in Metal. That's between the Radeon Pro W5700X and the Radeon Pro Vega II.
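For anyone checking the arithmetic, here is one way to roughly reproduce that 87876 figure; the extrapolation method (a linear gain per extra 4-core block, calibrated on the A12-to-A12Z delta and scaled by the A14/A12 ratio) is an assumption about how the projection was made, not anything Apple or Geekbench has published:

```swift
// Geekbench Metal scores quoted above.
let a12  = 5_307.0   // 4 GPU cores
let a12z = 11_665.0  // 8 GPU cores
let a14  = 12_571.0  // 4 GPU cores

// Value of one extra 4-core block, scaled from A12-class to A14-class cores.
let perFourCores = (a12z - a12) * (a14 / a12)   // ≈ 15,061

// A hypothetical 24-core part = the 4-core baseline plus five extra blocks of 4.
let projected = a14 + 5 * perFourCores          // ≈ 87,874, the ballpark quoted above
```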
 
My hope is that as customers we will be compensated by having memory performance that is much higher than would be possible with socketed DRAM.
Current Mac laptops have soldered RAM and, AFAIK, it is not more performant than DIMMs. They use the same standard.
Sure, Apple could use GDDR or even HBM as main RAM, but it's unlikely.
 