Thanks @crazy dave, these are all great points!
I thought I laid it on thick enough, but apparently not.

Well, you definitely fooled me!
It's obvious to me current Intel-based Macs can support resizable BAR. But it's also obvious to me that it just doesn't matter. To explain why these things are obvious and true takes a lot of words, unfortunately.

It's far from obvious whether current Intel-based Macs are in principle capable of supporting resizable BAR. And frankly, even if they were, I don't really see how one can blame Apple for not investing resources into implementing these features for practically obsolete machines, especially given that resizable BAR mostly benefits high-end games (and even then, marginally), which are simply not available on macOS.
But isn't CUDA far superior to whatever is the alternative? All I've heard is that CUDA just works and is better than anything else other companies have come up with. Isn't that just good competition? If someone else can come up with a better software and hardware implementation, why don't they?

Perhaps Nvidia didn't agree with the OpenCL vision? Maybe it hamstrung their innovation?

How could you be ok with everything Apple does, but hate on Nvidia for doing the exact same thing?

Nvidia has been very innovative in both software and hardware.

Well, CUDA? There was a perfectly fine GPGPU API around - OpenCL, developed by Apple, managed by Khronos, and fully embraced by AMD. Instead of helping to nurture this vision of a unified GPU compute API, Nvidia used their market-leader position to sabotage its adoption, pushing their own CUDA instead and denying the HPC GPU market to their competitors. Of course, it was a move that made perfect sense from a business perspective, but it did end up making things worse for everyone else - especially the users - because now tons of useful software is locked behind the parasitic CUDA.
Probably the main reason CUDA is superior is that Nvidia has been investing significant resources into improving it while actively sabotaging other initiatives they have been involved in. CUDA is not that different from OpenCL. It is just packaged differently — but the main feature of CUDA is that it is Nvidia-exclusive.
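To make the "packaged differently" point concrete, here is a minimal vector-add sketch (my own illustration, not code from either vendor's samples; names like vec_add are made up). The device code is almost line-for-line identical between the two APIs:

// CUDA C: each thread adds one element of the two input arrays.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

// The equivalent OpenCL C kernel, for comparison:
//   __kernel void vec_add(__global const float* a, __global const float* b,
//                         __global float* c, int n) {
//       int i = get_global_id(0);  // global work-item index
//       if (i < n) c[i] = a[i] + b[i];
//   }

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps this sketch short; a real app might use
    // explicit cudaMalloc/cudaMemcpy instead.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);  // one thread per element
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

The kernel-side differences amount to spelling (qualifiers and the index lookup). The lock-in comes from the host-side tooling and the libraries built on top, not from the kernel language itself.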
Oh, I completely agree. Again, as I wrote before, I believe that the way to go for GPU programming is every vendor having their own proprietary interface, with open "APIs" simply being libraries that leverage these interfaces. My gripe with Nvidia is not the fact that they roll their own thing but the active manipulation and suppression that they have been exercising using their position as a market leader.
Nvidia has been very innovative in both software and hardware.

I am not sure I agree. Nvidia has been very innovative in marketing and strategising. They have also been employing cut-throat techniques such as manipulation and ruthless suppression of competition. I don't find their hardware or software to be very remarkable, to be honest. For example, most of the performance of the critically acclaimed 3000 series comes from the fact that Nvidia significantly increased the power consumption of their GPUs.
I do admit of course that their business strategy is spot on. They invested in the right things at the perfect time - for instance, their ML accelerators, RT units, or tile-based rendering. None of these is a particularly impressive technology, but each was brought to market exactly when it was needed.
How could you be ok with everything Apple does, but hate on Nvidia for doing the exact same thing?

Are they doing exactly the same thing though? Apple is rolling their own API because the mainstream stuff — dictated by majority market leaders who have different technology — did not work for them. Besides, Apple has its own separate software and hardware platform with different rules and a different programming model anyway. Nvidia, on the other hand, relies on the wider mainstream software and hardware ecosystem. In regards to CUDA specifically, Nvidia used their leadership position to actively sabotage an open-source effort they themselves were part of, in order to lock out other hardware vendors. I don't recall Apple doing anything like that. To put it differently, Apple refusing to use Vulkan (whatever their motivation might be) does not threaten Vulkan as a viable GPU API. However, Nvidia refusing to properly support OpenCL did deny AMD access to the GPGPU market and effectively made things worse for all GPU users — because Nvidia has basically buried the vision of a unified cross-platform GPU compute API.
Can you explain how Nvidia hampered the development of OpenCL on non-Nvidia devices?
Nvidia has been very innovative in both software and hardware.

According to Wikipedia: https://en.wikipedia.org/wiki/Graphics_processing_unit
- Nvidia created the first consumer-level card released on the market with hardware-accelerated T&L
- Nvidia was first to produce a chip capable of programmable shading.
- Nvidia released CUDA before Khronos released OpenCL.
But that's only because Apple has a very small market share. What if MS or nVidia prevented the use of Vulkan like Apple does?
nVidia sabotaged OpenCL by simply not supporting it properly. Apple never supported Vulkan. And it is Apple itself that put the last nail in the OpenCL coffin when they adopted Metal.
As for Apple never "actively" sabotaging the competition (as was suggested), it seems that's exactly what they're doing with Apple Arcade. They're paying developers so that their games aren't available on Android or other subscription platforms. How is that good for users?
Metal predates Vulkan.
Vulkan evolved from Mantle and predates Metal.
Yes, but Mantle was AMD tech. Apple needed something that would work with Intel iGPUs in the Mac, in addition to iOS/iPadOS/tvOS devices.
The 12900HK is not even close to the M1 Max based on real tests. It consumes way more power, so their marketing chart is totally fake. Intel's Alder Lake just consumes too much power, so it's stupid to even dare compare it with the M1 series.
The Intel 7 node is more like TSMC 10nm, so with the IPC improvement I'm guessing it'll be close to AMD at similar wattage.
Not all Apple Arcade games are exclusives.
Intel 7 (née Intel 10nm ESF) is close to TSMC 7nm+. Intel 10nm was close to TSMC 7nm.
The 12900HK is not even close to the M1 Max based on real tests.
For most laptop use cases, the end user wants more battery life. When you've hit the 100 Wh airline limit, what else can you do but bring a 100 Wh power bank?
The processor with fewer threads (the 12900K) has to run at a higher frequency to match or exceed the one with more threads (the 5950X), and we know that power does not increase linearly with frequency, while it increases roughly linearly with the number of cores.

The 12900K scores 27401 on Cinebench R23 at a peak of 251 W.
The 5950X scores 24071 on Cinebench R23 (I also get 23794 to 24371 on air cooling, at a peak of 123 W).
So the 12900K is ~14% faster at ~2x the wattage.
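As a back-of-the-envelope check of why that happens (my own numbers, using the standard dynamic-power model and assuming voltage scales roughly with frequency):

P_{\mathrm{dyn}} \propto C\,V^2 f, \qquad V \propto f \;\Rightarrow\; P_{\mathrm{dyn}} \propto f^3

\frac{27401}{24071} \approx 1.14 \ \text{(performance ratio)}, \qquad \frac{251\,\mathrm{W}}{123\,\mathrm{W}} \approx 2.04 \ \text{(power ratio)}

A roughly cubic dependence on frequency is exactly why squeezing the last ~14% out of fewer cores can cost double the power, while adding cores at a fixed frequency scales power only linearly.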