
ChrisA

macrumors G5
Original poster
Jan 5, 2006
12,918
2,170
Redondo Beach, California
What could we do with a 128-core Arm CPU with 128 PCIe 4.0 lanes and an NVIDIA Ampere GPU?

This is not a hypothetical question. The above describes a real computer. It seems the Apple M1 is just a taste of the low end of what's coming. Here is what's going on at the high end of ARM-based computing. It shows what is possible inside an ARM-based Mac Pro or high-end iMac (although Apple is likely to use their own chips, this gives an idea of what the industry can do today).
https://www.anandtech.com/show/1587...80-cores-up-to-33-ghz-at-250-w-128-core-in-q4
 
  • Haha
Reactions: idktbh

ChrisA

macrumors G5
Original poster
Jan 5, 2006
12,918
2,170
Redondo Beach, California
Until Nvidia has fixed their problems with Apple, this is not going to happen for Macs.


What could we do? It's a processor with great performance and I/O; what does a processor with great performance and I/O do?
That is my question. A high-end ARM-based processor is not what you need to watch a YouTube video. As fast as it is, the video still takes the same amount of time to play.

But what would you do with something 30x faster than an M1 chip? OK, processing eight streams of 8K video all at once takes a load of processing power, but most people never do that. Back in the 1990s, when processors reached a certain threshold of computing power, a revolution took place: we were able to use a system with windows, a mouse, and a pointer rather than a command line. It allowed non-experts to use computers.

What new revolutionary apps will we see when compute power is 100x faster than today? And it looks like this 100x is just around the corner. "Faster email" is not the answer. I'm thinking robotics, AI, and augmented reality all becoming mainstream.

BTW, it is not Nvidia who needs to fix things with Apple. Nvidia makes all their libraries available and very aggressively publishes "how to" material that developers can read or watch. Enough so that there exist open-source Nvidia drivers. One does not need to enter into an agreement with Nvidia to make their hardware "work" on an OS - see the open-source example.
 

Gnattu

macrumors 65816
Sep 18, 2020
1,106
1,668
Enough so that there exist open-source Nvidia drivers.
You must be kidding me. If you knew how the open-source Nvidia drivers are made and what the current state of that project is, you would not think that way. Intel and AMD have far better open-source drivers than Nvidia. Nvidia almost never publishes documentation that would let others write a driver for their GPUs, and the current "working" nouveau is made by "observing": developers store the state of the card before and after running a simplistic OpenGL program, then diff the two states to find out what was sent to the card. This is extremely inefficient, and it would not need to be that way; if Nvidia wanted to help, even just by providing documentation, the situation would be far better.
After Pascal, Nvidia also locked down the firmware, so the open-source drivers can no longer control the frequency of the GPU, resulting in extremely low 3D performance because the cards are stuck at their "boot frequency", which is usually the lowest working frequency. If Nvidia provided appropriate firmware this could easily be solved.
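To make the "observing" workflow concrete, here is a minimal, purely illustrative sketch in Python of the state-diffing idea: capture the card's register state before and after running a tiny test program, then diff the two dumps to see which registers the proprietary driver touched. The dump files, their format, and the register layout are all hypothetical; the real nouveau reverse-engineering work relies on dedicated MMIO tracing tools, not a script like this.

```python
# Illustrative only: diff two hypothetical register dumps ("offset value" per
# line, both hex) captured before and after running a small OpenGL test program.

def load_register_dump(path):
    """Parse a dump file of 'offset value' pairs into a dict."""
    state = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            offset, value = line.split()
            state[int(offset, 16)] = int(value, 16)
    return state

def diff_states(before, after):
    """Return registers whose values differ between the two dumps."""
    return {
        off: (before.get(off), after.get(off))
        for off in sorted(set(before) | set(after))
        if before.get(off) != after.get(off)
    }

if __name__ == "__main__":
    # Hypothetical file names for the two captured states.
    before = load_register_dump("state_before.txt")
    after = load_register_dump("state_after.txt")
    for off, (old, new) in diff_states(before, after).items():
        show = lambda v: "absent" if v is None else f"{v:#010x}"
        print(f"reg {off:#08x}: {show(old)} -> {show(new)}")
```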

One does not need to enter into an agreement with Nvidia to make their hardware "work" on an OS
This is how it should be, but Nvidia does not think so. You literally have to agree to the GeForce driver EULA before you can install their driver.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
You must be kidding me. If you knew how the open-source Nvidia drivers are made and what the current state of that project is, you would not think that way. Intel and AMD have far better open-source drivers than Nvidia. Nvidia almost never publishes documentation that would let others write a driver for their GPUs, and the current "working" nouveau is made by "observing": developers store the state of the card before and after running a simplistic OpenGL program, then diff the two states to find out what was sent to the card. This is extremely inefficient, and it would not need to be that way; if Nvidia wanted to help, even just by providing documentation, the situation would be far better.
After Pascal, Nvidia also locked down the firmware, so the open-source drivers can no longer control the frequency of the GPU, resulting in extremely low 3D performance because the cards are stuck at their "boot frequency", which is usually the lowest working frequency. If Nvidia provided appropriate firmware this could easily be solved.


This is how it should be, but Nvidia does not think so. You literally have to agree to the GeForce driver EULA before you can install their driver.
Nvidia "open-source" drivers are binary blobs. They are considered non-free and many Linux distributions won't publish them.

From Linus Torvalds: Nvidia is the single worst company we've ever dealt with.
 

Gnattu

macrumors 65816
Sep 18, 2020
1,106
1,668
Nvidia "open-source" drivers are binary blobs. They are considered non-free and many Linux distributions won't publish them.

From Linus Torvalds: Nvidia is the single worst company we've ever dealt with.
I think you misunderstand the situation somehow.
On Linux, we have two drivers for Nvidia GPUs: the open-source nouveau and the closed-source nvidia.
nouveau is free and open-source, and it is included in the Linux kernel, but it is almost useless for modern Nvidia GPUs because Nvidia does not want to help.
nvidia is the one described in your link: binary blobs, closed-source, and not in the kernel tree, so it may not work with the latest kernel and users have to wait for Nvidia to update the driver.
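For anyone who wants to check which of those two drivers their own Linux box is actually using, here is a small sketch (my addition, assuming a Linux system with /proc mounted) that scans the kernel's loaded-module list:

```python
# Check whether the open-source nouveau or the proprietary nvidia kernel
# module is currently loaded, by reading Linux's /proc/modules.

def loaded_modules():
    """Return the names of all currently loaded kernel modules."""
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f if line.strip()}

if __name__ == "__main__":
    mods = loaded_modules()
    if "nouveau" in mods:
        print("open-source nouveau driver is loaded")
    elif "nvidia" in mods:
        print("proprietary nvidia driver is loaded")
    else:
        print("neither nouveau nor nvidia appears to be loaded")
```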
 
  • Like
Reactions: jdb8167

leman

macrumors Core
Oct 14, 2008
19,521
19,675
These server CPUs have as much in common with the M1 as an industrial bucket-wheel excavator has with a sports car. It makes very little sense to compare them, as they have very different designs, very different purposes, and very different performance characteristics. 128 PCIe lanes in a personal workstation machine? Why? What for?

Also, why do you even mention Nvidia in this context? Ampere Computing is a completely different company that has nothing to do with Nvidia.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
To answer the OP's question: nothing.
It's not a computer configuration targeted at single-user scenarios. It IS possible to write code that:
a) is embarrassingly parallel,
b) is perfectly load balanced,
c) works on cache-resident data, and
d) does a lot of work on each little patch of data - otherwise you will get bottlenecked by the memory subsystem anyway.

Codes that fulfill these requirements are used in benchmarketing, as they are the only ones that show much benefit from what the vendors want to sell to you (a rough sketch of such a workload follows at the end of this post).

I have never written any code that fulfilled the criteria above. Point d) above killed off any reasonable candidates.
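To make criteria a) through d) a little more concrete, here is a rough toy sketch in Python (not a real benchmark, and a real scaling study would use compiled code): a set of independent, equal-sized tasks, each doing a lot of arithmetic on a small, private chunk of data. Workloads that do not look like this tend to hit the memory wall long before they can keep 128 cores busy.

```python
# Toy example of a workload matching points a)-d) above: independent tasks
# (embarrassingly parallel), identical sizes (load balanced), small per-task
# working sets (cache resident), and many operations per element (so memory
# bandwidth is not the limiting factor).
import time
from multiprocessing import Pool

CHUNK = 20_000   # elements per task; small enough to stay cache resident
ITERS = 200      # arithmetic passes per task; keeps arithmetic intensity high

def kernel(seed):
    """Do a lot of math on a small, private chunk of data."""
    x = [(seed + i) * 1.0001 for i in range(CHUNK)]
    for _ in range(ITERS):
        x = [v * 1.0000001 + 0.5 for v in x]
    return sum(x)

def run(workers, tasks=64):
    """Time the same set of tasks with a given number of worker processes."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(kernel, range(tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} worker(s): {run(n):.2f} s")
```

On a machine with enough cores, the timings should drop roughly in proportion to the worker count; swap the tight arithmetic loop for something that streams over a large array and the scaling disappears, which is exactly the point d) problem.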
 