Status
Not open for further replies.

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
That's BS along the lines of 8GB on M1 equals 16GB, the M1 iGPU being zero-copy while x64 is not, the M1 iGPU equaling high-end discrete graphics, etc. Reality is AMD makes CPUs and GPUs for the 90%+ PC market share and only the 7nm fab node can provide that capacity. Nothing is stopping AMD from allocating 5nm or next-gen node capacity at TSMC, but there's less of a need since their current 7nm surpasses 5nm M1 performance.
I'm surprised no one has pushed back on you on this. Apple is by far the largest consumer of TSMC semiconductor capacity.
 

Appletoni

Suspended
Original poster
Mar 26, 2021
443
177
Benchmark: LC0 (chess) speed

M1 GPU = 300 nps

Other mobile GPUs = 50000 - 100000 nps

Does it have tensor cores?



Does it have raytracing cores?
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,297
If you want GPU-accelerated LC0 you're better off with a Mac/MacBook + High Sierra + CUDA + Nvidia eGPU. Or, wait for Big Sur 11.4 to be released + OpenCL + AMD 6800 series eGPU. I don't think the M1 supports eGPUs.
 

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
[MOD NOTE]

A number of posts were removed for various rules violations. If you're not interested in discussing this topic, then please move on, and as always please be respectful.

MacRumors Rules for Appropriate Debate:
Respect
Guidelines: Show respect for your fellow posters. Expect and accept that other users may have strongly held opinions that differ from yours. In other words, basic human courtesy.

Rules:
  1. Name-calling. Name-calling falls into the category of insults and will be treated as such according to the forum rules, your own opinion about another member notwithstanding. You can't call a bigot a bigot, a troll a troll, or a fanboy a fanboy, any more than you can call an idiot an idiot. You can disagree with the content of another member's statement or give your evidence or opinion to dispute their claims, but you may not make a negative personal characterization about that member.
  2. Hate speech and group slurs. We prohibit discrimination, abuse, threats or prejudice against a particular group, for example based on race, gender, religion or sexual orientation, in a way that a reasonable person would find offensive.
  3. Taunting. Mocking or taunting another forum member is not acceptable. Posts that ridicule another member or obviously exaggerate or misstate their views (including purposely mis-quoting them) may be removed.
 
  • Like
Reactions: BigMcGuire

d5aqoëp

macrumors 68000
Feb 9, 2016
1,809
3,189
Apple’s first-gen products are always underpowered in the GPU department, be it the 1st iPhone or the 1st iPad.
 

BigMcGuire

Cancelled
Jan 10, 2012
9,832
14,032
My M1 runs StarCraft 2 on a 4K monitor almost as well as my eGPU’d RX 580. None of the Intel integrated graphics could come close to that. Uh, I’m impressed and very happy. I’ll probably trade in and upgrade to the M2 or M1X when it comes out in the summer.

The fact that the M1 only came out in the 2-port Macs tells me (and anyone who knows anything about Apple products) that it was not their high-end option either way.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
If you want GPU-accelerated LC0 you're better off with a Mac/MacBook + High Sierra + CUDA + Nvidia eGPU. Or, wait for Big Sur 11.4 to be released + OpenCL + AMD 6800 series eGPU. I don't think the M1 supports eGPUs.
The M1 doesn’t support eGPUs mostly because there are no drivers for discrete GPUs and so far Apple hasn’t shown any inclination to release any. (Or to allow any third-parties to add them.) The actual Thunderbolt enclosure shows up in various device lists but the GPUs aren’t recognized.

The M1 has the best performing integrated GPU available but M1 Macs aren’t the best choice right now if you need a high performance discrete GPU.
 

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,297
Chess is more relevant than Geekbench, which gets brought up more often. Chess is the precursor to AI. A lot of the elites of current-day AI either started out playing chess, wrote chess engines and/or computer AI games, like Demis Hassabis. It's a field with no ceiling on writing your own paycheck. Not a topic for the typical Joe though. Maybe start a thread comparing Geekbench scores if it's not for you.

https://en.m.wikipedia.org/wiki/Demis_Hassabis
 
Last edited by a moderator:

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Chess is more relevant than Geekbench, which gets brought up more often. Chess is the precursor to AI. A lot of the elites of current-day AI either started out playing chess, wrote chess engines and/or computer AI games, like Demis Hassabis. It's a field with no ceiling on writing your own paycheck. Not a topic for the typical Joe though. Maybe start a thread comparing Geekbench scores if it's not for you.

https://en.m.wikipedia.org/wiki/Demis_Hassabis
That‘s just silly.

Geekbench includes workloads for cryptography, image and text compression, navigation, HTML5, SQLite, PDF and text rendering, Clang, camera, physics, Gaussian blur, face detection, horizon detection, image inpainting, HDR, ray tracing, structure from motion, speech recognition, and machine learning.

”Chess” covers a tiny fraction of that.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Apple’s first-gen products are always underpowered in the GPU department, be it the 1st iPhone or the 1st iPad.

The M1 GPU is fine if you consider its context. It’s not particularly beastly, but it’s sufficient for what it’s designed for.

I had a brief look at the Lc0 project page; they don’t mention any specific support for Apple’s ML frameworks, so it likely does not use the M1 properly or efficiently.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
I thought they figured out that it was a memory issue, and the performance problems were solved by setting the lookup hash to the proper size?
Exactly. One of the more useful aspects of the discussion. One poster discovered that the M1 native version of Stockfish met their needs after all.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
A lot of the elites of current day AI either started out playing chess, wrote chess game and/or computer AI games like Dennis Hassabis. It's a field with no ceiling on writing your own paycheck. Not a topic for typical Joe though.
But it really is for the typical Joe! Because it's so simple, it's the perfect entry point for learning how to solve simple games. Anyone who wants a challenge or has basic knowledge in the field surely isn't using chess anymore. There are far more interesting and complex games out there to solve with AI than chess.
 
  • Like
Reactions: JMacHack

mi7chy

macrumors G4
Oct 24, 2014
10,625
11,297
But it really is for the typical Joe! Because it's so simple, it's the perfect entry point for learning how to solve simple games. Anyone who wants a challenge or has basic knowledge in the field surely isn't using chess anymore. There are far more interesting and complex games out there to solve with AI than chess.

Better exercise is getting an OS installed with more than one drive. Any progress?

Snippets of synthetic code don't compare to real-world applications. Guess nothing was learned from Jim Keller.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
Better exercise is getting an OS installed with more than one drive. Any progress?
Installing Big Sur on an external drive seems pretty reliable now if you are using a Thunderbolt NVMe drive. Anything else is still hit or miss. Don't even bother trying with a spinning rust HDD.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
Better exercise is getting an OS installed with more than one drive. Any progress?
All done, you’re not following. It's entirely a Windows installer problem, which was confirmed by a few people I know who work or worked for Microsoft (it just took them a while; like most people, they’re busy). They even have an internal compatibility list showing which hardware this problem shows up on and which hardware is fine. No one was able to tell me why they don’t make it public though. Plenty of people are having issues with this all over the net.

Point still stands, chess is somewhat boring from an AI point of view. Much more interesting stuff out there, especially when real world applications are of interest.
Installing Big Sur on an external drive seems pretty reliable now if you are using a Thunderbolt NVMe drive.
He’s referring to another thread where I reported on the botched Windows installer and what a piece of junk it is. Depending on hardware, it can have issues installing Win10 when multiple UEFI drives are present, which I later had confirmed by people who worked on the OS. Linux and macOS (Hackintosh) never had a problem, while the Windows installer would just quit without giving a proper reason.

Chess is interesting for those who have it as a hobby. I played chess as a kid, and I had a quick look at it again before I started my PhD in computer science (AI related). These days some of my students still start with chess before moving on to more interesting and complex things for their bachelor theses, which are mostly focused on real-world problems. My master's students can’t be bothered with chess either; they do pure theory with a mathematical background, or focus on things that are widely used in research (if they want to go the PhD route) or that have strong ties to real-world applications, to give them a good start in industry after their thesis.
 
Last edited by a moderator:
  • Like
Reactions: jdb8167

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Chess is more relevant than Geekbench, which gets brought up more often. Chess is the precursor to AI. A lot of the elites of current-day AI either started out playing chess, wrote chess engines and/or computer AI games, like Demis Hassabis. It's a field with no ceiling on writing your own paycheck. Not a topic for the typical Joe though. Maybe start a thread comparing Geekbench scores if it's not for you.

https://en.m.wikipedia.org/wiki/Demis_Hassabis
Damn, you should write for Anandtech and tell them Geekbench isn’t relevant anymore now that we have an obscure chess benchmark.

Further than that, we have an actual guy who designed (still designs?) CPUs, in this very thread, who says the M1 is impressive.

Apple released a cpu that sips power like a mobile chip and punches like a desktop chip. That’s a W.
 

Kung gu

Suspended
Oct 20, 2018
1,379
2,434
The M1 is made for low-end Macs (still great, but I want more power). I want to see the power of the high-end Macs, starting with the 14" and 16" MacBook Pros.
 

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
Genuine question: Does this benchmark utilize the neural engine and machine learning features of M1? If AI is one of the purposes of this benchmark, you’d think you would want to actually test the hardware that was designed for that purpose.
 
  • Like
Reactions: AdamInKent

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Genuine question: Does this benchmark utilize the neural engine and machine learning features of M1? If AI is one of the purposes of this benchmark, you’d think you would want to actually test the hardware that was designed for that purpose.

From what I’ve seen it doesn’t utilize anything. It has an OpenCL backend (no idea if it’s even active in the Mac build) but most effort seems to be directed at the CUDA implementation.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
Genuine question: Does this benchmark utilize the neural engine and machine learning features of M1? If AI is one of the purposes of this benchmark, you’d think you would want to actually test the hardware that was designed for that purpose.
I was actually waiting for the chess defender to inform us about this. Guess googling it is taking a while. But since you ask, you're spot on. The answer is no. There's also no reason to utilize the NE. This is not what Stockfish is about; it uses a different approach, which is why actual learning approaches without historic data can be much better at playing chess. It all depends on the setup and goal.

I remember an older paper that pointed out that most if not all chess engines use heuristically determined subsets of moves, make those moves, and then recursively do the same for the new positions over and over again. This is in general bad for GPUs, as recursion performance is pretty poor on GPUs. OpenCL used to not support recursion, as it's not supported by all hardware. I'm mostly using CUDA for my research nowadays, but maybe @leman can shed some light on whether this is supported right now.

CUDA does support recursion, however it relies heavily on the per-thread stack size, and those stacks are not that large. When overflowing, frames get spilled to global memory, which introduces further latency and makes things inefficient. That's why it's usually avoided on GPUs, especially when the depth and width aren't known in advance. Then again, the M1 features unified memory, so this should be a major benefit for such problems. So the big question is: does GPU/NE recursion work in the current Apple ecosystem? I remember errors calling recursive functions from kernels, but that could have changed with newer versions of Metal and the M1.
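To make the pattern concrete, here is a minimal sketch of that recursive search shape (my own toy example in Python, not taken from any real engine): enumerate the legal moves, apply each one, and recurse on the resulting position, negating the opponent's score on the way back up.

```python
# Toy negamax over a trivial Nim-like game: a pile of N stones, a move
# removes 1 or 2 stones, and the player who cannot move loses. The point
# is the *shape* of the search: enumerate moves, apply, recurse.

def negamax(stones: int) -> int:
    """Return +1 if the side to move wins with perfect play, else -1."""
    moves = [m for m in (1, 2) if m <= stones]
    if not moves:
        return -1  # no legal move left: the side to move loses
    # Each child position is scored from the opponent's point of view,
    # so the score is negated on the way back up the call tree.
    return max(-negamax(stones - m) for m in moves)

# Piles that are multiples of 3 are losses for the side to move:
print(negamax(3), negamax(4))  # -1 1
```

On a CPU this recursion is cheap; on a GPU every level of that call tree needs stack space per thread, which is exactly the constraint described above.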
 
Last edited by a moderator:

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I was actually waiting for the chess defender to inform us about this. Guess googling it is taking a while. But since you ask, you're spot on. The answer is no. There's also no reason to utilize the NE. This is not what Stockfish is about; it uses a different approach, which is why actual learning approaches without historic data can be much better at playing chess. It all depends on the setup and goal.

If I see it correctly, the second linked engine, Lc0, uses neural networks, so it could definitely benefit from ML acceleration.

I remember an older paper that pointed out that most if not all chess engines use heuristically determined subsets of moves, make those moves, and then recursively do the same for the new positions over and over again. This is in general bad for GPUs, as recursion performance is pretty poor on GPUs. OpenCL used to not support recursion, as it's not supported by all hardware. I'm mostly using CUDA for my research nowadays, but maybe @leman can shed some light on whether this is supported right now.

Metal has supported limited recursion since last year, as you can't really do ray tracing without it, but I would not rely on it for general-purpose computation. Recursion has a tendency to introduce divergence, which takes away the very thing GPUs are good at (executing the same code for different data points).

But for the case you describe, do you even need recursion? Sounds to me like one could do it iteratively, which is again something a GPU would be good at. You could probably even have every execution unit do a separate move simulation and run thousands of such simulations at a time, but you might run into the limits of the GPU register file and shared-memory sizes (I have no idea how much space these chess algorithms require).
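A sketch of what that iterative rewrite could look like (my own toy example in Python, over a trivial Nim-like game: a pile of stones, a move removes 1 or 2, and the player who cannot move loses): the implicit call stack is replaced by an explicit stack of frames, which is the usual way to make a depth-first search GPU-friendly.

```python
# Iterative negamax with an explicit stack instead of recursion. Each
# frame holds the position, the moves still to try, and the best score
# found so far (None until the first child has been scored).

def negamax_iter(stones: int) -> int:
    """Return +1 if the side to move wins with perfect play, else -1."""
    stack = [[stones, [m for m in (1, 2) if m <= stones], None]]
    score = None
    while stack:
        state, moves, best = stack[-1]
        if moves:
            # Descend into the next unexplored child position.
            child = state - moves.pop()
            stack.append([child, [m for m in (1, 2) if m <= child], None])
        else:
            # Frame finished: a position with no moves at all is a loss.
            score = best if best is not None else -1
            stack.pop()
            if stack:
                # Propagate the negated score into the parent frame.
                parent = stack[-1]
                if parent[2] is None or -score > parent[2]:
                    parent[2] = -score
    return score

print(negamax_iter(3), negamax_iter(4))  # -1 1
```

Mapped to a GPU, each thread (or threadgroup) could run one such loop over its own frame array in private or shared memory, sidestepping per-thread call-stack limits at the cost of managing the frame storage yourself.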
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
If I see it correctly, the second linked engine, Lc0, uses neural networks, so it could definitely benefit from ML acceleration.
Lc0 is different from Stockfish. Same developers though, but Lc0 is NN-based. This indeed could benefit from NE inference.
But for the case you describe, do you even need recursion? Sounds to me like one could do it iteratively, which is again something a GPU would be good at.
That's another story, yes: switching from recursion to iteration. It can probably be done without too much code overhead, but I've never seen anyone do it for these types of problems. That being said, ray tracing can be iterative as well; it's just not widely used. I've never done it myself. I wrote two different raytracers and successfully got around it. But there are a couple of papers on it, so it should work.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,145
1,901
Anchorage, AK
Hello again, I think you misunderstood, I am not the OP of this thread. As I said before, after adjusting certain settings, Stockfish (a real-world app, not a benchmark) does perform great and almost beats out Intel.
A NICHE application that the vast majority of Mac and PC users would never run. Either way, it's a garbage way of comparing performance between M1 and x86-based machines.
 
  • Like
Reactions: JMacHack