Geekbench AI was announced today. There are a number of scores that one can browse here:
Apart from the usual interest in score comparisons, what I found interesting is evidence supporting what @leman and @name99 have stated previously. That is, while people assumed Apple was “marketing” the data by quoting Int8 rather than the Fp16 figures it used to quote when discussing the Neural Engine, it turns out there isn’t as much difference between the two on Apple Silicon as there is on other platforms.
If we look at some figures from this tweet, we can see that Fp16 performance is usually about half that of Int8. On Apple Silicon devices, that doesn’t seem to be the case: Int8 is still higher, but the two are quite close.
Interestingly, a couple of results for the M4 iPad Pro on iOS 18 show significant improvements in scores. So much so that I am a little sceptical until they are verified.
https://browser.geekbench.com/ai/v1/4960
This one as well, but without a direct link, so I’m even more sceptical.
In any case, @leman and @name99 it looks like you were correct.