In speed tests, the A13 is faster at exporting 4K files than the Snapdragon 865.
Not even close to the Snapdragon 855! Yes, 855, not 865.
iPhone 11 Pro Max, KineMaster 4K video export: 1 minute 15 seconds. Pixel 4 with the SD855, same video: 35 seconds... yeah.
 
It's not Qualcomm that says "FASTEST IN ANY PHONE EVER ON THIS UNIVERSE BY 50%". IT'S APPLE
So? They can still be right. I ask again, what is the metric used here? Neither Apple nor Qualcomm specifies it, so it's up to interpretation what it actually means, and Apple could very well be right. Btw, the number of operations per second tells very little. For example, let's say your data is x bits wide, but you can only perform operations on x/2 bits at a time; then you have to split it across registers and perform multiple operations (at least two).

It's like asking someone how much money he wants... "10k or 100k"? The first intuition would be 100k of course, but the problem here is that the 100k is in Japanese yen (about 1k in US dollars) and the 10k is in US dollars. I'd take the 10k over the 100k then, even if the 10k seems like less at first sight.

Or ask someone how fast they're driving. One person says 150, the other 100. The 150 is in km/h, the 100 in mph. If I'd like to travel fast, I'd take the 100 in that case.
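To make the register-splitting point concrete, here's a minimal Python sketch (the 32/64-bit widths are just an illustrative assumption, not what either chip actually does): adding two 64-bit values on hardware that only has 32-bit operations takes two adds plus carry handling, so raw ops-per-second counts at different widths aren't comparable.

```python
MASK32 = 0xFFFFFFFF  # low 32 bits

def add64_via_32bit(a: int, b: int) -> int:
    """Add two 64-bit integers using only 32-bit-wide operations,
    the way hardware with half-width registers would have to."""
    lo = (a & MASK32) + (b & MASK32)            # first 32-bit add
    carry = lo >> 32                            # carry out of the low half
    hi = ((a >> 32) & MASK32) + ((b >> 32) & MASK32) + carry  # second add
    return ((hi & MASK32) << 32) | (lo & MASK32)  # wraps at 64 bits

# Two narrow "operations" did the work of one full-width operation.
print(add64_via_32bit(2**40 + 5, 2**40 + 7))
```

So a chip quoting twice the ops/sec at half the width isn't actually doing any more useful work per second.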
 
Read carefully! It says trillions of operations, not floating point, therefore it's basic ops, integer. Both are using the same comparison, but Apple is falling behind by almost 50% with the latest A14. The A13 is probably falling behind by 100%... yet up until the other day almost everyone told us the A13 was the fastest chip in the galaxy. WRONG!
 
What integer? Standard 32-bit Int? Int64? Int128?

Also, a little hint... we're talking about the Neural Engine on Apple's site and the Hexagon 698 on Qualcomm's, not the whole SoC, right? Using integers in a neural engine, the 698, or anything that computes inference with a neural network model is probably the worst idea anyone can have. Neither Apple nor Qualcomm is stupid enough to take a massive loss in precision or add the massive overhead of a numerical system to deal with this. The NE/698 are not running CAD software, where that's usually the case. So it's a safe bet that floating point is used in both.

When was the last time you've seen an integer-based neural network? When was the last time you've seen one used on a production system and not in a lab environment?

Also, for reference, what is the actual claim here? Does Apple say they have the fastest neural engine, or the fastest part of an SoC that deals with inference throughput, or are they saying they have the fastest SoC with the A14? The 15 TOPS figure for Qualcomm is for the 698 only as well, not the full 865 SoC.
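And since neither company says whether a claimed "operation" is int8, int16, or fp16, the TOPS figures aren't directly comparable anyway. Here's a rough Python sketch (the scale factor and vector sizes are made up purely for illustration) of what integer quantization costs in precision compared to a plain floating-point dot product:

```python
import random

def quantize_int8(xs, scale):
    """Map floats to int8 range (-128..127) with a given scale factor."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

random.seed(0)
a = [random.uniform(-1, 1) for _ in range(256)]
b = [random.uniform(-1, 1) for _ in range(256)]

scale = 1 / 127  # hypothetical per-tensor scale covering [-1, 1]
qa, qb = quantize_int8(a, scale), quantize_int8(b, scale)

exact = sum(x * y for x, y in zip(a, b))                     # float dot product
approx = sum(x * y for x, y in zip(qa, qb)) * scale * scale  # int8, rescaled

print(f"float: {exact:.6f}")
print(f"int8:  {approx:.6f}  (error {abs(exact - approx):.2e})")
```

Cheaper integer ops buy you a higher TOPS number at the price of precision, which is exactly why "15 trillion" vs "11 trillion" means nothing without knowing the data type.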
 
But 15 trillion is a bigger number than 11 trillion! Bigger is better!
7nm is better than 5nm!
865 is a waaaay bigger number, so it's better than A14!
698? Just look at that big number, never seen before on an Apple chip!
Just look at those HUUUUUUUUUUUUGE numbers!
/s
 
Indeed. It's free, it's open source, you can do your own benchmarks and submit them. Read the corresponding paper.
Nothing here...
 
Speed was never an issue when I needed to upgrade my phone, so I don't really get the hype when they talk about a faster/better processor.
 
Yes, nowadays all CPUs (from the same year) are pretty much equal and also good enough, but the way they're marketed is what I don't agree with.
 
What do you mean, nothing there? I literally linked you to the repository containing all the information, configuration files and source code. Of course you're not going to find it in an app store. This is for developers and researchers, so you'll have to clone the repository, make changes to the configuration, add your own model or use an existing one and build the whole thing to be able to run it. Start by reading the paper and keep going from there: https://arxiv.org/abs/1911.02549.
 

Clearly that's beyond his ability... as is reading, since he's trying and failing to troll in a second post about the Dolby Vision capability and can't even make sense of a simple sentence. He's an Android lover with his LG phone. Enough said.
 
Cry me a river! You don't agree with the marketing. LMAO! 🤣
Nowadays we all have many phones and also get to play with many others each day. I've played a lot with LG, S20 FE, S10 Plus, A71, iPhone 11, Note 9, and let me tell you, all are more than OK. So what's your point? Do you agree with these "best ever in any" claims even though they were caught with numbers lower than competitors'?
 
Looking forward to the answers to this...


You are asking far too much of him now
 
I don't take marketing literally. Like with all advertisements, I understand that there's an agenda behind every ad. I also don't waste my time complaining about it. Grow up.
 
Fastest CPU ever... But only in Geekbench.... 😂 😂 😂 Which is not open source software... 😂 😂 😂
 
In the meantime, have you learned how to read a scientific paper and how to use MLPerf (which is open source)?
I'm a bit surprised someone is asking for open source, but is then searching for open source software in an app store, because he can't figure out how to run the open source software... 😂😂😂
 
Just had a quick look at Geekbench... what in the world is there not to understand about how they run the benchmark for the Neural Engine? It's literally stated in text what they're doing:
The Machine Learning workload is an inference workload that executes a Convolutional Neural Network to perform an image classification task. The workload uses MobileNet v1 with an alpha of 1.0 and an input image size of 224 pixels by 224 pixels. The model was trained on the ImageNet dataset.

Here's the corresponding paper for MobileNets: https://arxiv.org/pdf/1704.04861v1.pdf
And some code snippet: https://github.com/osmr/imgclsmob/b...54e/pytorch/pytorchcv/models/mobilenet.py#L14
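For context on why MobileNet v1 is a reasonable benchmark workload: the paper's whole point is that depthwise separable convolutions need far fewer multiply-adds than standard convolutions. A quick Python sketch of the cost formulas from that paper (the example layer sizes are mine, not Geekbench's exact configuration):

```python
def standard_conv_mults(dk, m, n, df):
    """Multiply-adds for a standard convolution: Dk*Dk*M*N*Df*Df."""
    return dk * dk * m * n * df * df

def depthwise_separable_mults(dk, m, n, df):
    """Depthwise (Dk*Dk*M*Df*Df) plus pointwise (M*N*Df*Df) multiply-adds."""
    return dk * dk * m * df * df + m * n * df * df

# Example layer: 3x3 kernel, 64 input channels, 128 output channels,
# 56x56 feature map (sizes chosen only to illustrate the formula)
std = standard_conv_mults(3, 64, 128, 56)
sep = depthwise_separable_mults(3, 64, 128, 56)
print(f"reduction factor: {std / sep:.1f}x")
```

The reduction works out to 1/N + 1/Dk², so roughly 8-9x fewer multiply-adds here, which is why the model is light enough to run as a phone inference benchmark.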
 