
SegNerd (original poster):
Let me be clear - I do NOT have any CSAM, I do NOT support CSAM, and I am NOT defending CSAM. But since this is a privacy issue, I am wondering...

If the hash list is going to be stored locally on our phones, wouldn't that make it possible to deliberately engineer false positives: images that match the hashes but are just a bunch of random dots? Crank out packs of 30 of them and get lots of people to upload them to iCloud. Then Apple's surveillance team gets to spend their day looking at random blobs.
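For what it's worth, the weakness being described is easy to demonstrate on a classic perceptual hash. The sketch below is a toy in Python/NumPy using a simple "average hash", not Apple's NeuralHash (a neural network that reportedly took gradient-descent tricks to attack), but it shows why a perceptual-hash preimage can be a meaningless blob of blocks:

```python
# Toy demo (NOT Apple's NeuralHash): a classic "average hash" reduces an
# image to an 8x8 luminance grid and thresholds each cell against the
# mean. Because almost all detail is thrown away, you can construct a
# meaningless blob whose hash equals any (non-degenerate) 64-bit target.
import numpy as np

def average_hash(img: np.ndarray) -> np.ndarray:
    """64-bit aHash: downsample to 8x8 block means, threshold on the mean."""
    h, w = img.shape
    small = img.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def forge_preimage(target_bits: np.ndarray, size: int = 256) -> np.ndarray:
    """Build a block image whose aHash equals target_bits by construction."""
    cell = size // 8
    img = np.zeros((size, size))
    for i, bit in enumerate(target_bits):
        r, c = divmod(i, 8)
        # bright block for 1-bits, dark block for 0-bits
        img[r*cell:(r+1)*cell, c*cell:(c+1)*cell] = 200.0 if bit else 50.0
    return img

target = np.random.randint(0, 2, 64, dtype=np.uint8)  # stand-in for a database hash
blob = forge_preimage(target)
assert np.array_equal(average_hash(blob), target)     # collision: just gray blocks
```

The published NeuralHash collisions worked in the same spirit, just by running gradient descent against the model extracted from iOS rather than by direct construction.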
 
Yes, someone has already extracted the algorithm hidden in iOS 14 (NeuralHash) and shown that this is possible, although Apple now claims that's not the algorithm they will actually be using.

And of course, if you are still concerned, you must be "ignorant".
 
I’m not saying you should do it (and I’m not saying you shouldn’t). I’m just asking whether it’s technically possible. Thank you for the info.
 
Where are you going to get that master list of hashes? The list is fully encrypted, so unless you're looking at actual CSAM to generate the hashes, that's unlikely to happen.

Also, Apple runs a secondary, independent perceptual hash check on their servers which must verify the CSAM match, so no, a photo of a bunch of dots won't make it through to human review (a sketch of the idea is below).

https://www.apple.com/child-safety/...del_Review_of_Apple_Child_Safety_Features.pdf Read this.
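The PDF's key point is that an attacker can only ever study the on-device hash; the server re-checks every candidate with a second, independent perceptual hash whose parameters never leave Apple's servers. A hypothetical sketch of that two-stage gate (all names, the data layout, and the distance threshold here are invented for illustration, not Apple's actual API):

```python
# Hypothetical sketch of the two-hash defense described in the threat
# model review. Nothing here is Apple's real code or data layout.
from typing import Callable
import numpy as np

Hash = Callable[[np.ndarray], np.ndarray]  # image -> bit vector

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def passes_both_checks(image: np.ndarray,
                       device_hash: Hash, server_hash: Hash,
                       device_db: set[bytes], server_db: dict[bytes, np.ndarray],
                       max_dist: int = 4) -> bool:
    """An image only reaches human review if BOTH hashes agree.

    A blob forged against the extractable device-side hash still has to
    collide with the independent server-side hash, which the attacker
    cannot inspect, so random-dot collisions are filtered out here.
    """
    d = device_hash(image).tobytes()
    if d not in device_db:                       # stage 1: on-device match
        return False
    reference = server_db[d]                     # server hash of the known image
    return hamming(server_hash(image), reference) <= max_dist  # stage 2
```

Defeating both checks would require knowing the second hash, which is exactly what keeping it server-side prevents.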
 
In theory it still could in some unlikely circumstances, but if it happens... so what? The human review team only sees the images that matched, not the rest of your library. So they'd effectively be looking at a bunch of noise, deem it a false match, and move on. If it somehow happened frequently enough that a particular colliding collection was known to be spreading around, Apple could slightly tweak the hashing function and that collection would no longer match.
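To make the "so what" concrete, here is a hedged sketch of the review flow as Apple's technical summary describes it: nothing about an account is decryptable below the stated 30-match threshold, and reviewers only ever see low-resolution "visual derivatives" of the matched photos, never the rest of the library. All names here are illustrative:

```python
# Illustrative sketch of the threshold-then-review flow; not Apple's code.
from dataclasses import dataclass

THRESHOLD = 30  # Apple's stated number of matches required before review

@dataclass
class SafetyVoucher:
    matched: bool             # in the real design this is hidden cryptographically
    visual_derivative: bytes  # low-res preview, undecryptable below the threshold

def images_for_human_review(vouchers: list[SafetyVoucher]) -> list[bytes]:
    matched = [v for v in vouchers if v.matched]
    if len(matched) < THRESHOLD:
        return []  # below threshold: the server learns nothing, nobody reviews anything
    # At or above threshold: reviewers see only the matched derivatives,
    # so a false-positive "blob" collection just looks like noise to them.
    return [v.visual_derivative for v in matched]
```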
 
That we know of… You are placing a lot of trust in Apple.
 
Until someone proves otherwise, we can only assume the technology works as explained. I encourage security research that exposes any and all flaws and vulnerabilities, but if we just assume malice and hidden spyware, then there's no sense storing anything in iCloud to begin with; hell, not even on your local device.

In theory, Apple could push an update tomorrow that uploads all your photos and data unencrypted to their servers, and by the time someone discovered it there would already be a lot of people on that update. Or they could push an update that contains the code but lies dormant for a week, so many more people update before the network traffic becomes apparent.

But until we have reason to believe something fishy is going on, the only thing we can do is take things as working as described and keep funding security research to probe and check; security research that Apple and others fund themselves too, by the way. If a security researcher finds an arbitrary code execution exploit in WebKit, for example, Apple will pay handsomely for it.
 
If you ignore the existence of national security letters that could order Apple to have the verification done by another party (the FBI, for instance), ignore the anti-encryption-without-backdoors sentiment in DC, and ignore Apple's cooperation with authoritarian regimes, it looks pretty good.
 