That’s a quote from the original article that I remembered, not my own view. I’m not part of that research team. Yeah, it sounds emotional, but humans are emotional creatures, so can’t really fault them on that one.
Oh sorry! I didn’t mean you were putting that word in there! I meant that the research team was injecting their own emotion.
 
  • Like
Reactions: Shirasaki
That’s a quote from the original article that I remembered, not my own view. I’m not part of that research team. Yeah, it sounds emotional, but humans are emotional creatures, so can’t really fault them on that one.

Destructive is usually not emotional.

A virus (PC) can be destructive.
Dynamite can be destructive.
A false accusation (pedo) can be destructive.

Depending on the topic, people can see a “destructive item / event” and have an emotional response.
 
  • Like
Reactions: Shirasaki
Oh sorry! I didn’t mean you were putting that word in there! I meant that the research team was injecting their own emotion.
So I found the article https://www.engadget.com/princeton-university-researcher-apple-csam-oped-162004601.html

Sadly it doesn’t mention any technical detail of interest, but rather addresses the abuse angle. This leads me to think: maybe the tool Apple is developing is actually so good at doing its job that it’s being considered dangerous rather than ineffective. And solving people’s problems is never Apple’s specialty.
 
So I found the article https://www.engadget.com/princeton-university-researcher-apple-csam-oped-162004601.html

Sadly it doesn’t mention any technical detail of interest, but rather addresses the abuse angle. This leads me to think: maybe the tool Apple is developing is actually so good at doing its job that it’s being considered dangerous rather than ineffective. And solving people’s problems is never Apple’s specialty.

This is absolutely the case. This system would be very effective. People aren't worried about it not working. They're worried about it working too well, and being used in contexts beyond CSAM.

It's a valid concern, though it's not like a government has needed to wait for a system like this to begin to take action.
 
So I found the article https://www.engadget.com/princeton-university-researcher-apple-csam-oped-162004601.html

Sadly it doesn’t mention any technical detail of interest, but rather addresses the abuse angle. This leads me to think: maybe the tool Apple is developing is actually so good at doing its job that it’s being considered dangerous rather than ineffective. And solving people’s problems is never Apple’s specialty.

Try this… (adding @macOS Lynx)
 
The algorithm as-is (before the rollout was delayed) is susceptible to a poisoning attack, since it doesn’t match images pixel-perfect but by features. Therefore, someone could manufacture an image that triggers the warning yet has nothing to do with child porn.

Dunno what will happen after the revision, but there’s gonna be a way to reduce the chance of a poisoning attack, or they’ll work out another training approach entirely.
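To make the "feature based" matching point concrete, here is a minimal sketch of a dHash-style perceptual hash in Python. This is not NeuralHash (Apple's system runs images through a neural network before hashing); it is only a toy showing why two images with different pixel values can still share a hash, which is exactly the property a poisoning attack would try to exploit.

```python
# Toy dHash-style perceptual hash (NOT Apple's NeuralHash). Real dHash first
# downscales a photo to a tiny grayscale grid like the 9x8 grids used here,
# then keeps only the sign of the left-to-right brightness gradient.

def dhash(pixels):
    """pixels: 8 rows of 9 grayscale values (0-255). Returns a 64-bit int."""
    bits = 0
    for row in pixels:
        for x in range(8):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

# Two images with different pixel values but identical gradient signs:
img_a = [[10 * x for x in range(9)] for _ in range(8)]   # smooth ramp
img_b = [[10 * x + (5 if (x + y) % 2 else 0)             # ramp + checkerboard
          for x in range(9)] for y in range(8)]

print(hex(dhash(img_a)), hex(dhash(img_b)))
print(dhash(img_a) == dhash(img_b))   # True: different pixels, same hash
```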
Yes, to get a file hash, you lose a lot of information. So two different pictures, with that loss of information, could potentially result in the same hash (a quick sketch of this is at the end of this post). This is also what scares me to some extent.

BUT. Someone has to know exactly which pictures are flagged, and what the algorithm and key are that produce the hash. Will someone really go this far?

ALSO. They said they won't report you right away. A human will take a look at the flagged picture first to see if it truly matches. So even if someone malicious took the effort to manufacture an image to send you to jail, it will not work.

The real problem here, however, is that Apple could have access to one of your pictures that has nothing to do with any of that. They could have access to 1 photo of yours.
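As a rough illustration of the "lost information" point made at the start of this post: truncate even a strong cryptographic hash down to a few bytes and distinct inputs collide after only a few thousand tries (the birthday/pigeonhole effect). NeuralHash loses information in a very different way, but the underlying principle is the same: an output with fewer bits than the input cannot uniquely represent every possible image. This is purely a toy and has nothing to do with the actual CSAM hash lists.

```python
# Toy collision hunt: SHA-256 truncated to 3 bytes (24 bits). By the birthday
# bound we expect a collision after roughly 2**12 ~ 4096 random inputs.
import hashlib, os

seen = {}
attempts = 0
while True:
    data = os.urandom(16)                    # stand-in for "a picture"
    h = hashlib.sha256(data).digest()[:3]    # keep only 24 bits of the hash
    attempts += 1
    if h in seen and seen[h] != data:
        print(f"collision after {attempts} inputs:",
              seen[h].hex(), "vs", data.hex())
        break
    seen[h] = data
```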
 
BUT. Someone has to know exactly which pictures are flagged, and what the algorithm and key are that produce the hash. Will someone really go this far?
Dedicated criminals will go very far, as far as building their own cellular networks for example. For others maybe not that much, but some level of precaution may also apply.
ALSO. They said they won't report you right away. A human will take a look at the flagged picture first to see if it truly matches. So even if someone malicious took the effort to manufacture an image to send you to jail, it will not work.
Human integrity here would be the key if the image does get flagged and information does get decrypted.
The real problem here, however, is that Apple could have access to one of your pictures that has nothing to do with any of that. They could have access to 1 photo of yours.
The threshold is apparently 30, meaning the number would not be 1. Probably more. But I'd argue that's lower on the list of what people are worried about with this feature.
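Apple's published claim was roughly a one-in-a-trillion chance per year of incorrectly flagging a given account at that threshold. As a back-of-the-envelope sketch of why a threshold drives the account-level risk down so fast, here is a Poisson-approximation calculation; the photos-per-year and per-image false-match rate below are illustrative assumptions, not Apple's figures.

```python
# Rough model: each uploaded photo independently false-matches with
# probability p, so the number of accidental matches in a year is
# approximately Poisson with mean lam = n * p. Both n and p are assumptions.
from math import exp

def poisson_tail(k, lam, max_terms=400):
    """P(X >= k) for X ~ Poisson(lam), summing tail terms iteratively."""
    term = exp(-lam)                 # P(X = 0)
    total = term if k == 0 else 0.0
    for i in range(1, k + max_terms):
        term *= lam / i              # P(X = i) from P(X = i - 1)
        if i >= k:
            total += term
    return total

n, p = 20_000, 1e-6                  # assumed photos/year and false-match rate
lam = n * p                          # expected accidental matches per year
for k in (1, 10, 30):
    print(f"threshold {k:>2}: P(account flagged by accident) ~ "
          f"{poisson_tail(k, lam):.3e}")
```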
 
  • Like
Reactions: PsykX
Apple is ONLY LOOKING FOR IMAGES THAT ALREADY EXIST IN A DATABASE OF KNOWN ILLEGAL CSAM. These images are not random kids taking baths.

NO, THEY'RE MATCHING HASHES OF FEATURES OF KNOWN CSAM. THIS HAS FALSE POSITIVES. YOU THINK APPLE IS OK WITH 29 ILLEGAL PICTURES, BUT 30 IS WHERE THEY DRAW THE LINE?
 
  • Like
Reactions: dk001
NO, THEY'RE MATCHING HASHES OF FEATURES OF KNOWN CSAM. THIS HAS FALSE POSITIVES. YOU THINK APPLE IS OK WITH 29 ILLEGAL PICTURES, BUT 30 IS WHERE THEY DRAW THE LINE?
Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
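For anyone wondering what "private set intersection" looks like in code, below is a toy of the classic Diffie-Hellman-style PSI construction. To be clear, this is not Apple's actual protocol (Apple's variant adds threshold secret sharing and encrypted safety vouchers, and is arranged so that the device itself never learns the match result); it only illustrates the general idea of finding overlap between two sets without either side revealing its set. All names, the prime, and the sets are illustrative.

```python
# Toy Diffie-Hellman-style private set intersection (PSI). Illustrative only;
# NOT Apple's protocol, which layers threshold secret sharing and encrypted
# "safety vouchers" on top and hides the match result from the device.
import hashlib, secrets

P = 2**127 - 1          # small Mersenne prime; a real system would use a
                        # proper cryptographic group, not this toy modulus

def to_group(item: bytes) -> int:
    """Hash an item (stand-in for an image hash) into the group."""
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P or 1

def blind(value: int, secret: int) -> int:
    return pow(value, secret, P)    # (h^a)^b == (h^b)^a lets blinding commute

server_secret = secrets.randbelow(P - 3) + 2   # database holder's secret
client_secret = secrets.randbelow(P - 3) + 2   # device's secret

server_set = [b"known-image-1", b"known-image-2", b"known-image-3"]
client_set = [b"holiday-photo", b"known-image-2", b"cat-picture"]

# Server publishes its set blinded under its own secret only.
server_blinded = [blind(to_group(x), server_secret) for x in server_set]

# Client blinds its items and sends them over; the server blinds them again
# (it cannot undo the client's blinding, so it never sees the raw items).
client_once = [blind(to_group(x), client_secret) for x in client_set]
client_twice = [blind(v, server_secret) for v in client_once]

# Comparison happens only on doubly-blinded values: matches survive the
# commuting blinds, everything else stays hidden.
server_double = {blind(v, client_secret) for v in server_blinded}
for item, v in zip(client_set, client_twice):
    print(item, "->", "match" if v in server_double else "no match")
```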

 
  • Haha
Reactions: Shirasaki and dk001
However, what a lot of people fail to realize (as I see most comments pertaining to their iPhones) is that this wasn't just coming to your iPhone; it was also coming to your iPad and Mac.
This is the very concerning part. Apple has not said they will not put spyware into our iOS and macOS devices. They have only postponed the implementation of the spyware. My feeling is this is purely due to the bad PR (and rightfully so) Apple has received over this. Once this has died down a little, they will probably just sneak it into an iOS 15 (and current macOS) update or the next whole-number version of the OSes.

This is why I am personally hesitant to get a new Apple device: I do not want to be forced to have Apple-created spyware on it. Many others share this concern. I guess we shall all wait and see how Apple will try to sneak this spyware into our devices. Hopefully people will be as vocal about calling out this spyware for what it is then as they are now.
 
  • Like
Reactions: Beelzbub and dk001
Potential hacking is my concern: what if someone has AirDrop set to receive from everyone and some nefarious individual drops a bunch of questionable photos on them? I think you still have to "accept" the AirDrop, it doesn't just show up, but... What if someone breaks into someone's iCloud account and uploads photos which get shared to the phone? I realize these are sort of out-of-the-box events, but all it takes is a clever hacker to ruin someone's life for the "lulz".
 