I guess my main point is that the “AFAIK” is doing a lot of work in your point above.
I’m glad folks are asking the question and not simply accepting everything as it comes, but as more comes to light about how the system works, I still feel comfortable with its described design. Correct me if I have misunderstood, but the list of specific examples comes from a logical AND of the examples held by two responsible organizations, and the hashes of those examples are baked into the OS release. There is also a significant threshold of matched images that needs to be reached before reports are pushed out to Apple’s human review team.

So as described, it’s not a system where some government can simply have Apple insert images, or that uses generalized ML to identify “politically unacceptable” content, or OCR to identify and report photo contents. Apple would have to build something specific to the requesting government’s specs, which leaves us where we are right now as far as governments being able to push for on-phone scanning. In other words, to make this work as anything other than its described intention, it would have to be totally rewritten.
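For concreteness, here is a rough sketch of that flow as I understand it from Apple’s description. The names, hash values, and exact threshold are placeholders of mine, not Apple’s actual NeuralHash/PSI implementation:

```swift
// Sketch only: a hash list built from the logical AND (intersection) of two
// organizations' lists, baked in at init time, and a threshold of matches
// that must be crossed before anything is escalated for human review.
struct CSAMCheckerSketch {
    let bakedInHashes: Set<String>   // intersection of the two source lists, shipped with the OS
    let reviewThreshold: Int         // e.g. ~30 matches before human review

    init(listA: Set<String>, listB: Set<String>, reviewThreshold: Int) {
        // Only examples present in BOTH organizations' lists are used.
        self.bakedInHashes = listA.intersection(listB)
        self.reviewThreshold = reviewThreshold
    }

    /// Returns true only when enough uploaded photo hashes match to trigger review.
    func shouldEscalateForHumanReview(uploadedPhotoHashes: [String]) -> Bool {
        let matches = uploadedPhotoHashes.filter { bakedInHashes.contains($0) }.count
        return matches >= reviewThreshold
    }
}

// Example usage with made-up hash strings:
let checker = CSAMCheckerSketch(listA: ["h1", "h2", "h3"],
                                listB: ["h2", "h3", "h4"],
                                reviewThreshold: 30)
print(checker.shouldEscalateForHumanReview(uploadedPhotoHashes: ["h2", "h3"]))  // false: below threshold
```

The point of the sketch: a single government’s list can’t get in on its own, and a handful of matches never leaves the device.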
But I am listening to folks like you who are concerned. Without the legal requirement being in place yet, it does feel a bit overly aggressive and preemptive, so I appreciate this conversation.
Re: why is Apple doing this, and why be mum about it… two guesses:
1. Why do it?: The European Union is heading toward requiring scanning (but yeah, why put it out before being required? Testing? Gauging the public’s reaction?)
2. Why not respond yet?: PR? It needs to be bandied about in public so they don’t look immediately defensive; they can hear the specific complaints and then hopefully address them.
So, what have I missed or misunderstood?
That is part of the problem. Many of us trusted Apple when it came to privacy - like they claim in their statements and billboards. Then they announce three new features, one of which is the CSAM checker, followed by instructions on how to get around it. Huh?
As for the number of matches: according to Apple, the threshold is there to lower the chance of false positives. To me, needing 30 matches weighs toward a lack of per-image accuracy. Definitely not risk averse.
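Just to put numbers on what a threshold does (all figures below are my own assumptions, not Apple’s published rates): if each photo independently has some small chance of a false hash match, the threshold trades per-image accuracy for account-level certainty, and the odds of crossing 30 accidental matches drop off fast.

```swift
import Foundation

/// Probability of at least `threshold` false matches out of `photoCount`
/// photos, assuming an independent per-image false-positive rate `p`
/// (binomial upper tail, summed in log space to avoid overflow).
func accountFalsePositiveProbability(photoCount: Int, perImageRate p: Double, threshold: Int) -> Double {
    func logChoose(_ n: Int, _ k: Int) -> Double {
        return lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
    }
    var total = 0.0
    for k in threshold...photoCount {
        let logTerm = logChoose(photoCount, k) + Double(k) * log(p) + Double(photoCount - k) * log(1 - p)
        total += exp(logTerm)
    }
    return total
}

// Illustrative numbers only: 100,000 photos, a 1-in-a-million per-image rate, threshold of 30.
print(accountFalsePositiveProbability(photoCount: 100_000, perImageRate: 1e-6, threshold: 30))
```

The catch, of course, is that nobody outside Apple can verify the per-image rate, which is exactly the unknown being argued about.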
As for inserting images, I suspect they can. Surreptitiously? Likely not. Just taking it over? Maybe. Likely. Especially other nation states. Can it search even with iCloud Photos off? Likely. Can it search for other things? Likely. How often does the database get updated? There are just too many unknowns, and Apple isn’t talking. The biggie is: why on device? That part makes little sense from both a privacy perspective and from Apple’s stated goal of cleaning up CSAM in iCloud. Highly inefficient.
1. If the EU were pushing for this, why not start it in the EU? They have, by their own admission, the biggest chunk of CSAM globally.
2. That is what many of us are hoping: that they engage in open, honest discussion. It isn’t the scanning, it is the tool. In a couple of interviews, even some NCMEC board members have voiced concerns, especially with regard to privacy. The legality is wide open. Unchallenged. PR? Maybe. Maybe it was all a PR stunt that went wrong… We don’t know.
In the absence of answers, do a worst-case risk assessment and try to get Apple to the table to discuss / do another evaluation. That is my hope.
I think you got a good chunk of it; however, based on what I have learned, I suspect this process is more fragile and has more false positives than is “safe”. There is a reason MS keeps PhotoDNA out of public view.
Now it is a wait and see. One thing for me: I have learned that Apple is no better than Google, MS, and others when it comes to my privacy.