So…they will continue to provide any legally requested data from iCloud…like they always have since iCloud existed…nothing changes…got it!
Except that with CSAM, there is a back door on your device. Hopefully they bury it for good.
 
  • Like
Reactions: dk001
…oh…you mean the same “back door” that has been scanning your personal photos for POIs, breeds of dogs and cats, and types of flowers for years now?
Are those scans being shared with the government? Jesus, why can’t some people see past Apple’s world? This is bad.
 
Are those scans being shared with the government? Jesus, why can’t some people see past Apple’s world? This is bad.
The ones you are referring to aren't being shared with the government either.

The post I was responding to was referring to a "back door" as if Apple was adding something that did not exist before. In this case, it HAS existed for years now and I never heard anyone complain about Apple scanning images "on device" before.

Please read the multiple white papers and the FAQ from Apple if you want to see what Apple is actually doing since your statement is 100% incorrect.
 
  • Angry
Reactions: KindJamz
...
The post I was responding to was referring to a "back door" as if Apple was adding something that did not exist before. In this case, it HAS existed for years now and I never heard anyone complain about Apple scanning images "on device" before.
...
We're complaining now, and the timing of the complaint has no bearing on the merits of our concerns. And I guess you missed the part whereby, once a threshold number of 'hits' is exceeded, a human being will review your private pictures. That is new, and we have no idea what the false positives might look like - normal pictures of your kids? Your partner wearing lingerie or a swimsuit? Who knows? You won't, because Apple will not tell you which pictures were flagged and which ones were examined by a human being, and you won't know who that human being was, nor how well they were vetted not to be a perv. Using local AI on the iPhone to police users without any indication of criminal activity is new. Claiming that the new system detects illegal content but guarantees privacy (it doesn't) is new. You're focusing on the engineering, which is what Apple has done, without taking into account the larger social and legal context.

Apple's proposed CSAM system is invasive; it is potentially open to abuse either directly or by copying it; and in the end it will merely trigger a never-ending computational arms race with those trying to hide illegal images. And those white papers Apple published are a blueprint for authoritarian regimes to copy the system and abuse it. This was a colossal blunder by Apple.

We will be the last generation to be able to put constraints on machine information-processing algorithms. We need to establish a code of ethics regarding their use. I suggest that using them to spy on people in the absence of evidence of any criminal activity whatsoever is wholly unethical, and this cr*p about the Apple system being optional ignores that it disables useful features if you opt out.
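
For concreteness about the mechanism under debate: stripped of the cryptography Apple describes (private set intersection, safety vouchers), the publicly described flow reduces to per-account threshold gating. A minimal Python sketch of that idea follows; the threshold value and all names are illustrative, not Apple's.

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 30  # widely reported figure; Apple has not confirmed the exact value

@dataclass
class Account:
    matched: list = field(default_factory=list)

def human_review(account: Account) -> None:
    # hypothetical hook: a reviewer would see low-resolution derivatives only
    print(f"review triggered after {len(account.matched)} matches")

def on_icloud_upload(account: Account, image_hash: int, known_hashes: set) -> None:
    """Toy flow: count per-account matches against the known-image hash list and
    trigger human review once the threshold is crossed. In the published design,
    individual matches stay cryptographically hidden until that point."""
    if image_hash in known_hashes:
        account.matched.append(image_hash)
        if len(account.matched) == REVIEW_THRESHOLD:
            human_review(account)

# usage: pretend every upload matches, so review fires on the 30th
acct = Account()
for h in range(40):
    on_icloud_upload(acct, h, known_hashes=set(range(40)))
```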
 
That is new, and we have no idea what the false positives might look like - normal pictures of your kids? Your partner wearing lingerie or a swimsuit? Who knows? You won't, because Apple will not tell you which pictures were flagged and which ones were examined by a human being, and you won't know who that human being was, nor how well they were vetted not to be a perv.
This is the part that you (and many others) either do not understand or do not want to believe, despite the evidence given to you. Or my favorite part: people not believing Apple's detailed explanation and instead linking to some online know-it-all showing how a perceptual hash of a cat image can be read as a dog. Not the same thing or even close to what Apple is doing, despite people trying to dumb it down or compare it to that.

Trillion-to-one odds are good enough for me as the chance that I would have multiple known CSAM images on my phone that are also uploaded to iCloud and then reviewed (as low res, btw) by someone at Apple to confirm or deny that they are indeed matching images from that limited database.

...trillion...to...one....

Again, you don't have to believe it (or understand how it works), but that's on you, not me.
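
For what a number like that would mean in practice, here is a back-of-the-envelope sketch of the account-level false-positive probability, under the crucial assumption that every image false-matches independently. Both inputs are illustrative guesses: Apple has not published its per-image false-match rate, and the ~30-match threshold is only a reported figure.

```python
from math import exp, factorial

def p_account_flagged(n_images: int, p_match: float, threshold: int) -> float:
    """P(at least `threshold` false matches) for an account holding n_images
    photos, each false-matching independently with probability p_match.
    Uses a Poisson approximation to the binomial tail; the terms shrink so
    fast that summing 60 of them is plenty."""
    lam = n_images * p_match
    return sum(exp(-lam) * lam**k / factorial(k)
               for k in range(threshold, threshold + 60))

# Illustrative inputs: 10,000 photos, a one-in-a-million per-image
# false-match rate, and the reported ~30-match review threshold.
print(p_account_flagged(10_000, 1e-6, 30))  # ~4e-93, far below one in a trillion
```

Under independence the result comes out absurdly safe; the dispute in the replies below is precisely whether that independence assumption holds.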
 
The ones you are referring to aren't being shared with the government either.

The post I was responding to was referring to a "back door" as if Apple was adding something that did not exist before. In this case, it HAS existed for years now and I never heard anyone complain about Apple scanning images "on device" before.

Please read the multiple white papers and the FAQ from Apple if you want to see what Apple is actually doing since your statement is 100% incorrect.
If the ones we are talking about aren’t being shared, why scan?
 
This is the part that you (and many others) either do not understand or do not want to believe, despite the evidence given to you. Or my favorite part: people not believing Apple's detailed explanation and instead linking to some online know-it-all showing how a perceptual hash of a cat image can be read as a dog. Not the same thing or even close to what Apple is doing, despite people trying to dumb it down or compare it to that.

Trillion-to-one odds are good enough for me as the chance that I would have multiple known CSAM images on my phone that are also uploaded to iCloud and then reviewed (as low res, btw) by someone at Apple to confirm or deny that they are indeed matching images from that limited database.

...trillion...to...one....

Again, you don't have to believe it (or understand how it works), but that's on you, not me.
I am a neuroscientist who has studied biological information processing for decades. Consequently I have been following the research literature in machine perception and machine learning since the early 1980s because it is relevant to my research. I have read the Apple literature. I know very well the outline of how the proposed system works, although Apple, of necessity, is keeping many details hidden. Frankly I am getting really tired of those blithely accepting Apple's proposed system accusing me of not educating myself.

Let me be clear: I believe Apple's description. However, their system can be copied or subverted, the latter prevented only by the promises of Apple's current management. The system is prone to an unknown rate of false positives of unknown characteristics. I don't trust Apple's estimate of the false positive rate because I suspect it doesn't take into account that people often take a series of pictures that are very similar and that the statistics of natural images are constrained - Apple might be overestimating the variety in pictures that people take. The calculations for this estimate have not been made public so far as I can tell, so I cannot say for certain. The one thing that is certain is that the only way to know for sure is to actually try the system on customers, in a massive uncontrolled experiment. Moreover, the proposed system entails the possibility of a human being looking at pictures without the user's knowledge or specific consent. And ultimately it will be ineffective, as pedophiles learn to evade the system by modifying images.
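
That near-duplicate objection is easy to make concrete. The toy comparison below takes the same expected number of false matches arriving independently versus in all-or-nothing bursts of look-alike shots (a worst-case model of correlation); the burst size, threshold, and deliberately inflated per-image rate are all hypothetical.

```python
from math import exp, factorial

def poisson_tail(lam: float, k_min: int, terms: int = 60) -> float:
    """P(X >= k_min) for X ~ Poisson(lam), summing the first `terms` tail terms."""
    return sum(exp(-lam) * lam**k / factorial(k)
               for k in range(k_min, k_min + terms))

N, P, THRESHOLD, BURST = 10_000, 1e-3, 30, 5  # toy inputs, rate inflated for effect

# Every image independent: lambda = 10 expected matches.
print(poisson_tail(N * P, THRESHOLD))                      # ~2.5e-07

# Bursts of 5 look-alikes that false-match as a block: now only
# ceil(30 / 5) = 6 matching bursts are needed to cross the threshold.
print(poisson_tail((N // BURST) * P, THRESHOLD // BURST))  # ~1.7e-02
```

Same average number of matches, yet the flag probability jumps by roughly five orders of magnitude, which is the sense in which an independence-based estimate can mislead.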

Did you read the link I posted above about the successful attempts by computer science researchers to bypass detection in systems like the one Apple proposes? It is likely that pedophiles would use such tactics, strategically changing some aspects of a picture to escape detection. Did you read the conclusion of the researchers, which states that Apple would have to relax its criteria to detect selectively modified images to the point they would trigger >1 billion false positives a day? Here's the link again: https://forums.macrumors.com/thread...esearchers-in-new-study.2317024/post-30610726
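
The evasion point generalizes beyond any one study, and even a toy perceptual hash shows the tension. The dHash below (not Apple's NeuralHash, which is a learned model, but the same family of idea) shrugs off a benign brightness edit yet loses bits to a small geometric shift, and an adversary only has to find edits that land on the far side of whatever match cutoff is used. Everything here is illustrative; it requires Pillow.

```python
from PIL import Image  # pip install Pillow

def dhash(img: Image.Image, size: int = 8) -> int:
    """Difference hash: one bit per horizontally adjacent brightness
    comparison on a tiny grayscale thumbnail."""
    g = img.convert("L").resize((size + 1, size), Image.LANCZOS)
    bits = 0
    for y in range(size):
        for x in range(size):
            bits = (bits << 1) | (g.getpixel((x, y)) > g.getpixel((x + 1, y)))
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Synthetic test image so the example is self-contained.
img = Image.new("L", (256, 256))
img.putdata([(x * y) % 256 for y in range(256) for x in range(256)])

brighter = img.point(lambda v: min(255, v + 8))           # benign edit
shifted = img.crop((3, 0, 256, 253)).resize((256, 256))   # small geometric shift

print(hamming(dhash(img), dhash(brighter)))  # near zero: the hash absorbs it
print(hamming(dhash(img), dhash(shifted)))   # more bits flip: evasion territory
```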
 
I am a neuroscientist who has studied biological information processing for decades. Consequently I have been following the research literature in machine perception and machine learning since the early 1980s because it is relevant to my research. I have read the Apple literature. I know very well the outline of how the proposed system works, although Apple, of necessity, is keeping many details hidden. Frankly I am getting really tired of those blithely accepting Apple's proposed system accusing me of not educating myself.

Let me be clear: I believe Apple's description. However, their system can be copied or subverted, the latter prevented only by the promises of Apple's current management. The system is prone to an unknown rate of false positives of unknown characteristics. I don't trust Apple's estimate of the false positive rate because I suspect it doesn't take into account that people often take a series of pictures that are very similar and that the statistics of natural images are constrained - Apple might be overestimating the variety in pictures that people take. The calculations for this estimate have not been made public so far as I can tell, so I cannot say for certain. The one thing that is certain is that the only way to know for sure is to actually try the system on customers, in a massive uncontrolled experiment. Moreover, the proposed system entails the possibility of a human being looking at pictures without the user's knowledge or specific consent. And ultimately it will be ineffective, as pedophiles learn to evade the system by modifying images.

Did you read the link I posted above about the successful attempts by computer science researchers to bypass detection in systems like the one Apple proposes? It is likely that pedophiles would use such tactics, strategically changing some aspects of a picture to escape detection. Did you read the conclusion of the researchers, which states that Apple would have to relax its criteria to detect selectively modified images to the point they would trigger >1 billion false positives a day? Here's the link again: https://forums.macrumors.com/thread...esearchers-in-new-study.2317024/post-30610726
An article from Forbes that can be read in 30 seconds? C'mon...

Your second paragraph is clear; you don't believe them. I said that's fine.

And in your first paragraph, "Apple, of necessity, is keeping many details hidden." No one really knows how their system is different other than what they have mentioned (which in itself has shown that it IS different than existing hash programs). So how can it be fairly evaluated/compared at this point?

And I think that one of the largest companies in the world, whose whole business is based on hardware and accompanying software, might take some of your concerns about false positives into consideration, no? Do you really think they will turn on a system that hasn't already been tested (not just in theory) and will be surprised by billions of false positives in the first day? Again...C'MON!!

I think we all know that Apple does not need our permission to turn this on and I believe they have already done so, even if it is limited to their own employees or some other small user group at this point. Criminals already know how to not get caught with this system: don't use iCloud. But if there is one thing in life that I am certain of, and that has been proven over and over again, it's that 99.9% of criminals are stupid and always get caught.
 
...
Your second paragraph is clear; you don't believe them. I said that's fine.

I mean I do not believe they are lying. I believe they are mistaken.

And in your first paragraph, "Apple, of necessity, is keeping many details hidden." No one really knows how their system is different other than what they have mentioned (which in itself has shown that it IS different than existing hash programs). So how can it be fairly evaluated/compared at this point?

Absolutely agree. They have to keep the details hidden due to the aforementioned computational arms race with people trying to evade detection. I completely understand that. However, this fact hardly inspires confidence. Why should I give carte blanche to a spying system Apple can never fully describe?

And I think that one of the largest companies in the world, whose whole business is based on hardware and accompanying software, might take some of your concerns about false positives into consideration, no? Do you really think they will turn on a system that hasn't already been tested (not just in theory) and will be surprised by billions of false positives in the first day? Again...C'MON!!

Well, judging from the uproar it seems pretty clear to me they didn't think this through adequately. I've worked in large research organisations. Being large and having technical expertise is not the same thing as having wisdom. I am not anti-Apple (I have used Apple products at work and in my home since the Lisa). I just think they got this seriously wrong.

I think we all know that Apple does not need our permission to turn this on and I believe they have already done so, even if it is limited to their own employees or some other small user group at this point. Criminals already know how to not get caught with this system: don't use iCloud. But if there is one thing in life that I am certain of, and that has been proven over and over again, it's that 99.9% of criminals are stupid and always get caught.

Agreed about the utter stupidity of some criminals. However, if the CSAM system can be easily circumvented then why on earth foist it on people? They will be scanning the photos of people the vast majority of whom are innocent of any crime, creating false positives for no good reason and as a consequence, at some as yet unknown rate, triggering a human review process that violates privacy. Apple can do whatever they want. I am just expressing my dissatisfaction as a customer about what they propose to do. Apple should leave law enforcement to, well, law enforcement.
 
It is the tool that is the issue, as it can be used in so many different ways, all without the consent or knowledge of the user, and without a clear understanding of the potential misuse or the potential legal impacts.

This needs a lot more discussion from all sides before something like this is even considered much less tried.
I hope this fades into nothingness.
 
Wow. So many answers to a question about a feature Google and Amazon have already had for over 10 years.

The short answer: No. You shouldn’t.
 
Can you explain your comment a bit? Neither Google nor Amazon has implemented client-side scanning on device, and their cloud scanning happens on sharing.
Also, don't forget that nothing would be scanned in this proposed system if the user chooses not to upload their photos to iCloud, just as Google or Amazon can't scan anything that isn't uploaded.
 
Also, don't forget that nothing would be scanned in this proposed system if the user chooses not to upload their photos to iCloud, just as Google or Amazon can't scan anything that isn't uploaded.

Thanks for your "reminder"? However, that is not what I was asking.
Appreciate the response however.
 
Also, don't forget that nothing would be scanned in this proposed system if the user chooses not to upload their photos to iCloud, just as Google or Amazon can't scan anything that isn't uploaded.
Understood, but opting out reduces the functionality of iCloud, so it is a choice between two bad options. As a customer I am not happy with that.
 
  • Like
Reactions: KindJamz and dk001
Understood, but opting out reduces the functionality of iCloud, so it is a choice between two bad options. As a customer I am not happy with that.
So the other option would be to scan the cloud and then you’d be in the same boat.
 
  • Haha
Reactions: dk001
So the other option would be to scan the cloud and then you’d be in the same boat.
Actually I find that more acceptable. My iPhone is my property. Apple's servers are theirs. I doubt Apple would accept me scanning their servers to look for illegal content, just in case I might find something.

More importantly, Apple's scheme leverages local processing on mobile devices compatible with AI (in the broadest sense). The capability of local AI is only going to increase. It could reach a stage at which your phone refuses to record certain events or scenes, based on the whims of an authoritarian government. Alternatively, the camera and microphone might be continuously scanned by local AI to report you whenever you express thought crime, again at the whim of authoritarian regimes. Democracy has been under attack for some time now - do we really want to add more surveillance capabilities to authoritarian governments? I guarantee you that dictators around the world are having their government experts scan Apple's white papers about the CSAM system and asking whether the system can be replicated and modified to detect anti-government messages, flags, posters, memes, speech, etc. Apple appears to have looked only one move ahead. We need to be looking many moves ahead.
 
Actually I find that more acceptable. My iPhone is my property. Apple's servers are theirs. I doubt Apple would accept me scanning their servers to look for illegal content, just in case I might find something.

More importantly, Apple's scheme leverages local processing on mobile devices compatible with AI (in the broadest sense). The capability of local AI is only going to increase. It could reach a stage at which your phone refuses to record certain events or scenes, based on the whims of an authoritarian government. Alternatively, the camera and microphone might be continuously scanned by local AI to report you whenever you express thought crime, again at the whim of authoritarian regimes. Democracy has been under attack for some time now - do we really want to add more surveillance capabilities to authoritarian governments? I guarantee you that dictators around the world are having their government experts scan Apple's white papers about the CSAM system and asking whether the system can be replicated and modified to detect anti-government messages, flags, posters, memes, speech, etc. Apple appears to have looked only one move ahead. We need to be looking many moves ahead.

I’m sorry, but at what point in the current process does any government write the code or have access to the creation of iOS? And if they did, since this tech already exists and is on your phone, again, where was your concern when Apple added it nearly 5 years ago?

The government has NOTHING to do with this. Apple is doing this on their own.

To be fair, your assertion that AI will report my “thought crimes” at the whim of authoritarian regimes because of this started my day off with a laugh.

Btw…Apple only put a camera and microphone on your phone so governments could watch and listen to you.

Whoops!! The chip planted in me when I got my COVID vaccine is telling me it’s time to get up and go to work.
 
I’m sorry, but at what point in the current process does any government write the code or have access to the creation of iOS? And if they did, since this tech already exists and is on your phone, again, where was your concern when Apple added it nearly 5 years ago?

The government has NOTHING to do with this. Apple is doing this on their own.

To be fair, your assertion that AI will report my “thought crimes” at the whim of authoritarian regimes because of this started my day off with a laugh.

Btw…Apple only put a camera and microphone on your phone so governments could watch and listen to you.

Whoops!! The chip planted in me when I got my COVID vaccine is telling me it’s time to get up and go to work.

While the Government may not write the code, a couple of governments (including the US) have already expressed interest in how they can leverage this tool to look for things other than CSAM.

Apple wrote it; however, they have to follow the law, and if that can be used to leverage this….

Several steps.
 
Comparing actually concerning developments with wearing tin-foil hats is as inappropriate and wrong as it gets.
Nope..."thought crimes" and "tin foil hats" (which I never mentioned) actually go hand in hand in this case.

I have always stated, while I may not agree, that I respect a certain level of concern with this tech being used on device. But the level of conspiracy theory that it has opened up on threads like this gets ridiculous at times. That response was a perfect example of that and I'll call it out every time.

A concern about governments changing laws to take advantage of tech like this is closer to actual reality (as @dk001 just stated above), but again, the tech is already on there in another form looking for other images. Where was the concern years ago about governments forcing Apple to change what it is looking for?

I actually believe that if a government wants to see something on my phone, there are WAY easier ways to do it than going through Apple...and I think they (or any high-level hacker) can and have done that already (not to me probably...hah). Worry about reality...not AI reading my mind...I swear...
 