Whatever they do in their cloud is their business. Whatever happens on my phone is mine.
I don't have a problem if they scan the photos in the cloud, on their own servers, on their own time, using their own resources. I just don't see why my hardware and my battery should be hijacked to do extra work for them. After all, CSAM detection does nothing for me; it only benefits them. So they should do their own work and leave me and my device alone.

This is akin to installing speed detection devices in all cars, which then report speeding offences directly to the police. It's wrong. If the police want to catch me speeding, then let them use their own speed traps and do their own work. It's not my job to make their task easier.
 
Definitely, you don't want to get caught with kiddie porn; everyone doing that should definitely leave. (Well, truthfully, they should stop, turn themselves in, and get counseling.)
What if I have never owned kiddie porn, but Apple's faulty AI feeds it to Apple somehow, and I get raided, lose my job, and have my life ruined? Such things probably have happened and will happen.
What if my girlfriend is a midget? So many what-if questions.
 
What if I have never owned kiddie porn, but Apple's faulty AI feeds it to Apple somehow, and I get raided, lose my job, and have my life ruined? Such things probably have happened and will happen.
What if my girlfriend is a midget? So many what-if questions.

I do not think you understand how Apple’s proposed system would function. For it to be triggered, several things need to happen:

1) A CSAM photo or video needs to be pre-marked as such and entered into the database that Apple would then verify the hashes against. So unless your supposed midget girlfriend’s photo was pre-marked as CSAM material, you are fine. The same goes for you hugging or kissing any child. Apple’s system does not evaluate your material; it only compares it against the existing database of identified CSAM.

2) You need to have at least 30 matched images/videos to trigger a human review at Apple, where a reviewer will look at the flagged material and make a decision.

3) Only if the previous two conditions are met will the authorities be notified.
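For what it's worth, here is a minimal Swift sketch of the threshold logic described above. It is purely illustrative: the hash type, the placeholder hash values, and the function names are invented for this example, and it deliberately ignores the cryptography (NeuralHash, safety vouchers, threshold secret sharing) in Apple's actual design. The only point is that matching is done against a fixed database of known hashes, and nothing is escalated below the 30-match threshold.

```swift
import Foundation

// Rough sketch of the threshold logic, NOT Apple's implementation.
// Hash type, database contents, and function names are invented.
typealias PerceptualHash = String

// Hashes of known, previously identified CSAM supplied by child-safety
// organizations (placeholder values).
let knownCSAMHashes: Set<PerceptualHash> = ["hashA", "hashB", "hashC"]

// Apple's stated threshold before any human review can occur.
let reviewThreshold = 30

// Count how many uploaded photos match the known-hash database.
// Content is never "evaluated"; only set membership is checked.
func matchCount(of uploadedHashes: [PerceptualHash]) -> Int {
    uploadedHashes.filter { knownCSAMHashes.contains($0) }.count
}

// Human review (and only then any report to the authorities) becomes
// possible solely when the match count reaches the threshold.
func shouldEscalateToHumanReview(_ uploadedHashes: [PerceptualHash]) -> Bool {
    matchCount(of: uploadedHashes) >= reviewThreshold
}

// Example: an ordinary photo library produces zero matches, so it can
// never reach the threshold, regardless of what the photos depict.
let ordinaryLibrary = (1...1_000).map { "userPhotoHash\($0)" }
print(shouldEscalateToHumanReview(ordinaryLibrary)) // false
```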

I am not defending Apple here, as I think that if they really feel strongly about it, they should have at least mentioned it during WWDC, instead of trying to sneak it in quietly, similar to what they did with the performance throttling of older iPhones a few years ago. They lacked transparency, and that is not on, IMO. However, if somebody were really going after you with the intent to ruin your reputation, this is quite a complex way to accomplish it, as they would need to fool both the AI (machines) and the humans reviewing the AI’s findings.

Have you watched Craig’s interview with the WSJ? It’s quite informative on the topic:

 
I do not think you understand how Apple’s proposed system would function. For it to be triggered, several things need to happen:

1) A CSAM photo or video needs to be pre-marked as such and entered into the database that Apple would then verify the hashes against. So unless your supposed midget girlfriend’s photo was pre-marked as CSAM material, you are fine. The same goes for you hugging or kissing any child. Apple’s system does not evaluate your material; it only compares it against the existing database of identified CSAM.

2) You need to have at least 30 matched images/videos to trigger a human review at Apple, where a reviewer will look at the flagged material and make a decision.

3) Only if the previous two conditions are met will the authorities be notified.

I am not defending Apple here, as I think that if they really feel strongly about it, they should have at least mentioned it during WWDC, instead of trying to sneak it in quietly, similar to what they did with the performance throttling of older iPhones a few years ago. They lacked transparency, and that is not on, IMO. However, if somebody were really going after you with the intent to ruin your reputation, this is quite a complex way to accomplish it, as they would need to fool both the AI (machines) and the humans reviewing the AI’s findings.

Have you watched Craig’s interview with the WSJ? It’s quite informative on the topic:

People keep thinking the problem is we don't understand the process. Trust me, we do. We just don't believe Apple is telling us the whole story.
 
People keep thinking the problem is we don't understand the process. Trust me, we do. We just don't believe Apple is telling us the whole story.
That guy clearly did not understand it if he was worried about pictures of his hypothetical midget girlfriend triggering a raid.
 
People keep thinking the problem is we don't understand the process. Trust me, we do. We just don't believe Apple is telling us the whole story.

OK, but then we are entering a much broader field of mistrust. Until now, has Apple given you a good reason not to trust them? And, if so, why are you still using their products?
 
OK, but then we are entering a much broader field of mistrust. Until now, has Apple given you a good reason not to trust them? And, if so, why are you still using their products?
Not really... but when you look back, the signs were there. I literally chose them because they offered the best privacy and would never do something like install scanning software on my iPhone/iPad/MacBook without my permission.
 
That guy clearly did not understand it if he was worried about pictures of his hypothetical midget girlfriend triggering a raid.
The thing is that as ridiculous as that may sound, it is all completely unnecessary. Don't scan my device, no worries.
 
I do not think you understand how Apple’s proposed system would function. For it to be triggered, several things need to happen:

1) A CSAM photo or video needs to be pre-marked as such and entered into the database that Apple would then verify the hashes against. So unless your supposed midget girlfriend’s photo was pre-marked as CSAM material, you are fine. The same goes for you hugging or kissing any child. Apple’s system does not evaluate your material; it only compares it against the existing database of identified CSAM.

2) You need to have at least 30 matched images/videos to trigger a human review at Apple, where a reviewer will look at the flagged material and make a decision.

3) Only if the previous two conditions are met will the authorities be notified.

I am not defending Apple here, as I think that if they really feel strongly about it, they should have at least mentioned it during WWDC, instead of trying to sneak it in quietly, similar to what they did with the performance throttling of older iPhones a few years ago. They lacked transparency, and that is not on, IMO. However, if somebody were really going after you with the intent to ruin your reputation, this is quite a complex way to accomplish it, as they would need to fool both the AI (machines) and the humans reviewing the AI’s findings.

Have you watched Craig’s interview with the WSJ? It’s quite informative on the topic:


You, like so many others, are focused on “CSAM” and not looking at the potential of the tool Apple is introducing, nor at the impact such a tool could have if misused.

Sadly, Fed’s “explanation” has a lot of gaps and gloss.
 
Is it that you don’t trust them? Who is the “we”? You don’t represent me.

Thank goodness for that. You should be able to make your own decisions.
A lot of users, privacy experts, security experts, academics, legal experts, surprisingly the ACLU, and even some of the NCMEC board members are concerned.

This change needs a whole lot more review and discussion, including experts from outside the Apple universe. Hopefully that is what Apple is going to do.
 
You, like so many others, are focused on “CSAM” and not looking at the potential of the tool Apple is introducing, nor at the impact such a tool could have if misused.

Sadly, Fed’s “explanation” has a lot of gaps and gloss.

I get it, but why so much mistrust? I mean, a simple hammer or a pair of scissors can cause a lot of harm if misused. Craig’s interview indeed sounded somewhat apologetic to me, and the journalist could also have asked him some more interesting questions, starting with why Apple could not scan the iCloud contents to begin with, instead of choosing to do it on the devices themselves. It would still serve their intended purpose, but would probably meet less resistance. 🤷🏻‍♂️
 
I get it, but why so much mistrust? I mean, a simple hammer or a pair of scissors can cause a lot of harm if misused. Craig’s interview indeed sounded somewhat apologetic to me, and the journalist could also have asked him some more interesting questions, starting with why Apple could not scan the iCloud contents to begin with, instead of choosing to do it on the devices themselves. It would still serve their intended purpose, but would probably meet less resistance. 🤷🏻‍♂️
Paranoia
 
Stratechery:


It highlights the difference between trust in technology and trust in policy.

He seems to think it's bad to trust policy because it's easier to change. Well, it isn't difficult to change code either.

Personally, I came to the conclusion several years ago that you can't have good privacy without good policy. Trusting technology alone won't work. Therefore I only choose companies that I trust and that have a good policy.
 
You, like so many others, are focused on “CSAM” and not looking at the potential of the tool Apple is introducing, nor at the impact such a tool could have if misused.

The CSAM Detection system is an inefficient tool for powerful governments to misuse. There are many other technologies in the iPhone already which are much better suited for surveillance of the population or for finding people with unpopular opinions.

If I were living in a country where I considered the government my enemy, the CSAM Detection system would be quite low on my list of worries.

Also, I live in a country without an oppressive government. In fact, I trust all three branches of government for the most part, including the police. I trust my phone company, my bank, my insurance company, my neighbours and even most strangers.
 
I get it, but why so much mistrust? I mean, a simple hammer or a pair of scissors can cause a lot of harm if misused. Craig’s interview indeed sounded somewhat apologetic to me, and the journalist could also have asked him some more interesting questions, starting with why Apple could not scan the iCloud contents to begin with, instead of choosing to do it on the devices themselves. It would still serve their intended purpose, but would probably meet less resistance. 🤷🏻‍♂️

It isn’t so much CSAM, rather the initiation of on-device scanning. CSAM seems to be the rollout flavor chosen to garner public acceptance? This tool, with very small changes unknown to the device user, could be used to scan for a whole lot more. An on-device surveillance state, with Apple as the leader? That is a far, far cry from what they appeared to be until now. Trust is broken easily and is very hard to rebuild.

Despite the rhetoric, the on-device scan that Apple is attempting does little to clean up the iCloud CSAM issue. Kind of like filtering toilet water: all the previous crap is still there, untouched.
 
The CSAM Detection system is an inefficient tool for powerful governments to misuse. There are many other technologies in the iPhone already which are much better suited for surveillance of the population or for finding people with unpopular opinions.

If I were living in a country where I considered the government my enemy, the CSAM Detection system would be quite low on my list of worries.

Also, I live in a country without an oppressive government. In fact, I trust all three branches of government for the most part, including the police. I trust my phone company, my bank, my insurance company, my neighbours and even most strangers.

Can you name a few?
Cool that you can. Sadly, in mine that trust has been seriously eroded over the last couple of decades.
 
It isn’t so much CSAM, rather the initiation of on-device scanning. CSAM seems to be the rollout flavor chosen to garner public acceptance? This tool, with very small changes unknown to the device user, could be used to scan for a whole lot more. An on-device surveillance state, with Apple as the leader? That is a far, far cry from what they appeared to be until now. Trust is broken easily and is very hard to rebuild.

Despite the rhetoric, the on-device scan that Apple is attempting does little to clean up the iCloud CSAM issue. Kind of like filtering toilet water: all the previous crap is still there, untouched.
On-device scanning has existed since the beginning. Your email, your text messages, the contents of your files, faces in photos… that’s how searching for things on the phone works: everything you might want to find is scanned, indexed, and counted. So I’m not sure how you would have a useful computer if it didn’t scan and index your information.
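As a toy illustration of what "scanned, indexed, and counted" means here, a minimal inverted index in Swift might look like the sketch below. The names and structure are invented for this example; Spotlight's real indexing is far more sophisticated.

```swift
import Foundation

// Toy inverted index: maps each word to the IDs of the documents
// (emails, messages, notes, photo captions) that contain it.
// Purely illustrative; not how Spotlight actually works.
struct SearchIndex {
    private var postings: [String: Set<Int>] = [:]

    // "Scan" a document: split it into words and record where each occurs.
    mutating func add(documentID: Int, text: String) {
        let words = text.lowercased()
            .components(separatedBy: CharacterSet.alphanumerics.inverted)
            .filter { !$0.isEmpty }
        for word in words {
            postings[word, default: []].insert(documentID)
        }
    }

    // Answer a search by looking the word up in the index.
    func search(_ word: String) -> Set<Int> {
        postings[word.lowercased()] ?? []
    }
}

var index = SearchIndex()
index.add(documentID: 1, text: "Lunch with Anna on Friday")
index.add(documentID: 2, text: "Flight confirmation for Friday")
print(index.search("friday").sorted()) // [1, 2]
```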
 