
Vlad Soare

macrumors 6502a
Mar 23, 2019
675
652
Bucharest, Romania
Whatever they do in their cloud is their business. Whatever happens on my phone is mine.
I don't have a problem if they scan the photos in the cloud, on their own servers, on their own time, using their own resources. I just don't see why my hardware and my battery should be hijacked to do extra work for them. After all, CSAM scanning does nothing for me; it only does something for them. So they should do their own work and leave me and my device alone.

This is akin to installing speed detection devices in all cars, which then report speeding offences directly to the police. It's wrong. If the police want to catch me speeding, then let them use their own speed traps and do their own work. It's not my job to make their task easier.
 

iHorseHead

macrumors 68000
Jan 1, 2021
1,602
2,010
Definitely, you don't want to get caught with kiddie porn; everyone doing that should definitely leave. (Well, truthfully, they should stop doing kiddie porn, turn themselves in, and get counseling.)
What if I have never owned kiddie porn, but Apple's faulty AI somehow flags my photos and reports them to Apple, and I get raided, lose my job, and have my life ruined? Such things probably have happened and will happen.
What if my girlfriend is a midget? So many "what if" questions.
 
Reactions: eltoslightfoot

one more

macrumors 603
Aug 6, 2015
5,159
6,577
Earth
What if I have never owned kiddie porn, but Apple's faulty AI somehow flags my photos and reports them to Apple, and I get raided, lose my job, and have my life ruined? Such things probably have happened and will happen.
What if my girlfriend is a midget? So many "what if" questions.

I do not think you understand how Apple's proposed system would function. For it to be triggered, several things need to happen:

1) A CSAM photo or video needs to be pre-marked as such and entered into the database of hashes that Apple then checks against. So unless your supposed midget girlfriend's photo was pre-marked as CSAM material, you are fine. The same goes for you hugging or kissing any child. Apple's system does not evaluate your material; it only compares it against the existing database of already-identified CSAM.

2) You need to have at least 30 matched images/videos to trigger a review by Apple, where a human will look at the flagged material and make a decision.

3) Only if the previous two conditions are met will the authorities be notified.
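
Purely to illustrate the threshold logic above, here is a rough Swift sketch. It is not Apple's real implementation (the actual system uses NeuralHash and encrypted safety vouchers, with the match count only learned server-side), and every name in it is invented for the example:

```swift
import Foundation

// Hypothetical sketch only. A real perceptual hash is far more involved,
// and in Apple's design the match counting happens on encrypted
// "safety vouchers" server-side, so the device never learns the count.
struct ThresholdMatchSketch {
    // Assumed: perceptual hashes of known, already-identified CSAM.
    let knownHashes: Set<UInt64>
    // The review threshold described above.
    let matchThreshold = 30

    // Placeholder for a perceptual hash function (not a real one).
    func perceptualHash(of photo: Data) -> UInt64 {
        UInt64(photo.count) // meaningless stand-in, illustration only
    }

    // Review is only triggered once the number of database matches
    // among the uploaded photos reaches the threshold.
    func needsHumanReview(uploadedPhotos: [Data]) -> Bool {
        let matches = uploadedPhotos.filter { knownHashes.contains(perceptualHash(of: $0)) }
        return matches.count >= matchThreshold
    }
}
```

The point of the sketch is that individual photos are never "evaluated"; a photo only counts if its hash is already in the known database, and nothing is escalated below the threshold.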

I am not defending Apple here; I think that if they really feel strongly about it, they should have at least mentioned it during WWDC instead of trying to sneak it in quietly, similar to what they did with performance throttling on older iPhones a few years ago. They lacked transparency, and that is not on, IMO. However, if somebody were really going after you with the intent to ruin your reputation, this would be quite a complex way to accomplish it, as they would need to fool both the AI (the machines) and the humans reviewing the AI's findings.

Have you watched Craig’s interview with the WSJ? It’s quite informative on the topic:

 
Reactions: dk001 and keeper

eltoslightfoot

macrumors 68030
Feb 25, 2011
2,564
3,126
I do not think you understand how Apple's proposed system would function. For it to be triggered, several things need to happen:

1) A CSAM photo or video needs to be pre-marked as such and entered into the database of hashes that Apple then checks against. So unless your supposed midget girlfriend's photo was pre-marked as CSAM material, you are fine. The same goes for you hugging or kissing any child. Apple's system does not evaluate your material; it only compares it against the existing database of already-identified CSAM.

2) You need to have at least 30 matched images/videos to trigger a review by Apple, where a human will look at the flagged material and make a decision.

3) Only if the previous two conditions are met will the authorities be notified.

I am not defending Apple here; I think that if they really feel strongly about it, they should have at least mentioned it during WWDC instead of trying to sneak it in quietly, similar to what they did with performance throttling on older iPhones a few years ago. They lacked transparency, and that is not on, IMO. However, if somebody were really going after you with the intent to ruin your reputation, this would be quite a complex way to accomplish it, as they would need to fool both the AI (the machines) and the humans reviewing the AI's findings.

Have you watched Craig’s interview with the WSJ? It’s quite informative on the topic:

People keep thinking the problem is we don't understand the process. Trust me, we do. We just don't believe Apple is telling us the whole story.
 

JBGoode

macrumors 65816
Jun 16, 2018
1,360
1,922
People keep thinking the problem is we don't understand the process. Trust me, we do. We just don't believe Apple is telling us the whole story.
That guy clearly did not understand it if he was worried about pictures of his hypothetical midget girlfriend triggering a raid.
 
Reactions: dk001

one more

macrumors 603
Aug 6, 2015
5,159
6,577
Earth
People keep thinking the problem is we don't understand the process. Trust me, we do. We just don't believe Apple is telling us the whole story.

OK, but then we are entering a much broader field of mistrust. Until now, has Apple given you a good reason not to trust them? And if so, why are you still using their products?
 
Reactions: dk001

eltoslightfoot

macrumors 68030
Feb 25, 2011
2,564
3,126
OK, but then we are entering a much broader field of mistrust. Until now, has Apple given you a good reason not to trust them? And if so, why are you still using their products?
Not really... but when you look back, the signs were there. I literally chose them because they offered the best privacy and, I believed, would never do something like install scanning software on my iPhone/iPad/MacBook without my permission.
 

eltoslightfoot

macrumors 68030
Feb 25, 2011
2,564
3,126
That guy clearly did not understand it if he was worried about pictures of his hypothetical midget girlfriend triggering a raid.
The thing is, as ridiculous as that may sound, it is all completely unnecessary. Don't scan my device, and there are no worries.
 
Reactions: dk001

dk001

macrumors demi-god
Oct 3, 2014
11,142
15,496
Sage, Lightning, and Mountains
I do not think you understand how Apple's proposed system would function. For it to be triggered, several things need to happen:

1) A CSAM photo or video needs to be pre-marked as such and entered into the database of hashes that Apple then checks against. So unless your supposed midget girlfriend's photo was pre-marked as CSAM material, you are fine. The same goes for you hugging or kissing any child. Apple's system does not evaluate your material; it only compares it against the existing database of already-identified CSAM.

2) You need to have at least 30 matched images/videos to trigger a review by Apple, where a human will look at the flagged material and make a decision.

3) Only if the previous two conditions are met will the authorities be notified.

I am not defending Apple here; I think that if they really feel strongly about it, they should have at least mentioned it during WWDC instead of trying to sneak it in quietly, similar to what they did with performance throttling on older iPhones a few years ago. They lacked transparency, and that is not on, IMO. However, if somebody were really going after you with the intent to ruin your reputation, this would be quite a complex way to accomplish it, as they would need to fool both the AI (the machines) and the humans reviewing the AI's findings.

Have you watched Craig’s interview with the WSJ? It’s quite informative on the topic:


You, like so many others, are focused on "CSAM" and not looking at the potential of the tool Apple is introducing, nor at the impact such a tool could have if misused.

Sadly, Fed's "explanation" has a lot of gaps and gloss.
 
Last edited:

dk001

macrumors demi-god
Oct 3, 2014
11,142
15,496
Sage, Lightning, and Mountains
Is it that you don't trust them? Who is the "we"? You don't represent me.

Thank goodness for that. You should be able to make your own decisions.
A lot of users, privacy experts, security experts, academics, legal experts, surprisingly the ACLU, and even some of the NCMEC board members are concerned.

This change needs a whole lot more review and discussion, including from experts outside of the Apple universe. Hopefully that is what Apple is going to do.
 

one more

macrumors 603
Aug 6, 2015
5,159
6,577
Earth
You, like so many others, are focused on "CSAM" and not looking at the potential of the tool Apple is introducing, nor at the impact such a tool could have if misused.

Sadly, Fed's "explanation" has a lot of gaps and gloss.

I get it, but why so much mistrust? I mean, a simple hammer or a pair of scissors can cause a lot of harm if misused. Craig's interview did indeed sound somewhat apologetic to me, and the journalist could also have asked him some more interesting questions, starting with why Apple could not scan the iCloud contents to begin with, instead of choosing to do it on the devices themselves. It would still serve their intended purpose, but would probably meet less resistance. 🤷🏻‍♂️
 

keeper

macrumors 6502a
Apr 23, 2008
520
303
I get it, but why so much mistrust? I mean, a simple hammer or a pair of scissors can cause a lot of harm if misused. Craig's interview did indeed sound somewhat apologetic to me, and the journalist could also have asked him some more interesting questions, starting with why Apple could not scan the iCloud contents to begin with, instead of choosing to do it on the devices themselves. It would still serve their intended purpose, but would probably meet less resistance. 🤷🏻‍♂️
Paranoia
 

hans1972

Suspended
Apr 5, 2010
3,759
3,399
Stratechery:


It highlights the difference between trust in technology and trust in policy.

He seems to think it's bad to trust policy because it's easier to change. Well, it isn't difficult to change code either.

Personally, I came to the conclusion several years ago that you can't have good privacy without good policy. Trusting technology alone won't work. Therefore I only choose companies that I trust and that have good policies.
 

hans1972

Suspended
Apr 5, 2010
3,759
3,399
You like so many others are focused on “CSAM” and not looking at the potential of the tool Apple is introducing nor the impact such a tool could have if misused.

The CSAM Detection system would be an inefficient tool for powerful governments to misuse. There are many other technologies in the iPhone already which are much better suited to surveillance of the population or to finding people with unpopular opinions.

If I were living in a country where I considered the government my enemy, the CSAM Detection system would be quite low on my list of worries.

Also, I live in a country without an oppressive government. In fact, I trust all three branches of government for the most part, including the police. I trust my phone company, my bank, my insurance company, my neighbours and even most strangers.
 

dk001

macrumors demi-god
Oct 3, 2014
11,142
15,496
Sage, Lightning, and Mountains
I get it, but why so much mistrust? I mean, a simple hammer or a pair of scissors can cause a lot of harm if misused. Craig's interview did indeed sound somewhat apologetic to me, and the journalist could also have asked him some more interesting questions, starting with why Apple could not scan the iCloud contents to begin with, instead of choosing to do it on the devices themselves. It would still serve their intended purpose, but would probably meet less resistance. 🤷🏻‍♂️

It isn't so much CSAM, rather the introduction of on-device scanning. CSAM seems to be the roll-out flavor chosen to garner public acceptance. With very small changes, unknown to the device user, this tool could be used to scan for a whole lot more. An on-device surveillance state, with Apple as the leader? That is a far, far cry from what they appeared to be until now. Trust is broken easily and is very hard to rebuild.

Despite the rhetoric, the on-device scan that Apple is attempting does little to clean up the existing iCloud CSAM problem. It's kind of like filtering toilet water: all the previous crap is still there, untouched.
 

dk001

macrumors demi-god
Oct 3, 2014
11,142
15,496
Sage, Lightning, and Mountains
The CSAM Detection system would be an inefficient tool for powerful governments to misuse. There are many other technologies in the iPhone already which are much better suited to surveillance of the population or to finding people with unpopular opinions.

If I were living in a country where I considered the government my enemy, the CSAM Detection system would be quite low on my list of worries.

Also, I live in a country without an oppressive government. In fact, I trust all three branches of government for the most part, including the police. I trust my phone company, my bank, my insurance company, my neighbours and even most strangers.

Can you name a few?
Cool that you can. Sadly, in mine, that trust has been seriously eroded over the last couple of decades.
 

AJACs3

macrumors member
Jan 6, 2014
90
117
It isn't so much CSAM, rather the introduction of on-device scanning. CSAM seems to be the roll-out flavor chosen to garner public acceptance. With very small changes, unknown to the device user, this tool could be used to scan for a whole lot more. An on-device surveillance state, with Apple as the leader? That is a far, far cry from what they appeared to be until now. Trust is broken easily and is very hard to rebuild.

Despite the rhetoric, the on-device scan that Apple is attempting does little to clean up the existing iCloud CSAM problem. It's kind of like filtering toilet water: all the previous crap is still there, untouched.
On-device scanning has existed since the beginning. Your email, your text messages, the contents of your files, faces in your photos… that's how searching for things on the phone works: everything you might want to find is scanned, indexed, and counted. So I'm not sure how you would have a useful computer if it didn't scan and index your information.
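
As a toy illustration of what "scanned, indexed, and counted" means for local search (this has nothing to do with how Spotlight or Photos actually implement it, and the types and names are invented for the example), here is a minimal Swift sketch:

```swift
import Foundation

// Toy illustration of on-device indexing. Nothing like Spotlight's or
// Photos' real implementations, just the general idea that local search
// requires reading your own content and building an index from it.
struct ToyIndex {
    // Maps a lowercased word to the identifiers of items containing it.
    private var index: [String: Set<String>] = [:]

    // "Scan" one item (an email, a note, a file name, ...) and index its words.
    mutating func add(itemID: String, text: String) {
        for word in text.lowercased().split(separator: " ") {
            index[String(word), default: []].insert(itemID)
        }
    }

    // Local search is just a lookup in the index built from your content.
    func search(_ query: String) -> Set<String> {
        index[query.lowercased()] ?? []
    }
}

var idx = ToyIndex()
idx.add(itemID: "note-1", text: "Dinner with Ana on Friday")
idx.add(itemID: "mail-7", text: "Friday meeting moved to 3pm")
print(idx.search("friday")) // ["note-1", "mail-7"] (order not guaranteed)
```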
 